forum_id stringlengths 9-20 | forum_title stringlengths 3-179 | forum_authors sequencelengths 0-82 | forum_abstract stringlengths 1-3.52k | forum_keywords sequencelengths 1-29 | forum_decision stringclasses 22 values | forum_pdf_url stringlengths 39-50 | forum_url stringlengths 41-52 | venue stringclasses 46 values | year stringdate 2013-01-01 00:00:00-2025-01-01 00:00:00 | reviews sequence |
---|---|---|---|---|---|---|---|---|---|---|
BJcib5mFe | Delving into adversarial attacks on deep policies | [
"Jernej Kos",
"Dawn Song"
] | Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results on training agent policies directly on raw inputs such as image pixels. In this paper we present a novel study into adversarial attacks on deep reinforcement learning policies. We compare the effectiveness of the attacks using adversarial examples vs. random noise. We present a novel method for reducing the number of times adversarial examples need to be injected for a successful attack, based on the value function. We further explore how re-training on random noise and FGSM perturbations affects the resilience against adversarial examples. | [
"adversarial attacks",
"adversarial examples",
"deep policies",
"variety",
"deep learning architectures",
"deep reinforcement learning",
"promising results",
"agent policies",
"raw inputs"
] | https://openreview.net/pdf?id=BJcib5mFe | https://openreview.net/forum?id=BJcib5mFe | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"rJ10qgmjg",
"SJbXuFajx",
"SyVa-27se",
"ryAsJK1sl"
],
"note_type": [
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1489337030590,
1490028568686,
1489383868365,
1489108902383
],
"note_signatures": [
[
"~Jernej_Kos1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper46/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper46/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Reply\", \"comment\": \"Thank you for your review!\\n\\nWe would like to point out that the paper by Huang et al. is not from 2016, but was posted on arXiv the 8th of February 2017 and we noticed it just slightly before submitting to ICLR on 16th of February. Their paper has also been submitted to ICLR 2017 workshop track (https://openreview.net/forum?id=ryvlRyBKl). So both works are concurrent and independent and we've updated the introduction to make this more clear.\\n\\nAlso, as far as the analysis of the experimental results goes, the page limit unfortunately prevented us from including any more substantial analysis.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Interesting investigation in efficiency of adversarial attacks on a DRL agent\", \"rating\": \"7: Good paper, accept\", \"review\": \"This work tests the robustness of DRL policies to adversarial examples. It also proposes a new method of reducing the number of adversarial 'injections' needed to disrupt the policy.\\n\\nMost of the results seem expected and in-line with the existing results on FGSM adversarial examples. For example, it is well known that FGSM adversarial examples have a much larger effect on image classifiers than examples with random noise added. However, using the value function to decide when to inject the perturbation for the highest effectiveness seems like a nice contribution and paves the way for most advanced injection timing algorithms. Overall, this seems like a nice contribution for the workshop.\", \"pros\": [\"Nice idea for timing adversarial injections.\", \"Good experiments and plots.\"], \"cons\": [\"It would be interesting to see if the VF method could result in much fewer injections compared to the attack in Fig. 1b.\", \"Other results seem expected and not particularly novel.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper continues on the work of Huang(2016), investigating further possible adversarial attacks on deep reinforcement learning policies. It also tests the performance of adversarial training on the robustification of the learning. Different from in the standard supervised learning setting, this paper, to my best knowledge for the first time in the literature, also looks into the question when to attack in the sequential learning setting. In general, the results of the paper are as expected, similar to the ones in the supervised learning setting.\\n\\nThe questions that are investigated in the paper are important, but also to be expected in the sense that parallel questions have been investigated in the supervised learning setting. One significant contribution of the paper is the investigation of the \\u2018low frequency\\u2019 adversarial attacks that is unique in sequential learning. The paper is well written and easy to follow.\", \"pros\": \"1. The highlight of the paper is the investigation of the \\u2018low frequency\\u2019 adversarial attacks\\n2. Although in general the questions and the results of this paper are as expected, these questions are important for further investigations.\", \"cons\": \"1. This paper seems to be lack of novelty. Experimental results are reported with no insightful analyses.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
r1PyAP4Yl | Neural Clustering: Concatenating Layers for Better Projections | [
"Sean Saito",
"Robby T. Tan"
] | Effective clustering can be achieved by mapping the input to an embedded space rather than clustering on the raw data itself. However, there is limited focus on transformation methods that improve clustering accuracies. In this paper, we introduce Neural Clustering, a simple yet effective unsupervised model to project data onto an embedded space where intermediate layers of a deep autoencoder are concatenated to generate high-dimensional representations. Optimization of the autoencoder via reconstruction error allows the layers in the network to learn semantic representations of different classes of data. Our experimental results yield significant improvements on other models and a robustness across different kinds of datasets. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1PyAP4Yl | https://openreview.net/forum?id=r1PyAP4Yl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"S14NZJC9g",
"ByLyBvRqg",
"H1XN_FToe",
"Bkhbpkdte",
"SkuaTA6cl",
"BkvBdD0ce",
"B1bDrZqtx",
"r1WW0iesg",
"r1q0OKtKg"
],
"note_type": [
"comment",
"official_review",
"comment",
"comment",
"official_comment",
"comment",
"comment",
"official_review",
"comment"
],
"note_created": [
1489002796265,
1489036509788,
1490028587025,
1487564036314,
1489001919670,
1489037375554,
1487701336607,
1489186297090,
1487669458064
],
"note_signatures": [
[
"~Pierre-Alexandre_Mattei1"
],
[
"ICLR.cc/2017/workshop/paper81/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Jianwei_Yang1"
],
[
"ICLR.cc/2017/workshop/paper81/AnonReviewer2"
],
[
"~Sean_Saito1"
],
[
"~Jianwei_Yang1"
],
[
"ICLR.cc/2017/workshop/paper81/AnonReviewer2"
],
[
"~Sean_Saito1"
]
],
"structured_content_str": [
"{\"title\": \"Convolutional autoencoder\", \"comment\": \"Hi, congrats for this thought-provoking work !\\n\\nWhat kind of architecture did you use regarding the deep convolutional autoencoder ?\\n\\nWe also submitted a \\\"deep clustering paper\\\" to this workshop, \\\"Deep Adversarial Gaussian Mixture Auto-Encoder for Clustering\\\". Another interesting paper on the subject is\\n\\n\\\"Variational Deep Embedding: A Generative Approach to Clustering\\\", Jiang et al. (arxiv.org/abs/1611.05148)\"}",
"{\"title\": \"more details on the experimentation required\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The idea of using concatenation of layer outputs as input to deeper layers was introduced in supervised learning context in works like Highway Networks. They show that combination of representations from different layers can lead to better discriminative features resulting in better classification performance.\\n\\nIn this work a similar approach is used in the context of unsupervised clustering. A deep convolutional autoencoder is trained on the data and combination of multiple layer representations is used for clustering. The approach seems quite intuitive and straight forward. Regarding the experiments, I have following questions,\\n\\nassuming that the clustering accuracies is classification performance,\\n\\nIs the classifier in every experiment a k-NN?\\nHow is k chosen for k-NN and k-means experiments?\\nWhat is the performance when k-NN is applied on individual layer representations from convolutional autoencoder rather than on combination?\\n\\nAlso there are previous works which in detail talk of autoencoder methods as clustering approaches. for example,\", \"http\": \"//www.jmlr.org/proceedings/papers/v27/baldi12a/baldi12a.pdf\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Some comments, and comparison with one more relevant work\", \"comment\": \"Hi, I read your paper and found the proposed method is straightforward and interesting. I did not expect straightly learn auto-encoder would achieve such a good results, even on CIFAR-10.\\n\\nIn the paper, you mentioned: \\n\\n1) Our method does not depend on any heuristics such as the number of desired centroids or the target distribution of the embedded space. \\n\\n2) Note our method outperforms all other models and produces state-of-the-art results.\\n\\nHowever, I think it is not that accurate. \\n\\n1) Firstly, though you do not use the number of desired centroids during the training, you will use it during testing. So it is not totally independent on the cluster number, which differs from those methods that also predict the cluster numbers. As a result, I think it is more accurate to say \\\"does not depend on the number of desired centroids during training\\\". \\n\\n2) Secondly, though DEC used a target distribution of the embedded space as prior, there are some other works in the same line but without the usage of prior distributions, such as our paper \\\"Joint Unsupervised Learning of Deep Representations and Image Clusters\\\", CVPR 2016. In our paper, we simultaneously learn the image representation and cluster the images from scratch, and achieved very nice results.\\n\\n3) Thirdly, our method JULE achieved comparable clustering accuracy after applying K-means on the learnt representations on MNIST, and also significantly better performance on many other image datasets. I think comparing with our method on more image datasets will make the paper more comprehensive and persuasive.\", \"the_code_for_our_paper_is_available_on\": \"https://github.com/jwyang/JULE-Torch\\n\\nthanks,\\nJianwei Yang\"}",
"{\"title\": \"Follow-up\", \"comment\": [\"I have the same questions as Jianwei, I hope you can address these issues since it will constitute a major part of my review.\", \"Clustering and classification seem to be used almost interchangeably throughout the paper. Can you clarify these concepts? What do you formally mean when you say clustering accuracy? How is it different than classification accuracy?\", \"What is k-means in Table 1? What is dec?\", \"If you are passing your concatenated representations directly to k-NN which is a classification method, where do you achieve clustering?\", \"Thanks.\"]}",
"{\"title\": \"Re: Follow Up\", \"comment\": \"Hi Jianwei & Reviewer,\\n\\nApologies for the late reply and thank you for the feedback/review.\\n\\nYes, I've somehow confused classification accuracy for clustering accuracy. k-NN is used for the former, and I would use NMI for the latter.\\n\\n>>> - What is k-means in Table 1? What is dec?\\n\\nFor K-Means I set k= # of classes. DEC is Deep Embedded Clustering, taken from this paper https://arxiv.org/abs/1511.06335.\\n\\n>>>- If you are passing your concatenated representations directly to k-NN which is a classification method, where do you achieve clustering?\\n\\nYes, the concatenated representations are used directly for classification. I will upload a revised version that hopefully clarifies the confusion.\\n\\nThank you.\\n\\nSean\"}",
"{\"title\": \"Re: Re: Some comments, and comparison with one more relevant work\", \"comment\": \"Hi, Sean,\\n\\nThanks for your prompt reply. \\n\\n1) In the testing stage, we use a nearest-neighbor classifier. Hence I do not know the number of clusters even after training. Perhaps an agglomerative clustering algorithm could find the optimal number of clusters. What's interesting is that this method does not directly optimize clustering performance, yet can generate comparable results. Something I'm looking forward to exploring further.\\n\\n>> The 'Clustering Accuracy' is a bit confusing to me. For me, it means the clustering accuracy metric as used in previous literatures, by comparing the clustering label and the ground-truth label. To get the clustering labels, you need to know the number of clusters, right? In your case, it seems that you are using KNN as the classifier and evaluate the learnt representation on test set and report the classification accuracy? However, the confusion comes. In Table-1, both kmeans and K-NN results are reported. But the former one is a clustering method and the second one is actually a classification method. How to compare them in the same line?\\n\\n\\n2) & 3) Thank you for the suggestion. I will read your paper.\\n\\n>> Thanks, if you want to find the number of clustering accuracies, please refer to Table 10 and 11 in the paper. If you want to find the 1-NN classification accuracy based on the learnt representation, please refer to Table 13\\n\\nJianwei\"}",
"{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": [\"Even though I enjoy the main idea of concatenating multiple layer information into an embedding for the purpose of clustering, I think this papers needs more work before acceptance.\", \"As I pointed out earlier, the presentation of the paper is a bit confusing.\", \"Some of the experiments and some of the follow-up discussion does not seem faithful to the task of clustering. Directly passing concatenated embeddings to kNN might be used to measure the quality of those with respect to classification, but this skips clustering step entirely. Instead of this, one can use clustering performance directly (using appropriate metrics) or by clustering first into K clusters and then using cluster assignments as the instance representation used during classification.\", \"Following the point above, when you say you don't need to know the number of clusters beforehand because you pass embeddings directly to the classification method, I don't follow. What constitutes a cluster in this instance?\", \"I was also curious about how it would perform when you have only the topmost encoding layer (or an arbitrary one) compared to using all, since you claim in Section 2.2. this way yields more complex representations?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Re: Some comments, and comparison with one more relevant work\", \"comment\": \"Hi Jianwei,\\n\\nThank you for the comments. Yes, the method is straightforward, and I am also quite surprised how effective it is across different datasets.\\n\\n1) In the testing stage, we use a nearest-neighbor classifier. Hence I do not know the number of clusters even after training. Perhaps an agglomerative clustering algorithm could find the optimal number of clusters. What's interesting is that this method does not directly optimize clustering performance, yet can generate comparable results. Something I'm looking forward to exploring further.\\n\\n2) & 3) Thank you for the suggestion. I will read your paper.\\n\\nThis is my first attempt at academic research, so I really appreciate the feedback. Thank you!\\n\\nBest,\\nSean\"}"
]
} |
BJWGMK7tx | Learning a Metric for Relational Data | [
"Jiajun Pan",
"Hoel Le Capitaine",
"Philippe Leray"
] | The vast majority of metric learning approaches are dedicated to be applied on data described by feature vectors, with some notable exceptions such as times series and trees or graphs. The objective of this paper is to propose metric learning algorithms that consider (multi)-relational data. The proposed approach takes benefit from both the topological structure of the data and supervised labels. | [
"metric",
"data",
"relational data",
"vast majority",
"metric learning approaches",
"feature vectors",
"notable exceptions",
"times series",
"trees",
"graphs"
] | https://openreview.net/pdf?id=BJWGMK7tx | https://openreview.net/forum?id=BJWGMK7tx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"H1JQOYpoe",
"BJlvrYC9g",
"SklgZ5Lje",
"HJGbWdQje",
"BykxHDj5l"
],
"note_type": [
"comment",
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028567032,
1489044823950,
1489572072367,
1489367289999,
1488839910867
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"~Hoel_Le_Capitaine1"
],
[
"~Hoel_Le_Capitaine1"
],
[
"ICLR.cc/2017/workshop/paper44/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper44/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"reply to the review\", \"comment\": \"We thank the referee for the valuable comments.\\n\\n\\n>The authors proposed a new metric learning algorithm for learning from relational data. They argued that most metric learning algorithms focusing on learning from data represented by feature vectors and the proposed algorithm handles relational data. However, probably due to the page limitation --- I feel that the authors do not explain it very well on why we should use the propose algorithm.\\n\\nIndeed, the page limitation does not allow to describe extensively the approach. Nonetheless, we removed some parts on the paper so that we hope the motivation and the reasons of introducing the algorithm are better described now than before.\\n\\n\\n\\n>First, the MovieLens dataset is a relatively popular dataset, so I think the readers would want to know the advantages of the proposed algorithm on this dataset compared to other algorithms. If there are no comparable algorithms in the proposed settings, then the authors should state the reasons of using the new settings. For example, list the new applications which can be enabled by the new settings. Finally, from Table 1 along, it is hard to tell if the empirical results of the proposed algorithm are strong or not. \\n>In short, I appreciate the fact that the authors want to propose a new method for metric learning. However, it is not clear what the main research question is in the paper. I recommend the author to improve the description and list the advantages of using the proposed algorithm in the next version of the paper.\\n\\n\\nWe are proposing a metric learning approach adapted to relational datasets. In particular, we propose a solution that is able to incorporate relational information within metric learning and illustrate the benefit of considering this information over traditional \\\"flat\\\" metric learning algorithms. \\nIn terms of metric learning, there are no equivalent approaches, to the best of our knowledge. \\nA new version of the paper, taking into account these comments have been uploaded. Note that we have other results, with other relational datasets (imdb+rotten tomatoes [1], book crossing [2]), that are not included due to page limitation.\\n\\nBest regards,\\n\\n[1] Cantador, I., Brusilovsky, P., & Kuflik, T. (2011, October). Second workshop on information heterogeneity and fusion in recommender systems (HetRec2011). In RecSys (pp. 387-388).\\n[2] Ziegler, C. N., McNee, S. M., Konstan, J. A., & Lausen, G. (2005, May). Improving recommendation lists through topic diversification. In Proceedings of the 14th international conference on World Wide Web (pp. 22-32). ACM.\"}",
"{\"title\": \"comments on review\", \"comment\": \"Thank you for the feedback on our paper.\\n>First, I am not surprised that both entity (features of movies) and association (user ratings) information do help in predicting the type of the movies. But there aren\\u2019t any comparisons to other approaches or even discussions about related work.\\nThis is the objective of the paper to show that this intuition is true. A number of approaches are considering whether the structural (links) information, or the feature information, but few on both. Such works are mostly focusing on graph clustering and community detection.\\nOne can find such work (for graphs, not multi-relational data) in e.g. [1], [2] and references therein. Due to page limitation, it was difficult to provide a discussion on this aspect, but we are aware that it would deserve a deeper description.\\nConsidering both structural and feature information of multi-relational data for metric learning has never been considered, to the best of our knowledge.\\n> Equation (1) only measures the link strength between two entities. For your Labels + Relations experiment, it is unclear how you put both constraints in Algorithm 1.\\n\\nAlgorithm 1 is only concerned with constraints obtained from the structure of the data. Label constraints are obtained exactly the same way as almost every metric learning algorithm, i.e. by considering that similar objects are objects having the same label (and dissimilar objects have different labels)\\n\\n\\n> I think it is also necessary to specify what features (attributes) you used exactly. How many of them are numerical and how many are categorical (otherwise why set gamma = 0.5)? Also how do you pick the #constraints (13680, 36594)? Need to clarify.\\n\\nThe link strength is computed with association information, in the case of MovieLens, there is only one numerical attribute, the value of the rating. \\nThe value of gamma is not changing the link strength ordering in this case.\\n\\nThe number of constraints was mainly constrained by the number of possible pairs. There is no rule of thumb in metric learning community for this, but a commonly used one is to set it as function of squared number of different labels (as in ITML).\\n\\n[1] Yang, J., McAuley, J., & Leskovec, J. (2013, December). Community detection in networks with node attributes. In Data Mining (ICDM), 2013 IEEE 13th international conference on (pp. 1151-1156). IEEE.\\n[2] Smith, L. M., Zhu, L., Lerman, K., & Percus, A. G. (2016). Partitioning Networks with Node Attributes by Compressing Information Flow. ACM Transactions on Knowledge Discovery from Data (TKDD), 11(2), 15.\"}",
"{\"title\": \"review\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper proposes to incorporate relational information in metric learning, and experiments on MovieLens show that using both label and relational information can outperform that using only one single component in a K-NN classification setting.\\n\\nI am not an expert in metric learning, but I find it really difficult to evaluate the effectiveness of the proposed approach given the results presented in the paper. I also feel that the paper writing is not clear enough and some details are missing.\\n\\nFirst, I am not surprised that both entity (features of movies) and association (user ratings) information do help in predicting the type of the movies. But there aren\\u2019t any comparisons to other approaches or even discussions about related work.\\n\\nEquation (1) only measures the link strength between two entities. For your Labels + Relations experiment, it is unclear how you put both constraints in Algorithm 1.\\n\\nI think it is also necessary to specify what features (attributes) you used exactly. How many of them are numerical and how many are categorical (otherwise why set gamma = 0.5)? Also how do you pick the #constraints (13680, 36594)? Need to clarify.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The authors proposed a new metric learning algorithm for learning from relational data. They argued that most metric learning algorithms focusing on learning from data represented by feature vectors and the proposed algorithm handles relational data. However, probably due to the page limitation --- I feel that the authors do not explain it very well on why we should use the propose algorithm.\\n\\nFirst, the MovieLens dataset is a relatively popular dataset, so I think the readers would want to know the advantages of the proposed algorithm on this dataset compared to other algorithms. If there are no comparable algorithms in the proposed settings, then the authors should state the reasons of using the new settings. For example, list the new applications which can be enabled by the new settings. Finally, from Table 1 along, it is hard to tell if the empirical results of the proposed algorithm are strong or not. \\n\\nIn short, I appreciate the fact that the authors want to propose a new method for metric learning. However, it is not clear what the main research question is in the paper. I recommend the author to improve the description and list the advantages of using the proposed algorithm in the next version of the paper.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
rJyt71xvl | Summarized Behavioral Prediction | [
"Shih-Chieh Su"
] | In this work, we study the topical behavior in a large scale. Both the temporal and the spatial relationships of the behavior are explored with the deep learning architectures combining the recurrent neural network (RNN) and the convolutional neural network (CNN). To make the behavioral data appropriate for the spatial learning in the CNN, several reduction steps are taken in forming the topical metrics and placing them homogeneously like pixels in the images. The experimental result shows both temporal and spatial gains when compared against a multilayer perceptron (MLP) network. A new learning framework called the spatially connected convolutional networks (SCCN) is introduced to better predict the behavior. | [
"Deep learning",
"Supervised Learning",
"Applications",
"Structured prediction"
] | https://openreview.net/pdf?id=rJyt71xvl | https://openreview.net/forum?id=rJyt71xvl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"S1HiPqxog",
"BkT939Eix",
"SynZuFaix",
"ryv9tWPie"
],
"note_type": [
"official_review",
"comment",
"comment",
"official_review"
],
"note_created": [
1489180573324,
1489443988641,
1490028547822,
1489602959520
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper1/AnonReviewer1"
],
[
"~Shih-Chieh_Su1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper1/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"# Summary\\nThis paper proposes a way to represent behavior data as 2D map and build a CNN to exploit the local information (relationship between similar topics). The experimental results show that CNN on top of the 2D representation of the topics performs better than MLPs. Although the proposed idea is interesting, the experimental result is not very convincing enough to support the main hypothesis of the paper (see below).\\n\\n# Novelty\\nThe proposed idea to represent topics into 2D map is new (to my knowledge).\\n\\n# Clarity\\nThe paper is well-written.\\n\\n# Quality\\nThe experimental result is not much convincing due to the following. 1) SCCN performs worse than LRCN which contradicts the paper's assumption that position-specific relationship is more theoretically suitable for topical metrics. 2) I think the paper should have shown learning curves on the training data. Since TDRN can be viewed as an instance of SCCN/LRCN with large convolution filters, it should be able to perform as well as SCCN/LRCN on the \\\"training data\\\" with a large number of hidden units. But, it may be prone to overfitting due to large number of parameters if the paper's hypothesis is correct. However, the validation/testing performance of TDRN keeps improving over epochs in the paper, which means that TDRN is underfitting and the size of TDRN is too small. In order to show that SCCN/LRCN are better than TDRN, the paper should have shown training curves and showed that TDRN is overfitting. \\n\\n# Significance\\nEven if the main result of the paper is correct, I am not sure that representing this specific behavior data is interesting and significant enough to be presented in the workshop. However, I am willing to withdraw this opinion if other reviewers think this is an important problem/data. \\n\\n# Pros\\n- The proposed method to represent behavior data in a 2D map is new.\\n\\n# Cons\\n- The empirical result is not convincing.\\n- The data and problem discussed in the paper may not be significant or interesting to the community.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Revised according to the reviewer's feedback\", \"comment\": \"Thanks very much for the feedback.\", \"i_have_revised_and_uploaded_a_newer_version_with\": \"1. Replaced the validation curve with the training curve.\\n(The validation curve is very similar to the testing curve)\\n2. Removed the assumption \\\"position-specific relationship is more theoretically suitable for topical metric\\\".\\n3. Instead, focus on the regulation capability of SCCN, stating \\\"The regulation is more effective on the locally customized patch dictionaries in the LCN, compared to that on a global dictionary in the CNN.\\\"\\n4. Minor typos in the charts.\\n\\nAppreciate any further comments.\\nThanks.\\n\\nShih-Chieh\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Review\", \"rating\": \"3: Clear rejection\", \"review\": [\"Clarity: Could be much improved, although some of it may be due to my unfamiliarity with behavior studies.\", \"What is the dataset used, what domain? What does the raw data represent, i.e. what are entries what are entities? There is mention of \\\"predict the response\\\", \\\"organize activities into topics\\\", and at end of Sec3 mention of data set size. Quite unclear what the problem is this paper is trying to solve.\", \"Is the goal to predict future topical volume based on past topical volume? Or predict from past topical volume + neighbor current topical volume?\", \"Fig1: is this for all entities, all time? (d) is for one entity, single timepoint? What is the color in Fig1(d) representing? To my best guess, in Fig1(c) neighbors now correspond to different topics, which are related across the dataset. But for a single entry, single timepoint you will either observe this map, or not. So how does this help for prediction?\", \"RLE definition: what is being summed over? $\\\\forall v \\\\in \\\\mathcal{V}$ -- both $v$ and $\\\\mathcal{V}$ are not defined, and not easily inferred from context either.\", \"\\\"topical behavior\\\" in first sentence of abstract, is not defined and I can't find it with google. I assume this refers to the fact some topic modeling is done on the behavioral data?\", \"In general it seems this paper wants to be formulated broadly to be applicable to many specific instances of behavioral data, but sacrifices clarity and specificity which makes the paper almost unintelligeble.\", \"Significance\", \"CON: It seems this is a proprietary unnamed dataset no-one else has worked on?\", \"CON: Results with CNN or LCN are only marginally better than pure LSTM, with lot of noise on test error.\", \"PRO: If this work is relevant for the broad range of applications named in the first paragraph this would be quite significant. Not quite convinced if that is the case though.\", \"Originality, Novelty, Quality\", \"This idea to map behavioral data through topic modeling and mapping to a regular 2D Grid is probably novel.\", \"At the other hand, it lacks motivation or arguments why this is even a sensible thing to do in the first place. With dimensionality reduction and mapping to a grid, most information about the original datapoints will probably be lost.\"], \"in_summary\": \"this paper would need to be thoroughly rewritten to explain (1) what is the problem we're solving, (2) how is this problem approached, and (3) intuition into the quantities introduced and motivation of the proposed approach.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1gNakBFx | Audio Super-Resolution using Neural Networks | [
"Volodymyr Kuleshov",
"S. Zayd Enam",
"Stefano Ermon"
] | We propose a neural network-based technique for enhancing the quality of audio signals such as speech or music by transforming inputs encoded at low sampling rates into higher-quality signals with an increased resolution in the time domain. This amounts to generating the missing samples within the low-resolution signal in a process akin to image super-resolution. On standard speech and music datasets, this approach outperforms baselines at 2x, 4x, and 6x upscaling ratios. The method has practical applications in telephony, compression, and text-to-speech generation; it can also be used to improve the scalability of recently-proposed generative models of audio. | [
"audio",
"neural networks audio",
"neural networks",
"neural",
"technique",
"quality",
"audio signals",
"speech",
"music",
"inputs"
] | https://openreview.net/pdf?id=S1gNakBFx | https://openreview.net/forum?id=S1gNakBFx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"BJczsDFsg",
"ryRvsKbox",
"Sk5D8tiie",
"SyPAYAeil",
"ByzdjAOjg",
"rkv6eYUjl",
"S1wZ1IKAg",
"HJ_KJFUsl",
"SyR8_K6jg",
"By3h79Usg"
],
"note_type": [
"official_comment",
"official_review",
"comment",
"official_review",
"comment",
"comment",
"comment",
"comment",
"comment",
"official_comment"
],
"note_created": [
1489758993712,
1489242981830,
1489897057822,
1489197519199,
1489722217739,
1489567935081,
1492897535227,
1489567616111,
1490028629946,
1489572788410
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper146/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper146/AnonReviewer1"
],
[
"~Kyle_Kastner1"
],
[
"ICLR.cc/2017/workshop/paper146/AnonReviewer2"
],
[
"~Volodymyr_Kuleshov1"
],
[
"~Volodymyr_Kuleshov1"
],
[
"~Volodymyr_Kuleshov1"
],
[
"~Volodymyr_Kuleshov1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper146/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Score updated\", \"comment\": \"I would like to thanks a lot for their detailed response and for providing audio samples.\\n\\nI believe that the samples are actually good, although some artifacts can be heard for larger up-sampling scales. It is hard to judge without a solid baseline though. I think that the samples should come along some strong baselines, such an HMM-based system or some recently proposed method, such as the work by Li et al or for example,\\n\\nPeharz, R. et al. \\\"Modeling speech with sum-product networks: Application to bandwidth extension.\\\" ICASSP, 2014.\\n\\nThe paper would be much stronger with a detailed qualitative comparison with other methods. \\n\\nIn any case, the samples are good and I find this results interesting given the limitations of the loss function. Thus, I believe that it is interesting to have this paper in the workshop. I updated the score to 6.\"}",
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes an end-to-end method for audio super resolution. The proposed approach treats this problem as a regression task: predict the high resolution audio signal from the lower resolution one operating directly with raw audio samples.\", \"pros\": [\"The method is very simple and straight forward (if it works well, it would be a big plus)\", \"The paper is well written.\"], \"cons\": \"+ The method optimizes a measure that is not correlated with perceptual quality.\\n+ Current results seem to be competitive with previous approaches in terms of PSNR, but it is not clear the perceptual relevance\\n\\nIt would be very surprising to me that using an L2 loss in the time domain would lead to good perceptual results, particularly when having large upscaling factors, as there is a lot of uncertainty in the desired reconstruction. While some high frequencies can be predicted based on the statistics of the training data, some aspects of the signal are essentially unpredictable. For instance, predicting in time domain the samples of an unvoiced sound (which is essentially colored noise) would be impossible, thus, the best strategy (in terms of L2) would be to predict the mean. Because of this, I would expect a good reconstruction in terms of MSE would lead to smooth signals. The results shown in Figure 2, show that this is certainly not very dramatic, but ultimately qualitative evaluation is key.\\n\\nA similar issue is encountered when up-sampling images, where people have looked for better losses, see for instance (many more references look at the same issues):\\n\\nBruna, et al. \\\"Super-resolution with deep convolutional sufficient statistics.\\\" ICLR 2016.\\nJohnson, et al. \\\"Perceptual losses for real-time style transfer and super-resolution.\\\" ECCV, 2016.\\nLedig, Christian, et al. 
\\\"Photo-realistic single image super-resolution using a generative adversarial network.\\\" arXiv preprint arXiv:1609.04802 (2016).\\n\\nI am not surprised that the model gives good results in terms of SNR, as this is the loss being optimized for. However, it is well known that SNR is not linked to good perceptual quality. It would be interesting to provide PESQ scores for the speech samples, as well as audio examples of the enhanced samples (if not a perceptual evaluation). \\n\\nHow does the proposed model compare with Li et al in terms of computational complexity? Meaning, operating in raw audio should be much more demanding than in higher level features.\\n\\nPlease provide more details regarding the architecture. I understand that the up-sampled signal could be a good input to the network, but it is a bit counter intuitive to me to up-sample the audio signal to then use downsampling modules. Why not to just start from the original input?\\n\\nThe authors mention that this technique would help speed up methods that work directly in the audio domain (I assume that by producing compressed samples and then expanding them with the proposed method). For example in WaveNet type of models, it could be a good idea to replace the L2 norm with the likelihood function given by the learned model. That would be in my a opinion a more sensible loss function. \\n\\nI am open to reconsider my review based on the the authors response and, in particular, if they provide audio samples.\\n\\nSCORE UPDATED FROM 4 to 6.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Some questions on the experimental setup\", \"comment\": \"Thanks for updating the paper, and providing some samples for the method. We have been looking forward to this paper, and after analyzing the samples we have a few questions with regard to the experimental procedure.\\n\\nIn the paper it is mentioned \\\"We normalize all files to 16,000 Hz and generate high-resolution patches of length 6000.\\\". This means that the high resolution samplerate is always 16kHz, with 2, 4, 6, and 8x tasks corresponding to 8, 4, 2.666, and 2kHz downsampled input, correct? Along with Nyquist frequencies of 4kHz , 2kHz, 1.333kHz, and 1kHz?\\n\\nThis would also mean each training sample corresponds to .375 seconds. Is any special care taken to avoid training and/or evaluating on silent or near silent frames, or is the full dataset used to build the training and test sets?\\n\\nAre all of these samples from the test set in each case?\\n\\nSome of the sample files in the samples github repo (such as samples/sp1/{4,6,8}) are all at the same sample rate (16k), while others (such as samples/msp/4) have the low-resolution (.lr.wav) at 4k sample rate, as might expected, while still others (such as samples/msp/2) have the low resolution at 32k sample rate. Can you clarify a bit how the low-resolution base was generated - tools, filters and so on?\\n\\nIn particular, we notice that samples/sp1/4/* seems to be processed significantly differently than sp1/{6, 8} - see the plots here ( https://github.com/kastnerkyle/analysis_of_audio_superresolution_using_neural_nets/blob/master/sp1_4_plots/1_4_hr_lr.png vs https://github.com/kastnerkyle/analysis_of_audio_superresolution_using_neural_nets/blob/master/sp1_6_plots/1_6_hr_lr.png ). sp1 6 and 8 appear to have heavy aliasing, but also do not appear to be low-pass filtered like sp1/4. 
\\n\\nIn msp/2 the low sample rate sounds somewhat downsampled, but when plotted the spectrum stops at the same place as the hr (samples/msp/2/msp.*.lr.wav all have 32kHz sample rate, vs samples/msp/2/msp.*.hr.wav at 16kHz). See the plots here ( https://github.com/kastnerkyle/analysis_of_audio_superresolution_using_neural_nets/blob/master/msp_2_plots/3_2_hr_lr.png ), noting that lr has 2x the samplerate - this means that the \\\"upsampling\\\" would have full access to the same information for the msp 2x task, if these samples match the preprocessing procedures. \\n\\nWe also see heavy banding artifacts in some of the proposed method's reconstructions - see the plots here ( https://github.com/kastnerkyle/analysis_of_audio_superresolution_using_neural_nets/tree/master/banding_artifacts_plots ). These artifacts seem to show up in piano and msp, but not sp1 (though sp1 has some artifacts which appear to run the duration of the signal, still).\\n\\nCan the authors comment if they observed this as well, or have an explanation to why this particular strong banding artifact exists? It seems the reconstructions could greatly improve by eliminating this issue - especially in the speech case. The piano harmonics may well be relatively unaffected.\\n\\nWe also note that some of the reconstructions seem to be creating \\\"aliases\\\" across the band - see the inversion pattern on the right part of this plot ( https://github.com/kastnerkyle/analysis_of_audio_superresolution_using_neural_nets/blob/master/alias_plots/piano_1_4.png ). This may just be behavior of the superresolution model, but it is interesting.\\n\\nIn particular, signal reconstruction from heavily aliased signals (such as what sp1/6 and sp1/8 appear to be) would be useful, but different from what is normally considered superresolution. The issue with msp 2x seems troubling, but may be a result of the procedure to give samples in the repo. 
\\n\\nThis could also be an issue with our visualization or analysis (plotting script in Python is included in the repo), but given the differences we were hoping the authors could clarify how the downsampled signals were created, along with some of the other questions.\"}",
"{\"title\": \"significant flaws in model design and evaluation\", \"rating\": \"3: Clear rejection\", \"review\": [\"The paper proposes a neural network approach to audio upsampling, or equivalently, reconstructing high-frequency signal components from low-frequency components. Most of the paper is pretty clearly written, but it has some significant flaws in model design and evaluation.\", \"The introduction refers to Zhang et al. (2017) for speech recognition on raw audio, instead of much earlier work by e.g. Sainath et al., Jaitly et al. and Hoshen et al.\", \"The statement in section 2 that \\\"very little work has been done on audio signals\\\" is unnecessarily broad and ignores a large body of prior work on using neural networks for e.g. source separation.\", \"All citations seem to be in-text (using \\\\citet), but this is not appropriate in most cases.\", \"The definition of R in section 2 is wrong. It is supposed to represent the sample rate, but following this definition it actually represents the total number of samples (which is only equal to the sample rate in Hz if the audio signal is exactly one second long). This is confusing.\", \"The model described in section 3 / figure 1 has some unusual architectural properties that aren't really justified anywhere. The residual blocks only seem to contain a single ReLU activation layer, so that means the network has multiple linear layers directly following each other in many places, which is redundant. If this is not the case, the model should be described more clearly.\", \"Why use filter size 9? The motivation for using large filters is not explained. It would also be useful to discuss the use of dilated convolutions instead (and why the authors chose not to use them).\", \"The downsampling procedure (\\\"... produce corresponding low-resolution patches by recording every r-th position in j...\\\") is improper: an anti-aliasing low-pass filter needs to be applied before decimation. 
(Unless all input signals are expected to be obtained by downsampling high-resolution signals this way, but that would defeat the point.)\", \"The description of magnatagatune in section 4 is wrong: the 188 tags do not correspond to genres, and they are not mutually exclusive. The validation split used is also non-standard and random, making reproduction more difficult.\", \"Figure 2 is difficult to interpret. The spectrogram for the low-resolution signal seems to have been produced with an FFT that assumes the full samplerate, as it stands this shows the spectrogram of a signal that was sped up 4x, so it's unclear how this should actually be interpreted. Comparing the first and 3rd spectrogram, it looks like it features some spurious harmonics (e.g. around timestep 300). It would be useful to provide the corresponding audio samples somewhere.\", \"PSNR in the time domain is a poor error measure for this task, because it does not take into account human perception: logarithmic loudness perception, frequency masking, frequency sensitivity (Fletcher-Munson), phase insensitivity etc. It is difficult to find the right way to measure this quantitatively, but providing more than one measure would be much more informative.\", \"The paper is missing baselines. Even measurements for a few simple upsampling strategies (nearest-neighbour, bilinear, ...) would be very informative. As it stands the numbers are difficult to interpret.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"We are releasing more samples, as requested by the reviewer, and would be grateful if they could consider them in their review\", \"comment\": \"We have just added a dozen more samples to our project webpage:\", \"https\": \"//kuleshov.github.io/audio-super-res/\\n\\nThese include piano samples, as well as samples from Speaker 1 from a retrained model that gets rid of the slight background noise.\\n\\nThe main concern of Reviewer 1 was that the L2 loss cannot lead to good reconstructions, as it would overly smooth the signal. Our samples demonstrate that this is clearly not the case, and the L2 loss works very well.\\n\\nWe would like to ask the reviewer to please update their review in light of this additional material. Tonight appears to be the last date for doing this.\"}",
"{\"title\": \"We kindly ask AnonReviewer2 to review the latest version of manuscript, which we uploaded about 2-3 weeks ago.\", \"comment\": \"We thank the reviewer very much for their detailed and careful review.\\n\\nPlease note that we have uploaded an updated version of our paper about 2-3 weeks ago. It includes an updated architecture, a comparison to two baselines, new metrics, and many other improvements. We apologize for the confusion, and we would like to kindly ask the reviewer to evaluate the most current version of our paper.\", \"note_also_that_we_are_releasing_samples_from_our_method_here\": [\"https://kuleshov.github.io/audio-super-res/\", \"----\", \"Here is our response to the reviewer's comments on the first version of our paper:\", \"We included Zhang et al. (2017) to highlight that models over raw audio are an active area of research; we will certainly include the earlier work on raw audio as well.\", \"We apologize for the errors in the formatting of the citations; we have already corrected this bug in the current version.\", \"We thank the reviewer for catching this typo: the sentence should read \\\"R/T is the sampling rate of the signal\\\" (instead of \\\"R is the sampling rate\\u2026\\\")\", \"Note that we have updated our model architecture, hence this issue doesn't apply anymore. But we would like to point out that our residual model uses a popular version of a residual block that contains only one ReLU. See e.g. this blog post for a comparison of our residual block design to others (look at the third figure in particular): http://torch.ch/blog/2016/02/04/resnets.html\", \"See also the reference tensorflow implementation of a ResNet, which uses the same design us us (look at the 2 sub layer version): https://github.com/tensorflow/models/blob/master/resnet/resnet_model.py\", \"Also, note that we modeled our architecture based on the SRResNet of Ledig et al. 
(2016), which uses the same design for the residual block.\", \"Our choice of a length-9 filters comes from the Resnet architecture for images, which uses 3x3 filters. Smaller sizes did not work as well, and larger sizes did not add improvements. We did not try dilated convolutions, but we have found that strided convolutions improved performance, and we are using them the latest version of the paper.\", \"We applied a low-pass filter before subsampling the signal, and will make this clear in the final version. We also trained our method on low-resolution input without the low-pass filter, and the results were essentially identical. Interestingly, a method trained on filtered results seemed to introduce noise when it was run on non-filtered data (and vice versa).\", \"Indeed, the 188 tags are not mutually exclusive, and we will make this more clear. Note that we do not use these tags in our experiments. We did not realize at first that there is an official split, hence we created our own. Note that we focus on a different music dataset in the latest version of the manuscript.\", \"In the latest version of our paper, we use two objective metrics: SNR and log-distortion. We are happy to add any additional metrics as well.\", \"We are using two baselines in the most current version of our manuscript: cubic splines and the deep neural network method of Li and Lee (2015), which seems to be among the most deep learning approaches in the bandwidth extension literature.\"]}",
"{\"title\": \"Thanks for your feedback!\", \"comment\": \"Thank you so much for your interest in our paper.\\n\\nFirst, we're sorry for the inconsistencies in our samples. We generated them in several passes (and we were also in a bit of a rush), and so we inadvertently mixed together results from several experiments.\", \"now_on_to_your_questions\": \"Yes, the high resolution samplerate is always 16kHz, and the 2, 4, 6, and 8x tasks correspond to 8, 4, 2.666, and 2kHz input with Nyquist frequencies of 4kHz , 2kHz, 1.333kHz, and 1kHz.\\n\\nEach sample indeed corresponds to 0.375s. We used the full dataset and did not filter silent frames.\\n\\nAll of the samples are from the test set, except the very last one, where we demonstrate how the system can sometimes hallucinate sounds.\\n\\nAs we mentioned, the samples are somewhat inconsistent. We used two methods to generate the downscaled version:\\n\\nA. Low-pass filtering: x_lr = decimate(x, args.scale)\\nB. Naive sub-sampling: x_lr = scipy.decimate(x, args.scale)\\n\\nWe can train super-resolution algorithms that work in both regimes, as the long as the model is trained and tested with the same downsampling procedure.\\n\\nWe initially didn't pay much attention to the choice of downscaling technique, because it didn't affect very much our objective performance measures (SNR, PSD). But after listening more carefully to both types of samples, we realize that super-resolving low-pass filtered speech is more challenging, in the sense that all the methods (ours + baselines) don't recover as much higher frequencies as when the input is aliased (like the single-speaker samples that you mention). Interestingly, that doesn't seem to be the case for the piano samples, which we looked at first.\\n\\nWe have uploaded new single-speaker samples for which the input is not aliased.\\nRegarding the MSP-2 samples, we really don't know what happened. 
We are going to post new samples as soon as the model finishes retraining.\\n\\nThe banding artifacts are an issue that we are aware of. On some samples, the network creates artifacts that decompose into a sequence of bands at multiples of the same frequency, and are therefore clearly visible in the spectrogram. They are heard as a background buzz. Since we know precisely their frequencies, we added a post-processing heuristic that suppresses these bands. It doesn't affect sound quality but gets rid of some of the noise. This is a temporary hack until we figure out the true cause of the problem.\\n\\nThanks again for your interest, and let us know if you have any more questions!\"}",
"{\"title\": \"Samples from the method + Explanation for why L2 loss works in our case\", \"comment\": \"We thank the reviewer very much for their detailed and careful review.\\n\\nThe reviewer's main concern is that our loss function is not correlated with perceptual quality, and hence would not produce good results.\\n\\nFirst, we are releasing samples for our method and a cubic baseline (the DNN baseline will be added shortly). Our method achieves good reconstruction quality and outperforms the baseline.\", \"https\": \"//kuleshov.github.io/audio-super-res/\\n\\nWhy does our technique produce good quality samples? While we agree that the L2 loss is not perfectly correlated with audio quality, in practice, the correlation seems to be high enough to produce good results. Note also that most papers in the image super-resolution literature (including most papers published in the last year) report excellent results using the L2 loss, e.g.:\\n\\nDong et al., \\\"Image Super-Resolution Using Deep Convolutional Networks\\\" (2014)\\nKim et al., \\\"Accurate Image Super-Resolution Using Very Deep Convolutional Neural Networks\\\" (2016)\\nShi et al., \\\"Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network\\\" (2016)\\n\\nWhile the L2 loss may not allow us to recover unvoiced sounds, our receptive field (~0.25s) is large enough so that neighboring (voiced) phonemes may be used to recover these unvoiced sounds. Most importantly, many phonemes seem to retain enough low-frequency content to enable their recovery. \\n\\nConsider, for example, Sample 2 from the Single Speaker dataset. The \\\"K\\\" in \\\"RISK\\\" (the last word in the utterance) is mostly lost at r=6 and higher. Nonetheless, our method recovers the lost phoneme at r=6,8. See the very last sample on the page for an even more interesting example of this. 
Overall, the reconstructed samples still sound more \\\"dull\\\" than the original, lacking the full range of the true high frequencies. That is precisely due to the phenomenon described by the reviewer: the L2 loss \\\"smoothens\\\" the time-domain waveform, flattening some of the high frequencies. However, the phenomenon is not nearly as severe as described by the reviewer.\\n\\nWe tried to combine our method with perceptual losses based on features derived from randomly initialized neural networks, as well as GAN-based objectives. The perceptual losses did not significantly improve audio quality. The GANs added additional high frequencies, but also introduced many artifacts, which made the overall quality less pleasant than that of the L2-trained model. We plan to explore perceptual and adversarial losses for audio in follow-up work.\", \"these_are_our_responses_to_the_remaining_concerns_of_the_reviewer\": [\"In addition to SNR, our paper also reports log-distortion (LSD) values. We found that LSD correlates better with quality (e.g. the spline baseline obtains good SNR, but the output still sounds \\\"dull\\\", whereas LSD correlates much better with our perception of quality). We tried computing PESQ scores using the standard implementation from https://github.com/dennisguse/ITU-T_pesq, but found that it crashed on more than half of our samples. We will be very thankful if the reviewers can suggest better implementations or other important metrics besides SNR, LSD, and PESQ.\", \"Our implementation of Li et al. is 2-10x faster than that of our method, depending on the sample. Note that the network of Li et al. operates over the full Fourier representation of the down-sampled signal, hence the dimensionality of its input is not significantly smaller than that of our method (i.e. less than an order of magnitude). Both methods can perform inference faster than real-time, hence speed is not the main constrain for real-world deployment. 
Both methods could also be significantly optimized for speed.\", \"Our main reason for first up-sampling the signal is that it allows us to introduce a residual connection between the up-sampled input and the output of the model. This in turn makes training much faster (since the network effectively starts from the cubic interpolation), and this is very useful on larger datasets like VCTK.\"]}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"re-review\", \"comment\": [\"I was quite annoyed to learn that I spent time reviewing the wrong version of the paper. If the option is given to the authors to submit updated versions of their manuscripts after the submission deadline has passed, this should be communicated to the reviewers so they are always aware of the latest version. As it stands, I printed the manuscripts assigned to me when I received the assignment, and didn't check afterwards. Consider this an OpenReview feature request :)\", \"That said, I'm not sure I agree with the policy of letting authors upload significantly different versions of their papers after the deadline has passed. It would seem more fair to me to hold all authors to the same deadline, and only allow modifications after that deadline in exceptional circumstances. This would avoid situations like this. As it stands, papers that get assigned very eager reviewers who submit their reviews the next day are at a disadvantage, because the authors would have less time beyond the deadline to update their manuscripts.\", \"Anyway, none of this is the authors' fault of course. Many of my criticisms of the first version of the paper no longer apply, so I took a look at the new version. It looks like I am no longer able to update my rating for the original review, so I'll just include a new rating in this comment and I suppose it will be up to the conference organisers to decide what they do with that.\", \"The new version actually addresses my primary concerns: it includes a baseline and provides multiple evaluation metrics.\", \"Note that my remark about the definition of R does not apply to a single sentence. 
'R' seems to be consistently referred to as a \\\"sampling rate\\\" throughout the paper, so either the definition needs to be changed (which would make the most sense to me) or it needs to be replaced by R/T throughout the paper.\", \"I hadn't seen the single-ReLU residual block before, but it seems to be more widely used than I thought. I don't really understand the advantage of it over the \\\"full preactivation\\\" block defined in https://arxiv.org/abs/1603.05027 though (Figure 4e). This seems to be the standard way of doing things nowadays, and is arguably a better way to avoid having nonlinearities in the residual pathways of the network, because it doesn't cost you a nonlinearity and avoids stacking multiple linear layers on top of each other.\", \"My remark about the spectrogram figure seems to have been incorrect, I was confused because the frequency axis is horizontal and the time axis is vertical, which is nonstandard. To avoid this confusion, it might be a good idea to transpose these figures.\", \"From the authors' response, the subsampling procedure also seems to be correct after all. However, the new version of the paper still doesn't mention the low pass filtering and the 2nd paragraph of Section 2 still implies that a subset of the original signal samples is taken without filtering. This would need to be fixed.\", \"The description of Magnatagatune would still need to be fixed, and a random split is really unfortunate with reproducibility in mind, but I guess this is not a huge deal.\", \"Overall, the new version is a significant step up which I would rate 7: accept.\"]}"
]
} |
|
S1SED1MYe | Adversarial Examples for Semantic Image Segmentation | [
"Volker Fischer",
"Mummadi Chaithanya Kumar",
"Jan Hendrik Metzen",
"Thomas Brox"
] | Machine learning methods in general and Deep Neural Networks in particular have shown to be vulnerable to adversarial perturbations. So far this phenomenon has mainly been studied in the context of whole-image classification. In this contribution, we analyse how adversarial perturbations can affect the task of semantic segmentation. We show how existing adversarial attackers can be transferred to this task and that it is possible to create imperceptible adversarial perturbations
that lead a deep network to misclassify almost all pixels of a chosen class while leaving network prediction nearly unchanged outside this class. | [
"Computer vision",
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=S1SED1MYe | https://openreview.net/forum?id=S1SED1MYe | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"Hk2DDpace",
"HylBfOtajg",
"SJakYXGql",
"ryDLHjA9l"
],
"note_type": [
"official_review",
"comment",
"official_review",
"comment"
],
"note_created": [
1488996195763,
1490028557407,
1488234724958,
1489053007502
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper26/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper26/AnonReviewer1"
],
[
"~Volker_Fischer1"
]
],
"structured_content_str": [
"{\"title\": \"Interesting addition to adversarial examples\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper presents and interesting twist to adversarial examples: given a trained neural segmentation model, with slight changes to the input image that given a segmentation model, it is possible to change the segmentation prediction of certain classes or even individual objects without significantly changing the predictions to anything else. This is useful direction to explore and understand better.\", \"weaknesses\": \"the methods utilized are relatively similar to earlier works on adversarial examples, so the novelty of the approach is not very high. However The main weakness of the work is that the demonstration comes without any meaningful quantitative comparisons.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Review For Adversarial Examples For Semantic Image Segmentation\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors demonstrate how to build adversarial examples for pixel segmentations.\", \"pros\": [\"Authors convincingly show how they can minimally distort an image so that it is perceptually identical yet the segmentation for person is entirely obliterated.\", \"Authors show a single example that clearly demonstrates the effect where the person is entirely removed from the prediction. Additionally, the authors show some nice summary statistics where the person can be selectively removed quite reliably from the segmentation.\", \"First paper to approach adversarial example generation for pixel segmentation.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Re: Interesting addition to adversarial examples\", \"comment\": \"Thank you very much for your review and feedback.\", \"we_would_like_to_address_the_point_of_meaningful_quantitative_comparisons\": [\"One main reason we used no other adversarial attacks in comparison was that we were restricted to targeted attacks (in order to achieve a specific target segmentation) and related work is typically non-targeted.\", \"We validated our findings statistically on a subset of the cityscapes validation dataset (images containing enough pixels of person class, which were over half the validation images) also comparing the influence of different noise-sizes \\\\epsilon (please also compare Fig. 2).\", \"We plan to give a more sophisticated analysis / comparison in future work which we were not able to include due to the 3 page restriction for workshop submissions.\", \"We want to thank the reviewer again for their comments and hope this response clarifies the main point of concern.\"]}"
]
} |
|
S15PPJStl | Loss is its own Reward: Self-Supervision for Reinforcement Learning | [
"Evan Shelhamer",
"Parsa Mahmoudieh",
"Max Argus",
"Trevor Darrell"
] | Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a noisy and impoverished signal for end-to-end optimization. To augment reward, we consider self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. Self-supervised pre-training improves the data efficiency and returns of end-to-end reinforcement learning on Atari. | [
"Deep learning",
"Unsupervised Learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=S15PPJStl | https://openreview.net/forum?id=S15PPJStl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"H1emyzvsx",
"BJZHFNtjl",
"By1GA0Eig",
"SJD_neMix",
"HkxPIOtasl"
],
"note_type": [
"official_review",
"comment",
"comment",
"official_review",
"comment"
],
"note_created": [
1489604375656,
1489746233257,
1489460743390,
1489271918565,
1490028623388
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper137/AnonReviewer3"
],
[
"~Evan_G_Shelhamer1"
],
[
"~Evan_G_Shelhamer1"
],
[
"ICLR.cc/2017/workshop/paper137/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes to use self-supervision as surrogate rewards for deep reinforcement learning. Such self-supervision includes reward by itself, dynamics (predicting the next state) and inverse dynamics (predicting the action given start/end state) and reconstruction (with VAE, etc).\\n\\nThe paper shows some promising results. In particular, the sample efficiency is higher than the baseline (although it is not clear what the baseline is from the paper...). It seems magic to me why some auxiliary tasks unrelated to the true reward can lead to high score. Is that because we could learn the low-level feature better with auxiliary tasks? Obviously there is much more to experiment and explain.\\n\\nFor novelty, there are already many previous works on auxiliary tasks. So the idea is not new (except for reconstruction of the state via VAE). How the auxiliary tasks are done is also quite standard (one network with multiple heads to predict them).\", \"pros\": \"Some promising results. Paper is easy to read.\", \"cons\": \"Ideas are not novel. Key performance gain in the paper are not explained well. More experiments are needed.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Authors' Response\", \"comment\": \"Thank you for the review.\\n\\n> It seems magic to me why some auxiliary tasks unrelated to the true reward can lead to high score.\\n> there are already many previous works on auxiliary tasks\\n\\nWe agree that self-supervision for RL deserves further attention and explanation. This is why we carry out a fair comparison of various auxiliary losses including novel, discriminative forms. While there are other works on auxiliary tasks for RL, the union of the approaches and results is more informative as each sheds light on different aspects.\\n\\n> self-supervision as surrogate rewards\\n\\nOur self-supervisory signals are formulated as losses, unlike the surrogate/pseudo rewards or measurements of the ICLR17 papers by Mnih et al. and Dosovitskiy et al. On every transition our auxiliary tasks yield instantaneous gradients for representation learning (with lower variance and faster convergence than longer horizon optimization). The losses we define are more universal than the spatial navigation losses of Mirowski et al. in ICLR17.\\n\\n> it is not clear what the baseline is\\n\\nThe baseline is vanilla A3C (Mnih et al. 2016), as our self-supervised policies are instantiated as variations of A3C that take the actor-critic as an encoder to which each task attaches its own decoder. A3C achieves reasonable scores on these environments (which we inherit from the original paper).\\n\\n> dynamics (predicting the next state)\\n\\nOur dynamics task is not next state prediction, but verification of true/corrupted timesteps (it's a classification, not reconstruction). A trend throughout our results is that discriminative losses help more than reconstruction losses.\\n\\n> we could learn the low-level feature better with auxiliary tasks?\\n\\nWe have already done experiments to investigate the cause(s) of improvement. That comparable improvement can be observed when transferring only the conv. parameters and reinitializing the fully-connected layer shared by actor-critic and auxiliary heads suggests this is the case. Furthermore, decoding from the (fixed) RL representation to our auxiliary tasks shows a certain degree of commonality. Lastly, the lower improvement for data-dependent init. relative to self-supervision suggests that the effect is not purely from better conditioning of the weights.\\n\\nTo reproduce and further explore our results we will make our self-supervision and policy optimization code public.\\n\\nFor more detail please refer to our arxiv (which more thoroughly explains the tasks and reports policy decoding results): https://arxiv.org/abs/1612.07307\"}",
"{\"title\": \"Authors' Response\", \"comment\": \"Thank you for the review.\\n\\nWith respect to originality, our work experimentally demonstrates a representation bottleneck, introduces new auxiliary losses, and shows that self-supervised pre-training alone can improve data efficiency.\\n\\nTo further situate our work in the context of the other ICLR submissions mentioned, we consider the following contrasts:\\n\\n- Our self-supervised tasks do not require additional privileged information. Dosovitskiy et al. encode game quantities like health and ammunition while Mirowski et al. require depth input and perfect odometry. Our losses augment standard policy optimization without needing any further supervisory signals.\\n- We focus on discriminative formulations of auxiliary losses. The dynamics verification task (recognizing true successors) and inverse dynamics task (recognizing the effects of actions) are novel auxiliary tasks for representation learning. These are distinct from the the tasks in related works and reconstructive/generative approaches.\\n- Our auxiliary tasks are framed as direct losses and not control. Mnih et al. define pseudo-rewards that are then optimized by off-policy Q-learning while Dosovitskiy et al. define measurements and act according to their temporal differences across multiple long horizons. Our work shows that auxiliary losses without control suffice to improve optimization.\\n\\nFor more detail please refer to our arxiv (first uploaded in December): https://arxiv.org/abs/1612.07307\"}",
"{\"title\": \"Minor, but timely, contribution\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The submission uses a number of different self-supervised losses to improve data efficiency on Atari games. The authors experiment with pre-training the net using reward prediction, state or action prediction, and reconstruction, and obtain a moderate speed up on the subsequent RL task.\\n\\nAlthough the quality and clarity of the submission is high, it is hard to see any originality here, given the recent papers from Mnih et al, Mirowski et al, and Dosovitskiy et al. The authors state that their paper is 'concurrent' with these other papers, but this is misleading - the other three papers were published well before the workshop deadline and at least the first two were presented at NIPS workshops in December.\", \"pros\": \"the paper is well written and motivated, with clear results. This is an active area of research in deep RL, so even incremental results are of interest.\", \"cons\": \"the only really new result is the use of the VAE for pretraining, which was a negative result (it hurt the RL performance). In my view, workshop papers should really have more novelty.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
HJ4-rAVtl | Understanding intermediate layers using linear classifier probes | [
"Guillaume Alain",
"Yoshua Bengio"
] | Neural network models have a reputation for being black boxes.
We propose a new method to better understand the roles and dynamics
of the intermediate layers.
Our method uses linear classifiers, referred to as "probes",
where a probe can only use the hidden units of a given intermediate layer
as discriminating features.
Moreover, these probes cannot affect the training phase of a model,
and they are generally added after training.
We demonstrate how this can be used to develop a better intuition
about models and to diagnose potential problems. | [
"Deep learning",
"Supervised Learning",
"Theory"
] | https://openreview.net/pdf?id=HJ4-rAVtl | https://openreview.net/forum?id=HJ4-rAVtl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"BJvipYxse",
"SkIxjLljg",
"rk9SuYTsg"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489178015190,
1489165038242,
1490028610068
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper117/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper117/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Interesting work, but the potential impact is not clear\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposes using linear classifiers (probes) as a measure of semantic information in learned representations. If a linear classifier operating on a combined representation A+B operates just as well as on A alone then representation B is in some sense useless; B carries none of this heuristic type of semantic information. This is applied to Inception v3, where linear probes increase in accuracy with depth, except when applied to Inception v3's auxiliary head. The method is also applied to a pathologically deep MNIST network with a skip connection in order to show that the first part of the network does not support accurate classifier probes.\\n\\nIt's not clear how to connect the sense in which representation B is useless to specific changes that might improve the capability of non-pathological networks or our understanding of when a network does or does not work. However, this work adds to the set of intuitions we have about network internals, so it is likely to contribute to such advancements and should be discussed at the ICLR workshop.\\n\\n\\nI love the lion example.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This work presents the idea of linear probes, i.e., linear classifiers that measure the class predictability of hidden layers of images. One of their results is demonstrating the usefulness of the probes in monitoring the behaviour of skip-connections.\\n\\nAs a note, you never really explain in the paper what the linear classifier is classifying, so I had to check the long submission. \\n\\nOverall, I'm in favor of the idea of probes. Personally, I'm not so shocked that probes are linear, as long as one is careful regarding the conclusions drawn. On this point, I have some reservations in terms of what do the probes really tell us other than \\\"how well I can linearly predict the target class from the hidden layer\\\", which is I think rather limiting in terms of how information like this can be used to improve networks. For this reason, it might be interesting to maybe come up with more than one classifier (e.g., size of object, color of object, location of object, occluded object, number of objects in the image etc etc) and use these multiple probes as a way to quantify what each layer is better at capturing. This could complement work in understanding neural networks in a quantitative way, which to the best of my knowledge relies mostly on visualization (e.g., Zeiler and Fergus, 2014).\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
ryZkZmNte | Adapting distance | [
"woojin lee"
] | domain adaptation | [
"distance",
"distance domain adaptation"
] | https://openreview.net/pdf?id=ryZkZmNte | https://openreview.net/forum?id=ryZkZmNte | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"B1WNhmNoe",
"ry9mOK6se",
"SkJuMDQjx"
],
"note_type": [
"official_review",
"comment",
"official_review"
],
"note_created": [
1489415209124,
1490028577599,
1489363559041
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper64/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper64/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"This paper is on domain adaptation for sentiment analysis. In principle, domain adaptation is an important area, but I don't believe that this paper makes an important and well explained contribution.\", \"rating\": \"3: Clear rejection\", \"review\": \"I had to read the paper several times to understand the usage of the terms \\\"source\\\" and \\\"target\\\" in section 2. Normally, those terms are used for the input and output of the classifier, but in this paper, it seems that they refer to the source and target DOMAIN.\\n\\nWhat are \\\"N\\\" and \\\"M\\\" in Eqn (2)-(4), the size of the source and target domain training corpora?\\nIf this is the case, the loss (2) is computationally expensive!\\n\\nIn Eqn (3), how are the inputs represented? binary unigram vector of bag-of-words (+SVD)?\\n\\nOverall, the results are not well enough described:\\n - many details on the experimental settings are missing\\n + what is the dimension of SVD ?\\n + how many train and test examples are used ?\\n + why do you balance your training data ?\\n - the presentation of the results is not clear\\n + what are the labels \\\"B->D\\\", \\\"D->B\\\" etc in Fig 1 ?\\n + what is displayed - accuracy ?\\n - no comparative results are provided\\n (I guess that the \\\"Amazon review data\\\" is a public corpus for which results are available)\\n it's not even clear which corpus was used !\\n\\nThe page limit is 3 pages excluding references. The authors could have used all the available space to better explain their work.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"weak paper\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper proposes an approach for unsupervised domain adaptation (only unlabeled data in target domain). It essentially works by assigning a pseudo-label to each target example based on the labels of its neighbors in the source domain. Eq 2 can be rearranged to make this come out more clearly -- the label assigned to a target example x is 1/N\\\\sum_{i=1}^N y_i * k(x, x_i), where (x_i, y_i) are labeled examples in source domain, and k() is taken to be rbf kernel in the paper. A common classifier is then learned on both source examples and target examples (after this pseudo-labeling).\\n\\nThe paper doesn't provide an adequate overview of the prior work which makes it hard to judge the contribution and novelty. The proposed approach is also not convincing to me -- it assumes access to a universal representation which is good for both source and target. It combines both parametric (the classifier f is parametric) and nonparametric (1/N\\\\sum_{i=1}^N y_i * k(x, x_i)) methods to estimate 'f' which operates on the given representation. The method seems like a restricted version of some earlier works (e.g. Geodesic Flow Kernel for Unsupervised Domain Adaptation, 2012) which also adapt the representation before building a nearest-neighbor based predictor. It also doesn't discuss density-ratio based methods for DA which are commonly used for instance based DA. The method is also not scalable since it involves computing the kernel on all pairs of source-target examples. Again, it doesn't provide any discussion on it.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
H1XLbXEtg | Online Multi-Task Learning Using Active Sampling | [
"Sahil Sharma",
"Balaraman Ravindran"
] | One of the long-standing challenges in Artificial Intelligence for goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential tasks has been in the form of distillation based learning wherein a single student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large task-specific (expert) networks which require extensive training.
We propose a simple yet efficient multi-task learning framework which solves multiple goal-directed tasks in an online or active learning setup without the need for expert supervision.
| [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=H1XLbXEtg | https://openreview.net/forum?id=H1XLbXEtg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SkAZMGzjg",
"BJai2Ugse",
"SkxZAuGse",
"SJx8Rufox",
"HyVKjdzsl",
"SJcm_FTie",
"ByresFPje"
],
"note_type": [
"official_review",
"official_review",
"comment",
"comment",
"comment",
"comment",
"comment"
],
"note_created": [
1489277446299,
1489165476970,
1489305080062,
1489305160143,
1489304444183,
1490028578400,
1489636077518
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper65/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper65/AnonReviewer2"
],
[
"~Sahil_Sharma1"
],
[
"~Sahil_Sharma1"
],
[
"~Sahil_Sharma1"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Sahil_Sharma1"
]
],
"structured_content_str": [
"{\"title\": \"Simple but effective - final review\", \"rating\": \"7: Good paper, accept\", \"review\": \"I have updated my rating to a 7 after reading the authors' response.\\n\\nThis paper uses active sampling to select tasks to train on in a multi-task, deep RL setting. Their multi-task baseline, in comparison, chooses the next task using uniform sampling. The active sampling is done by comparing the current score on each task with its 'aspirational' high score - a score that could come from human performance or from single-task training or from published results. Tasks that are underperforming with respect to the aspirational high score are more likely to be sampled. \\n\\nCompared to the uniform sampling, active sampling yields much higher scores. In fact, uniform sampling causes some tasks to not get off the ground at all, whereas all are able to learn with active sampling. This demonstrates the challenge of multi-task deep RL, where tasks may be adversarial and thus prevent others from learning.\", \"pros\": \"Active sampling is a well-known approach, but it has not been tried for multi-task deep RL, which is a much harder problem than, e.g., supervised learning. This paper is simple, but the approach is quite effective. It will be of interest to those in the community that are studying continual or transfer learning in RL.\", \"cons\": \"The results are limited. The authors only used 6 Atari games, and this type of result is highly variable and might not hold for a different set. However, for a workshop paper, it is understandable. More sophisticated methods, such as a multi-armed bandit formulation, might have been tried. My one complaint is that the authors did not show how often the different games were chosen, nor show any visualisation of the game selection over time. This would have been quite interesting.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Raising the score to 6 after considering the authors' response\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes an online method for training reinforcement learning agents to perform several tasks at once. The method samples a task to train on after each episode. The main idea is to assume that a reference score is available for each task and to use the agent's relative performance with respect to the reference score to determine the probability of sampling the task. The result is that tasks with lower relative performance are sampled more frequently.\\n\\nThe experiments show that the proposed strategy outperforms uniform sampling of tasks during online training. While the idea is simple it is very ad hoc and has a number of unexplored parameters (temperature, number of recent scores, number of training steps with uniform sampling). There are more principled and well understood methods that could be applied to this problem. For example, why not apply a UCB-style non-stationary bandit algorithm that attempts to balance exploration and exploitation? Such a method would keep periodically going back to tasks with high relative performance, which could help mitigate catastrophic forgetting. The proposed method seems to ignore such potential interference between tasks.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Thank you for the informative review\", \"comment\": \"[Part 1/2]\\nThank you for the very informative review. We agree with your general point that Multi-Arm bandits is certainly an algorithm that we should try out in this multi-task setting of active sampling. In fact we are running experiments on this formulation right now (with Discounted-UCB-tuned) and will try to include the experiments in this workshop paper, if the paper gets accepted. Since this comment is kind of long, I am breaking it into two parts.\", \"we_would_like_to_clarify_some_of_the_points_raised_by_you\": \"---- While the idea is simple it is very ad hoc\\n\\nWe would like to point out that active sampling is a very active area of research in machine learning. What we have presented is the \\u201csimplest\\u201d form of active sampling. While multi arm bandit formulations of our idea are certainly more sophisticated, they also fall in the same domain of active sampling. The reason we were so excited by this simple idea is because the improvements we were able to get over the baselines are staggering. With just a simple change in sampling frequency, performance improves from close to 30% to close to 80%. \\n\\n---- and has a number of unexplored parameters (temperature, number of recent scores, number of training steps with uniform sampling).\\n\\nWe would like to make three important points here. They are related to the basic idea that some of the hyper-parameters in a model are perhaps not very important whereas others are. \\n1- We did hyper-parameter tune for the temperature hyper-parameter. Apologies for not including this information in the paper because of the 3 page limit. We have described the complete experimental setup in detail in the arxiv version of this paper : https://arxiv.org/abs/1702.06053 \\n2- We believe that the other hyper-parameters the reviewer has mentioned are not important for the performance. One could make the same argument against almost any state of the art method. Many deep RL algorithms have frame stacking (to convert a POMDP problem into a more markovian version), replay memory (for breaking correlations in updates) and convolutional layers (for doing well on visual control problems). There are usually at least several dozen hyper-parameter choices one must make including size of convolutional filters, stride sizes, number of filters, number of layers for the convolutions, the number of frames to be stacked, or the size of replay memory. I think the reason most DRL algorithms rightly choose to make arbitrary choices in such situations is that it isn't necessary to fine tune every hyper-parameter in the model, only the important ones are tuned. It is for this reason that we decided not to tune for number of training steps with uniform sampling and number of recent scores for average calculation. \\n3- While UCB style non stationary bandit algorithms are certainly more sophisticated than our simple active sampling method, they do have more tunable important hyper-parameters. This makes the task of hyper-parameter tuning much harder. What we have shown in our work is that a much simpler method with fewer important tunable hyper-parameters (i) performs exceedingly well in the online multi task setting. The q_am metric performance on MT1 is at 0.80 and is comparable to actor mimic performance. (ii) We are able to match offline expert-supervision based performance with only half the data which is required for training the experts. (iii) the multi tasking algorithm with hyper-parameters tuned on one multi-tasking instance works well on other multi tasking instances as well. (iv) We have also run experiments on 12-task instances and we observe similar performance levels (To MT2 and MT3) on the 12-task instance as well. The reason we haven\\u2019t included it in manuscript yet is that we are still running the baseline uniform sampling method on it.\"}",
"{\"title\": \"Part 2 of the above comment\", \"comment\": \"[Part 2/2]\\n\\n---- why not apply a UCB-style non-stationary bandit algorithm that attempts to balance exploration and exploitation?\\n\\nWe agree that it makes a lot of sense to experiment with UCB-style non stationary algorithms in this setting. We are currently running experiments for discounted-UCB-tuned but have not achieved very promising results. This is the reason that we haven't included it in the manuscript yet. We will definitely include UCB baselines in the final version of the paper if it gets accepted. In fact we are also in parallel working on an MDP formulation of the meta-learner which decides the next task to train on. Experiments are in a very preliminary stage, however.\\n\\n---- Such a method would keep periodically going back to tasks with high relative performance, which could help mitigate catastrophic forgetting. The proposed method seems to ignore such potential interference between tasks.\\n\\nThis is not true. In fact our method is \\u201cdesigned\\u201d to beat catastrophic forgetting/destructive interference. If the learning of task 1 causes task 2 performance to fall down, we would immediately start actively sampling task 2 more and get better at it. \\n\\nIn conclusion I\\u2019d like to say that while other approaches to the problem of multi-task learning using active sampling, like UCB, are valid, they do not invalidate simpler approaches, especially since these simpler approaches work well on a wide variety of different multi-tasking instances. The main contributions of this work are:\\na) We have shown that a simple active sampling approach in and of itself is a valid technique for multi-task learning.\\nb) We show results on 3 sets of Multi Tasking instances. Previous works in the area show results on at most one instance.\\nc) Hyper-Parameter tuning was done on only one instance (MT1). We demonstrate how the method generalizes to two other MT instances and thus show robustness.\\nd) We propose sensible evaluation metrics for the multi tasking problem which help identify the good multi tasking algorithms from those that are great on a narrow set of tasks but do not perform well on others.\\ne) Previous works in the area do not perform any analyses on why their methods work well vis-a-vis baselines. We also perform analyses on why we think our method performs much better than baseline methods. We apologize for not including these analyses in the workshop paper; it was hard given the 3 page limit. The arxiv version of the paper (https://arxiv.org/pdf/1702.06053.pdf) contains all of these analyses. We will definitely present these analyses and all the experimental results at ICLR, if our paper were to get accepted.\\nf) While previous approaches perform experiments only on a single multi-task instance with up to 8 games, we perform experiments on 12-game multi-task instances as well. We have obtained very promising results. Same hyper-parameters found by tuning on MT1 were used for these experiments. No additional hyper-parameter tuning was done.\"}",
"{\"title\": \"Thanks for the review\", \"comment\": \"Thanks for the largely positive review! We would like to point out some clarifying points regarding the cons:\\n\\n---- Results are limited. The authors only used 6 Atari games.\\nWe would like to make two important points here.\\n 1. This is not the case. We have demonstrated results on 3 different sets of 6 games. In fact our paper is the first to report performance of a multi-tasking algorithm with hyper-parameters tuned on one multi-tasking instance (MT1 in our paper) on other multi-tasking instances (MT2 and MT3). The details are indeed scanty in this manuscript and we apologize for that. The 3 page limit really limits the amount of material which can be presented. Please have a look at the expanded version of the paper if you\\u2019d like to : https://arxiv.org/abs/1702.06053 We have presented results on 3 different multi-tasking instances as well as two different sensible architectures. We have also reported on what happens when we double the \\u201cbaseline\\u201d scores used by our method. \\n 2. We have also run experiments on 12-task instances and we observe similar performance levels (Similar to MT2 and MT3) on 12-task instance as well. The reason we haven\\u2019t included it in manuscript yet is that we are still running the baseline uniform sampling method on this instance. Note that the highest number of dis-similar tasks on which multi-tasking instance results have been published (without increasing the network size significantly) is 8 by actor mimic. This is a 50% improvement on the number of games and also consumes 50% lesser data and compute (since we train for only half the time required to train all the experts).\\n\\n---- and this type of result is highly variable and might not hold for a different set\\n\\nWe agree that this is indeed a problem with Multi Tasking learning literature. To address this, we tuned the hyper-parameters on MT1 and then also demonstrated performance on MT2 and MT3. \\n\\n---- More sophisticated methods, such as a multi-armed bandit formulation, might have been tried\\n\\nThis is something Reviewer2 pointed out as well. We agree that this is the logical second step. We are currently trying out Discounted-UCB-tuned on the multi-tasking instance MT1. Our preliminary experiments indicate that the bandit formulation isn't able to perform as well as the active sampling formulation. But we do agree that this is the logical next step and are actively pursuing it. In fact we are also in parallel pursuing the meta-learning problem of learning an actor critic as the meta-agent which dictates the next game to train on. \\n\\n---- did not show how often the different games were chosen, nor show any visualisation of the game selection over time\\n\\nApologies for not including it in this manuscript. The 3 page limit really restricts what we could and could not include in this paper. We chose to include the full learning algorithm so that the active sampling procedure is clear. We have already included all of these and many more interesting analyses in the arxiv version of the paper : https://arxiv.org/abs/1702.06053 which is slightly longer at 13 pages. Among the analyses included are those of the hidden neuron activations. We have shown that our method learns more task-agnostic neurons and this is perhaps the reason that it does so well on the multi-tasking problem.\", \"in_conclusion_the_main_contributions_of_this_work_are\": \"a) We have shown that a simple active sampling approach in and of itself is a valid technique for multi-task learning.\\nb) We show results on 3 sets of Multi Tasking instances. Previous works in the area show results on at most one instance.\\nc) Hyper-Parameter tuning was done on only one instance (MT1). We demonstrate how the method generalizes to two other MT instances and thus show robustness.\\nd) We propose sensible evaluation metrics for the multi tasking problem which help in separating the good multi tasking algorithms from those that are great on a narrow set of tasks but do not perform well on others.\\ne) Previous works in the area do not perform any analyses on why their methods work well vis-a-vis baselines. We also perform analyses on why we think our method performs much better than baseline methods. We apologize for not including these analyses in the workshop paper; it was hard given the 3 page limit. The arxiv version of the paper (https://arxiv.org/pdf/1702.06053.pdf) contains all of these analyses. We will definitely present these analyses and all the experimental results at ICLR, if our paper gets accepted. \\nf) While previous approaches perform experiments only on a single multi-task instance with up to 8 games, we perform experiments on 12-game multi-task instances as well. We have obtained very promising results. Same hyper-parameters found by tuning on MT1 were used for these experiments.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"UCB experiments\", \"comment\": \"Thanks to AnonReviewer 2 again, for pointing out the missing comparisons with UCB-style algorithms. We are currently running the discounted-UCB-tuned experiments on multi tasking instance MT1. One of the UCB agents has been trained for 126 million steps (close to half the training time).\\n\\nWhile the q_am performance of this UCB agent matches that of our A3CSH agent (both are near 0.4), the UCB-based approach almost completely ignores 2 out of the 6 games. As a result of this the q_hm performance of UCB agent is at 0.124.\\nIn comparison, after the same number of training steps, performance (q_hm) of our A3CSH agent was at 0.349. While the results are definitely preliminary, this close to triple initial performance exhibited by A3CSH supports our general belief that A3CSH agents are better suited for multi-tasking than UCB-based approaches.\"}"
]
} |
|
S1AtgaPug | Adjusting for Dropout Variance in Batch Normalization and Weight Initialization | [
"Dan Hendrycks",
"Kevin Gimpel"
] | We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of the input differs. We thus propose a new weight initialization by correcting for the influence of dropout rates and an arbitrary nonlinearity's influence on variance through simple corrective scalars. Since Batch Normalization trained with dropout estimates the variance of a layer's incoming distribution with some inputs dropped, the variance also differs between train and test. After training a network with Batch Normalization and dropout, we simply update Batch Normalization's variance moving averages with dropout off and obtain state of the art on CIFAR-10 and CIFAR-100 without data augmentation. | [
"Deep learning"
] | https://openreview.net/pdf?id=S1AtgaPug | https://openreview.net/forum?id=S1AtgaPug | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"HJgfOKpjl",
"SJGSdGPil",
"B18AkFOix",
"r1oaoKese",
"SyYeOfvsx",
"BkPoOGvog",
"HkuEuBQig"
],
"note_type": [
"comment",
"comment",
"official_comment",
"official_review",
"comment",
"comment",
"official_review"
],
"note_created": [
1490028551760,
1489606713571,
1489698766503,
1489177538775,
1489606640759,
1489606815169,
1489356848536
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"~Dan_Hendrycks1"
],
[
"ICLR.cc/2017/workshop/paper8/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper8/AnonReviewer1"
],
[
"~Dan_Hendrycks1"
],
[
"~Dan_Hendrycks1"
],
[
"ICLR.cc/2017/workshop/paper8/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your analysis of our paper.\\n\\n> The proposed method is fairly incremental.\\n\\nLeCun initialization is where one multiplies the weights matrix by 1/sqrt(n) to stabilize variance.\\nAs you know, He initialization noted that for ReLUs we should instead multiply by 1/sqrt(0.5*n).\\nIn our view, He initialization was not fairly incremental, given its widespread use.\\nHe et al. showed that a simple corrective scalar can mean the difference between convergence and divergence.\\nWe generalize their idea to arbitrary nonlinearities like the ELU and show that dropout needs an adjustment too,\\nall while preserving the simplicity of simple initializations.\\nMoreover, we beat state of the art by a 1.25% accuracy difference with no architecture changes\\nwith a simple and highly general technique.\\n\\n> Formatting\\n\\nI agree entirely. This was an oversight and a byproduct of compressing a conference submission into three pages.\\nThe symbols which were previously undefined in this shortened version are now defined in the main text. Thank you!\"}",
"{\"title\": \"Updated score\", \"comment\": \"The updated paper fixes some of the concerns raised and I have updated my score.\"}",
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The authors propose a correction for the statistics of BatchNormalization under dropout for the test phase. The proposed method is fairly incremental. While the results indicate some improvement, they do not seem too encouraging.\\n\\nAlthough this is a workshop paper, I do not think that this is an excuse for leaving out crucial details in the main text. The main text should be self-contained\\u2013but the main method is impossible to understand without referencing to the appendix: it is presented as a formula containing various undefined quantities.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your analysis of our paper.\\n\\n> Section 2.1 uses notation without introducing it before.\\nI agree entirely. This was an oversight and a byproduct of compressing a conference submission into three pages.\\nThe symbols which were previously undefined in this shortened version are now defined in the main text. Thank you!\\n\\n> The idea seems very incremental\\n\\nLeCun initialization is where one multiplies the weights matrix by 1/sqrt(n) to stabilize variance.\\nAs you know, He initialization noted that for ReLUs we should instead multiply by 1/sqrt(0.5*n).\\nIn our view, He initialization was not fairly incremental, given its widespread use.\\nHe et al. showed that a simple corrective scalar can mean the difference between convergence and divergence.\\nWe generalize their idea to arbitrary nonlinearities like the ELU and show that dropout needs an adjustment too,\\nall while preserving the simplicity of simple initializations.\\nMoreover, we beat state of the art by a 1.25% accuracy difference with no architecture changes\\nwith a simple and highly general technique.\\n\\n> It is a little surprising if this is not already being used to initialize weight scales\\n\\nWe have not found anyone proposing how to generally adjust for an arbitrary nonlinearity or\\nrecommending that they adjust for dropout variance in weight initialization of batch normalization estimates.\\nIn the conference review cycle, no one voiced that this was already used in practice, and the same idea is true for the following comment.\\n\\n> Ioffe and Szegedy does recommend re-estimating the parameters.\\n\\nForemost, we found that re-estimating the mean and variance was overall somewhat harmful in early experiments, so we only re-estimated the variance.\\nSecondly, their recommendation does not specify to turn off dropout.\\nThird, although batch normalization is prominent, subsequent re-estimation is not prominent because it is thought 
either pointless, cumbersome, harmful or all three.\\nConsequently, Ioffe and Szegedy may recommend to re-estimate parameters, but few think to do so (e.g., DenseNet's creators).\\nIf someone thinks to re-estimate parameters, they must to think to turn off dropout too, and to make re-estimation beneficial, they must also discover not to re-estimate the mean parameters. \\n\\n> It is also not mentioned in which layers is dropout being done in the models used for the experiments.\\n\\nWe note in the body of the main paper that this is a VGGNet and we also note that the interested reader can find the dropout rates,\\nrandom hyperparameter search procedure, and more in the appendix.\\n\\n> It is not clear how the goals are met by adding the variances.\\n\\nGlorot et al. adjust for variances by averaging variances instead of adding variances, which is less mathematically natural to us. We found that adding worked better, and if we add the variances, Xavier initialization is a special case of our initialization if we used a Uniform distribution and used a ReLU nonlinearity while not using dropout. Similarly, the He initialization is a special case of our initialization if we use the ReLU and do not train with dropout and omit backpropagation's variance.\"}",
"{\"title\": \"Paper Update\", \"comment\": \"We have updated the paper to define a few symbols in the main text, thanks to the reviewers' comments. I am sorry for the oversight and hope that the paper is reconsidered in view of this change.\\n\\nThis paper is a compressed conference submission to ICLR, which under a heavier review process received 7,5,6 initial reviews, and the area chair described it as a \\\"borderline paper\\\" for conference. For this workshop track, our reviewers have starkly lower estimates, and we hope that the fixes to previously confusing notation and clarifications make this paper acceptable in their eyes.\"}",
"{\"title\": \"A useful idea to emphasize, but not a significant contribution\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Paper summary -\\nSeveral weight initialization schemes for deep neural networks aim to maintain\\nthe variance of each layer's output to be one, in order to prevent an\\nexponential blow-up or decay with increasing depth. This paper describes a way\\nto adjust the scale of the weight initialization to take into account the fact\\nthat dropout contributes some extra variance. The proposed solution is to\\nmultiply the variance of the initialization distribution by the keep-probability\\np. The paper also proposes that Batch Normalization parameters should be\\nre-estimated with dropout turned off, as opposed to using the moving average\\nones inferred during training (where dropout is turned on).\", \"pros\": \"- The paper shows some improvements over a strong DenseNet baseline using the proposed\\n adjustment.\\n- The paper highlights a simple adjustment that becomes important when using\\n dropout in all (or a large number of) layers of a very deep network.\", \"cons\": \"- The paper is poorly written. In particular, Section 2.1 uses notation without\\n introducing it before. This makes it hard to follow the arguments being made\\nin this paper. Appendix A.1, which has the main derivations, is not referenced\\nanywhere in the text of the paper. Also there are two Appendix As.\\n\\n- The idea seems very incremental. It is a little surprising if this is not\\n already being used to initialize weight scales. It might be that because\\ndropout is typically only used for the last layer in most applications, this\\nadjustment has not been emphasized before. However, it is easy to see that this will\\nbecome more important when dropout is used throughout a very deep network.\\n\\n- Regarding re-estimating the Batch norm parameters, the original paper from\\n Ioffe and Szegedy does recommend re-estimating the parameters. 
Turning\\ndropout off for this is the natural thing to do since the idea is to prepare the\\nnetwork to be run at test time. So, it is not immediately clear what additional\\ninsight is being presented in this paper.\\n\\n- Several details are missing. For example, the other initializations being\\n compared to (Xavier, He) should be clearly described. It is also not mentioned\\nin which layers is dropout being done in the models used for the experiments.\\n\\n- \\\"To meet these different goals, we can initialize our weights by adding these\\n variances.\\\" - It is not clear how the goals are met by adding the variances.\\nPlease explain.\\n\\nOverall, while the paper emphasizes an important adjustment which should be kept in\\nmind when using dropout across the depth of a deep network, it is a somewhat\\nobvious thing to do and not significantly interesting as a work of research.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HkO-PCmYl | Shake-Shake regularization of 3-branch residual networks | [
"Xavier Gastaldi"
] | The method introduced in this paper aims at helping computer vision practitioners faced with an overfit problem. The idea is to replace, in a 3-branch ResNet, the standard summation of residual branches by a stochastic affine combination. The largest tested model improves on the best single shot published result on CIFAR-10 by reaching 2.86% test error. Code is available at https://github.com/xgastaldi/shake-shake | [
"Computer vision",
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=HkO-PCmYl | https://openreview.net/forum?id=HkO-PCmYl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"rkAlp4Ctl",
"S1k_e3xsx",
"BJqlLcZ9g",
"BJUpXfu9l",
"SkDXsNBjl",
"BJYsCOb9l",
"S176mJG9e",
"Bk3ZctHol",
"H1UCZH_ol",
"SJHSoFFoe",
"Byrdq4Bjl",
"H1Cwd3Ssl",
"SyfBn3Hqg",
"SkGgS3gsx",
"ByLYmqf9e",
"SyXEvdQql",
"SygHXOtaoe",
"BkKoO3_qg",
"HJQygWOjl",
"S1qjrYIse",
"rJaR2S4jg",
"HJQvs4Ssx",
"B1WUweOje",
"BJsmmYaKg"
],
"note_type": [
"comment",
"official_review",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"official_comment",
"comment",
"official_review",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment"
],
"note_created": [
1487977717857,
1489186918980,
1488197106419,
1488622525901,
1489484574823,
1488191137000,
1488217019176,
1489504772010,
1489682893874,
1489767228558,
1489484396638,
1489516645595,
1488469049791,
1489188074065,
1488262014137,
1488320299117,
1490028573426,
1488664737365,
1489666011401,
1489569186465,
1489423573540,
1489484635423,
1489663817161,
1487930147254
],
"note_signatures": [
[
"~Xavier_Gastaldi1"
],
[
"ICLR.cc/2017/workshop/paper55/AnonReviewer2"
],
[
"~Xavier_Gastaldi1"
],
[
"~Raanan_Hadar1"
],
[
"~Xavier_Gastaldi1"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"~Xavier_Gastaldi1"
],
[
"~Xavier_Gastaldi1"
],
[
"~Xavier_Gastaldi1"
],
[
"ICLR.cc/2017/workshop/paper55/AnonReviewer1"
],
[
"~Xavier_Gastaldi1"
],
[
"ICLR.cc/2017/workshop/paper55/AnonReviewer1"
],
[
"~Xavier_Gastaldi1"
],
[
"~Xavier_Gastaldi1"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Xavier_Gastaldi1"
],
[
"~Xavier_Gastaldi1"
],
[
"~Xavier_Gastaldi1"
],
[
"(anonymous)"
],
[
"~Xavier_Gastaldi1"
],
[
"~Xavier_Gastaldi1"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Shake-Shake vs Shakeout\", \"comment\": \"It is probably a fair question given that paper\\u2019s name :).\", \"similarities\": \"Both methods use the idea of replacing 0s and 1s by scaling coefficients.\", \"differences\": \"1. Starting points: The starting point for Shakeout is Dropout while the starting point for Shake-Shake is a mix of FractalNet drop-path and stochastic depth (if you imagine applying drop-path to a 3 branch ResNet where the skip connection is never dropped). Dropping a path is equivalent to setting alpha_i to 0 or 1 in the Shake-Shake paper.\\n2. Multiplications: Both Dropout and Shakeout perform an element-wise multiplication between 2 tensors. In the case of Dropout, the usual steps are to: 1. Create a tensor (let\\u2019s call it self.noise) of the same size as the input tensor. 2. Fill self.noise with 0s or 1s taken from a Bernoulli distribution. 3. Perform an element-wise multiplication between self.noise and the original input (see https://github.com/torch/nn/blob/master/Dropout.lua Lns 25 26 and 30). In the case of Shakeout the Bernoulli distribution is replaced by eq (1) in the Shakeout paper. Shake-Shake, on the other hand, multiplies the whole mini-batch tensor with just one scalar alpha_i (or 1-alpha_i). Applying Shake-Shake regularization at the \\u00ab\\u00a0Image\\u00a0\\u00bb level is slightly more complex but follows the same logic. Let\\u2019s imagine that the original input mini-batch is a 128x3x32x32 tensor. The first dimension \\u00ab\\u00a0stacks\\u00a0\\u00bb 128 images of dimensions 3x32x32. Inside the second stage of a 26 2x32d model, this tensor has been transformed into a 128x64x16x16 tensor. Applying Shake-Shake regularization at the \\u00ab\\u00a0Image\\u00a0\\u00bb level means slicing this tensor along the first dimension and, for each of the 128 slices, multiplying the jth slice (of dimensions 64x16x16) with a scalar alpha_i_j (or 1-alpha_i_j).\\n3. 
Forward - Backward: Shakeout keeps the same coefficients between the Forward and Backward passes whereas Shake-Shake updates them before each pass (Forward and Backward)\\n4. Number of flows: Shake-Shake regularization works by summing up 2 residual flows plus a skip connection whereas Shakeout only needs one flow\", \"question_for_the_reviewer\": \"I found out about Shakeout after submitting the extended abstract. If possible, I would like to ask the reviewer for his opinion on whether this paper must be added to the relevant work section. While I think it shares the idea of replacing bernoulli variables with scaling coefficients, the challenge I have is simply that the 3 pages limit makes it very difficult to add new information without removing text somewhere else...\"}",
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Approach: Have two residual pathways instead of one, and randomly average them. Backproping through them can use a different mixing coefficient.\", \"pros\": [\"Code is available\", \"Very strong results for CIFAR-10. However, I'm unsure if getting SOTA on CIFAR-10 means anything anymore.\"], \"cons\": [\"Unclear motivation, other than the desire to add noise. Especially with regards to the different backprop procedure (Shake-Shake)\", \"The method is not too novel, and just feels like another variation on Resnets.\", \"I think the results are good, but I'm hesitant to strongly endorse it because of the lack of motivation and minimal novelty.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Re:\", \"comment\": \"Thank you! I will do that.\"}",
"{\"title\": \"My comment exactly\", \"comment\": \"Thank you for addressing this. This will make the paper much more viable, as this is a common point one would ask.\\n\\nI still do want to ask if you have any explanation as to why choosing different probabilities for the backward pass (shake shake) results in better performance than the shake-keep setting. If this is the case, shouldn't we attempt to apply this technique on other 'pseudo ensemble' techniques such as classic dropout, stochastic depth and expect improved performance as well?\"}",
"{\"title\": \"Re: Good results, but the explanation is lacking\", \"comment\": \"Redundancy:\\nYou will find below the link to a table presenting the Mean Square Error between the weights of a convolutional layer in one branch and the weights of the same convolutional layer in the other branch. For readability, the MSEs were multiplied by 10E4. These results are for 2x32d E-E-B and 2x32d S-S-I models. As you can see the redundancy between the 2 branches is reduced by the introduced stochasticity.\", \"http\": \"//bit.ly/2nzjDyZ\", \"table_1\": \"If time allows, I propose to run 1 instance of each of the missing 2x64d models to make sure that there is no trend change. I will share them in this thread when available.\"}",
"{\"title\": \"Re:\", \"comment\": \"I am not assigned to this paper but I would suggest to mention the Shakeout work in one line/sentence and refer to the appendix where differences/similarities with the Shakeout are analysed.\"}",
"{\"title\": \"Shake - Keep - Image?\", \"comment\": \"I would like to see numbers for the currently missing from the table 'Shake - Keep - Image' experiment, which is a natural one to include and would directly address one of the main claims of the paper, namely that resampling the gating variable is useful and necessary for the increased test error performance.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for your confirmation! It is good to fix the problem before it propagates.\\nAs I said, the paper is great for the workshop.\"}",
"{\"title\": \"Re: Re: Re: Good results, but the explanation is lacking\", \"comment\": \"I thought about this for a while and ran the following test:\\n\\nFor each residual block, forward x_i through the residual branch 1 (ReLU-Conv3x3-BN-ReLU-Conv3x3-BN-Mul(0.5)) and store the output tensor in b1_i. Do the same for residual branch 2 and store the output in b2_i. Flatten these 2 tensors into vectors flat1_i and flat2_i. Calculate the covariance between each corresponding item in the 2 vectors using an online version of the covariance algorithm (see the last algorithm on this page https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Covariance). Repeat until all the images in the test set have been forwarded.\", \"the_results_can_be_found_here\": \"\", \"http\": \"//bit.ly/2myxMwe\"}",
"{\"title\": \"Table 1 - Missing 2x64d models\", \"comment\": \"The tests of 2 out of the 4 missing models are completed and you will find the results below (error rates at the last epoch):\", \"26_2x64d_s_k_b\": \"3.62%\", \"26_2x64d_e_s_i\": \"4.07%\\nThese results should be compared to a 26 2x64d E-E-B which obtains 3.76%.\\nWith the caveat that these are single tests, we can see a confirmation that Even-Shake doesn't work and that Shake-Keep produces a small improvement.\"}",
"{\"title\": \"Motivation and additional references\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your comments. I updated the paper and made the following changes:\\n1. Added a reference to the papers mentioned by Reviewer1\\n2. Moved the implementation details to the appendix\\n3. Added a section on motivation\\n\\nIf you wonder why these 3 papers were not included, the simple answer is that I did not know about them when I wrote the extended abstract. With hindsight, I understand why someone would draw a parallel to noise injection, but since I was looking for a more global effect (by global I mean \\u201cImage level\\u201d perturbations vs noise which is more local as it creates individual weight or feature perturbations), I never really explored the extensive dropout literature as much as I probably should have\\u2026 \\n\\nThe absence of a motivation section was simply due to the 3p constraint. I hope that moving the implementation details to the appendix is an acceptable fix.\\n\\nFor your information, I am currently running a couple of tests that should provide further hints as to what is happening under the hood. They should be completed within the next 48 hours.\"}",
"{\"title\": \"Re: Re: Good results, but the explanation is lacking\", \"comment\": \"Thanks for this reply, the effort generally applied to this work, and your honesty!\\n\\nI don't think the proposed experiment is a good measure of redundancy between representations. It could just be that adding stochasticity encourages larger weights (and thus larger MSE). Furthermore, a matching problem needs to be solved before it even makes sense to compare weights across different networks (see https://arxiv.org/pdf/1511.07543.pdf). I think covariance between pairs of neurons in a layer estimated on a sufficiently large dataset (1000 is probably enough) is a reasonable measure of the redundancy in one hidden representation. This is also nice because covariance can be compared between networks.\"}",
"{\"title\": \"Updated document\", \"comment\": \"The extended abstract was updated following the comments received. I added a placeholder for the other Image level tests. They should be completed within the next 2 weeks.\"}",
"{\"title\": \"Good results, but the explanation is lacking\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper introduces a particular type of randomness into ResNets activations and shows that this simple modification achieves state of the art performance on CIFAR-10. The idea is to replace each residual function with two residual functions then take a random convex combination of these representations. Doing this for each image during both the forward and the backward stages (using a newly sampled combination of representations for the backward stage) leads to a fairly small ResNet (26 layers) which achieves 2.71% error on CIFAR-10 and reduces the gap between train and test performance.\\n\\nThe idea is simple and produces nice, though somewhat incomplete results on CIFAR-10 (it would be nice to see Table 1 filled in completely). However, the motivation for this particular type of stochasticity is not well explained. It is related to similar work like Shakeout and Dropout, but it also leaves out other work which has generally shown that adding noise to activations or weights reduces overfitting [1, 2, 3]. Why should this particular type of noise be better than alternatives?\\n\\nOn one hand, it is good to continue the discussion about noise injection given novel performance. This paper adds one particularly effective instance of noise injection to that discussion, but it is not well motivated or understood. The ICLR workshop is a good opportunity to discuss reasons for this method's success.\", \"additional_question\": \"* How redundant are the two residual representations? Is redundancy increased or decreased by the additional stocasticity?\\n\\n\\n[1] An, Guozhong. \\\"The effects of adding noise during backpropagation training on a generalization performance.\\\" Neural computation 8.3 (1996): 643-674.\\n\\n[2] Blundell, Charles, et al. 
\\\"Weight Uncertainty in Neural Network.\\\" Proceedings of The 32nd International Conference on Machine Learning. 2015.\\n\\n[3] Neelakantan, Arvind, et al. \\\"Adding gradient noise improves learning for very deep networks.\\\" arXiv preprint arXiv:1511.06807 (2015).\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Re: Shake - Keep - Image?\", \"comment\": \"Thank you for your interest. I will definitely add these experiments.\\nI would have liked to duplicate all the tests done at the Batch level before the submission deadline but the idea to apply this method at the Image level occured to me too late for that.\"}",
"{\"title\": \"26 2x32d S-K-I\", \"comment\": \"I tested one 26 2x32d \\\"Shake-Keep-Image\\\" and the error rate for this model is 4.06%. This is basically the same as for a 26 2x32d \\\"Shake-Keep-Batch\\\".\", \"links_to_the_training_curves\": \"\", \"26_2x32d_s_k_i_vs_26_2x32d_s_k_b\": \"http://bit.ly/S-K-I_vs_S-K-B\", \"26_2x32d_s_k_i_vs_26_2x32d_s_s_i\": \"http://bit.ly/S-K-I_vs_S-S-I\\n\\nI will update the paper once the other 2 runs are complete.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Re:\", \"comment\": \"It has been argued that residual blocks refine/improve on their inputs (\\\"Highway and Residual Networks learn Unrolled Iterative Estimation\\\" Greff et al. (2016)). I like this view and my feeling is that the residual blocks provide \\\"light touches\\\" rather than \\\"heavy duty corrections\\\". If this idea is correct, then altering residual branches will only have a small impact (if done properly). If alpha_i = 0.3, the output is not the same as if alpha_i = 0.5 but it is probably not too far off and the network is still able to learn.\\nIt looks like the same concept applies for the backward pass. By that I mean that the gradients are slightly modified but are still \\\"plausible\\\" gradients.\\nWrt other regularization techniques, someone would simply have to try.\"}",
"{\"title\": \"Re: Review\", \"comment\": \"Thank you for your review.\\nI hope that the motivation section I added alleviates some of your concerns.\"}",
"{\"title\": \"Re: Re: Re: Good results, but the explanation is lacking\", \"comment\": \"Thank you for your help and advice. I will try to measure this before the Friday deadline.\"}",
"{\"title\": \"Top 1 error or best top 1 error?\", \"comment\": \"Please clarify whether the results given in Table 1 correspond to Top 1 of the last epoch or best top 1 error.\\nThe reason why I am asking is \\nprint(string.format(' * Finished top1: %6.3f top5: %6.3f', bestTop1, bestTop5)) \\nin the end of the main.lua file\\nThe difference between the two can be in order of 0.2% or so. \\nThe best top 1 error cannot be used because one cannot select networks based on the *test* set.\\n\\nIf it is the case, then the camera-ready version should fix it.\\nI believe that the paper is the best fit for the workshop track.\"}",
"{\"title\": \"Re: Top 1 error or best top 1 error?\", \"comment\": \"Thank you for spotting this. You are right, this line of code is from fb.resnet.torch and since fb.resnet.torch is the official ResNet implementation, I (wrongly) thought that this was the way it was calculated in the ResNet papers. I think that this section of the code was designed for the Imagenet experiments not the CIFAR ones. Changing from the best error rate to the last error rate moves the average of the 2x96d S-S-I models from 2.72% to 2.86%. It also moves the average of the 2x96d E-E-B models from 3.44% to 3.58%. The delta between the largest S-S-I and E-E-B models stays at 0.72%. I will update all the numbers in Table 1 in the next couple of days.\"}",
"{\"title\": \"Additional tests\", \"comment\": \"As mentioned in my previous comment, you will find below a couple of additional tests results.\\nAll tests below were performed on 26 2x32d models at the Image level and are compared to a 26 2x32d Shake-Keep-Image model.\\n\\nThe first test (method 1) is to set beta_i_j = 1 - alpha_i_j. As you can see in this figure, the effect is quite drastic and the training error stays really high. Something seems to prevent the network from converging.\\nhttp://bit.ly/2m4mgfY\\n\\nJust as in the paper, training curves use a dark shade and test curves use a light shade.\\n\\nMethod 2: If alpha_i_j < 0.5, beta_i_j = rand(0,1)*alpha_i_j. If alpha_i_j >= 0.5, beta_i_j = rand(0,1)*(1-alpha_i_j) + alpha_i_j\\nMethod 3: If alpha_i_j < 0.5, beta_i_j = rand(0,1)*(0.5-alpha_i_j) + alpha_i_j. If alpha_i_j >= 0.5, beta_i_j = rand(0,1)*(alpha_i_j-0.5) + 0.5\\nMethod 4: If alpha_i_j < 0.5, beta_i_j = rand(0,1)*(0.5-alpha_i_j) + 0.5. If alpha_i_j >= 0.5, beta_i_j = rand(0,1)*(0.5 - (1-alpha_i_j)) + (1 - alpha_i_j)\\nMethod 5: If alpha_i_j < 0.5, beta_i_j = rand(0,1)*alpha_i_j + (1-alpha_i_j). If alpha_i_j >= 0.5, beta_i_j = rand(0,1)*(1-alpha_i_j)\\n\\nA graphical illustration is probably easier to understand and you can find one here:\\nYou can find the training curves here:\\nas well as a focus on the lower section here:\\n\\nWhat can be seen is that:\\n1. The regularization effect seems to be linked to the relative position of beta_i_j compared to alpha_i_j\\n2. The further away beta_i_j is from alpha_i_j, the stronger the regularization effect\\n3. There seems to be a jump in regularization strength when 0.5 is crossed\\n4. Even if the training curves of Method 2 and of a S-K-I model are different, their test curves overlap perfectly. The training curves actually reach zero around the same time. The training curve of Method 3 is lower for a long time but reaches 0 later which in turn produces a better test error. It would be interesting to understand what leads to this inversion.\\n\\nI find this new information interesting as it could help adjust the strength of the effect and perhaps improve the error rate further.\"}",
"{\"title\": \"Shakeout\", \"comment\": \"The results are impressive!\\nPlease discuss how the proposed approach is similar/different to \\n\\\"Shakeout: A New Regularized Deep Neural Network Training Scheme\\\", AAAI-2016, by Guoliang Kang, Jun Li, Dacheng Tao\\nhttp://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/11840/11800\"}"
]
} |
|
BJMO1grtl | Neural Expectation Maximization | [
"Klaus Greff",
"Sjoerd van Steenkiste",
"Jürgen Schmidhuber"
] | We introduce a novel framework for clustering that combines generalized EM with neural networks and can be implemented as an end-to-end differentiable recurrent neural network. It learns its statistical model directly from the data and can represent complex non-linear dependencies between inputs. We apply our framework to a perceptual grouping task and empirically verify that it yields the intended behavior as a proof of concept. | [
"Theory",
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=BJMO1grtl | https://openreview.net/forum?id=BJMO1grtl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"rkzvuFpjl",
"Hy4PvEkjg",
"SyQ7U_Uil",
"Sk7a55gil"
],
"note_type": [
"comment",
"official_review",
"comment",
"official_review"
],
"note_created": [
1490028634503,
1489090396211,
1489565211178,
1489181371397
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper152/AnonReviewer2"
],
[
"~Klaus_Greff1"
],
[
"ICLR.cc/2017/workshop/paper152/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Nice idea but details are missing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose a clustering method based on a mixture distribution of components which are parameterized by neural networks. The system finds a clustering using generalized EM updates. The parameters are trained by propagating the likelihood gradient through these EM updates. The authors also propose a less principled but potentially more flexible version of the model in which the EM updates are replaced with more conventional recurrent neural network state updates.\\n\\nTo my knowledge, the proposed combination of models and trained inference method are new and an interesting approach to the grouping problem. The idea to backpropagate through inference updates is not new by itself and it would have been nice to see some references to earlier work in this area but obviously there was not much space available for that. \\n\\nI find it very unfortunate that there are not more details about the empirical work. It is not clear to me how many EM steps are used. While an extended abstract may not need to be detailed enough for an exact replication of the research, details like this are in my opinion necessary to at least get a rough idea about the complexity and practical value of the method.\", \"pros\": [\"Nice new approach to the grouping problem.\", \"Results seem to compare favorably to prior work.\"], \"cons\": [\"Lack of implementation details.\", \"Limited evaluation and analysis of the methods.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Added details\", \"comment\": \"Thank you for the feedback. We agree on your criticism and have now added a section in the appendix that describes the experimental setup in more detail.\\nIn particular, we've used K=3 and 10 EM steps for all experiments.\"}",
"{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": [\"The paper concerns the task of perceptual grouping (essentially clustering pixels in an image; pixels in the same cluster are meant to form meaningful perceptual constructs, hopefully relevant shapes; a kind of unsupervised segmentation task). The paper proposes a generative model for images in which a neural network function F encodes the dependencies between pixels. Each cluster has a different latent variable theta, such that a mixture of F(theta) with Gaussian noise makes up the final image. The important part in this generative model (relevant to the task) is to be able to infer to which clusters the pixels should be assigned and that the clusters be meaningful. The model is trained with a loose approximation of EM which takes the form of backprop through time in a recurrent neural network.\", \"An assessment of novelty, clarity, significance, and quality.\", \"- the method seems novel\", \"- the paper is not very clear and must be read many times to gather information about what the model is which is all over the place. Even then a lot remains unclear.\", \"- negative results on real dataset suggest the method is not significant in its current form\", \"- A list of pros and cons (reasons to accept/reject).\"], \"pros\": [\"the method seems new\", \"implementation of a loose EM approximation with a RNN seems new and somewhat interesting\"], \"cons\": [\"not clear but maybe this cannot be helped due to the 3 page limit.\", \"results seem encouraging on toy dataset (method seems to cluster pixels based on shape) but not on real dataset (pixels are clustered based on colour which is exactly what the authors did not want). This undermines the whole point of the method which is to capture dependency between pixels. This makes one doubt the applicability of the method to pixel grouping let alone applicability to anything else.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
r1HUjsVFg | Consistent Alignment of Word Embedding Models | [
"Cem Safak Sahin",
"Rajmonda S. Caceres",
"Brandon Oselio",
"William M. Campbell"
] | Word embedding models offer continuous vector representations that can capture rich contextual semantics based on their word co-occurrence patterns. While these word vectors can provide very effective features used in many NLP tasks such as clustering similar words and inferring learning relationships, many challenges and open research questions remain. In this paper, we propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space leveraging carefully generated synthetic data points. This generative process is inspired by the observation that a variety of linguistic relationships is captured by simple linear operations in embedded space. We demonstrate that our approach can lead to substantial improvements in recovering embeddings of local neighborhoods. | [
"Natural language processing",
"Transfer Learning",
"Unsupervised Learning",
"Applications"
] | https://openreview.net/pdf?id=r1HUjsVFg | https://openreview.net/forum?id=r1HUjsVFg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"Skl9lqSjg",
"Syg6VOYpje",
"ByGNizfig",
"B1VajElil"
],
"note_type": [
"official_review",
"comment",
"comment",
"official_review"
],
"note_created": [
1489506440077,
1490028597413,
1489279786249,
1489157052110
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper98/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Cem_Safak_Sahin1"
],
[
"ICLR.cc/2017/workshop/paper98/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Interesting idea, lacking in clarity\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper introduces a method for aligning the representations of different word embedding models by leveraging synthetic data points. Even after reading the paper several times, and reading the authors' response to another reviewer's questions, I still struggle to say how exactly the authors achieve this alignment and precisely what problem the technique is supposed to resolve.\\n\\nIf I understand correctly, synthetic data points are generated by combining (with coefficient +/- 1) embeddings from within the same neighborhood, and if the result also falls within the neighborhood it is retained as a \\\"latent word\\\". These latent words help anchor the alignment process. Intuitively, having more points to anchor a neighborhood makes some sense, but I don't understand the details of how the alignment is actually being implemented, so it is hard to say anything more concrete.\\n\\nIt seems like the major novelty of the proposed approach is in the generation of the latent words. So, in addition to a more detailed explanation of the alignment process, I would have liked to see more analysis or discussion related to the methodology for choosing the latent words. In particular, an experiment in which the latent words are just random points within the neighborhood (rather than linear combinations of existing embeddings) would be very important.\\n\\nOverall, I think this paper needs considerable work to improve the clarity of exposition, and could use some additional experiments to support the proposed method for choosing latent words.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks for your review and your time. Our response for the reviewer\\u2019s comments is below. In general, we agree that the concepts questioned by the reviewer could be explained better in plain English (rather than defined as equations). However, the 3-page limit made it a challenging task to include both. We would be happy to offer a revised version.\\n\\nA1. The reviewer expressed confusion about our definition of \\u201cunstable\\u201d / \\u201cinconsistent\\u201d word2vec models. What we mean by unstable or inconsistent is that we observe word2vec models to not always preserve local neighborhood structure even in learning scenarios when we should expect them to. For some given parameter k, that controls what we consider as local, we expect k semantically similar words to be consistently embedded close to each other in high dimensional space. As presented in Fig. 1, word2vec can generate inconsistent embeddings of similar words even for the same input data and the same model with the same training parameters. Note that this inconsistency can be due to various reasons such as usage of more than one workers (i.e., multi-threading) during modeling (e.g., down sampling, size of the input dataset, etc.). We wanted to highlight in a quantitative way how different these embeddings can be. Furthermore, the inconsistency might be a really important issue if a downstream task requires real-time stream output from the word embedding model (e.g., collecting data from chat rooms in real time and training word2vec models for some specific down stream task.)\\n\\nA2. We agree with the reviewer that the definitions for the \\u201cfixed local neighborhood\\u201d and \\u201cneighborhood overlap\\u201d could be explained in a much simpler way than the definitions we give at the end of page 2. We would be happy to do so in a revised version of the paper. By a fixed local neighborhood we mean the word and the k-nearest neighboring words in the representation generated by the word2vec model. Note that k is a given parameter. By neighborhood overlap we mean the fraction of common words between the neighborhoods of the same word in two different representations generated by two word2vec models.\\n\\nA3. We agree with the reviewer about the large body of work around bilingual / multilingual word embedding and their alignments. Our motivation for this paper is not to introduce manifold alignment as a new methodology for improving word embedding problems. Our contribution is to suggest a data imputation technique that together with alignment techniques leads to word representations where similar words are consistently placed near each other. Because of this, we believe that our method can further improve the alignment quality and other downstream tasks. Another important distinction of our method is that our method is unsupervised in contrast with other existing methods that are either supervised or semi-supervised.\\n\\nA4. The reviewer expressed confusion about the definition of latent words. We define a latent word as synthetically generated points in high dimensional space (page 1). We use the coordinates of existing data points (words) to generate the coordinates of these synthetic data points. The formal definition of a latent word is given at page 3 as \\u201cA latent word $w^i_*$ can be generated by $w^i_* = P(\\u03b1_n \\u00d7 w^i_{r_n})$, where $\\u03b1_n$ is a randomly chosen integer from $[-1, +1]$ and $w^i_{r_n} \\u2208 n^i_{\\\\epsilon}|_{w^i_l}$.\\u201d We use these synthetically generated words to increase the quality of alignment. We agree that using \\u201clatent\\u201d might be confusing and would be happy to change this reference in a revised version of our paper.\\n\\nA5. Reviewer asks about the goal of our paper. Our goal is to improve the quality of alignment for two word embedding models by carefully injecting words in a similar fashion that statistical data imputation techniques handle missing data under sampled datasets. We provide initial, very promising results that such an approach can improve the quality of alignment of different models and because of this we believe that our method would also improve other down stream tasks.\"}",
"{\"title\": \"Not sure what is being proposed\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper argues that word embedding algorithms are \\\"unstable\\\" and \\\"inconsistent\\\", although I am not sure what is meant by these terms, and proposes an alignment solution, although I'm not sure how this solution really works. The authors state:\\n\\n\\\"For a fixed local neighborhood size, we re-train the same model, using the same parameters, on the same training dataset. We then measure model stability as a function of neighborhood overlap across consecutive re-trained model instances.\\\"\\n\\nI'm not sure what is meant by a \\\"fixed local neighbourhood\\\" (do you mean a fixed region in the induced embedding space? how is this chosen?). Nor do I understand what \\\"neighborhood overlap\\\" means (do you mean the fraction of overlapping words found in the same fixed regions in the two different embedding spaces?) \\n\\nMy best guess is that the authors train two word embedding models initialized differently on the same data, and then measure the fraction of common words that are embedded in the same \\\"local neighborhood\\\", i.e. same volume of the induced embedding space.\\n\\nThat different models learn different embeddings which are not aligned is not at all a surprising finding. Furthermore, that these embedding spaces of different models can be aligned to transfer information from one space to the other is also not surprising. This has motivated a large amount of work on bilingual / multilingual word embeddings (see [1] for just one example), and semi-supervised word embeddings (see [2] for one example).\\n\\nI was hoping that the mention of \\\"latent words\\\" meant that the authors had some kind of latent-variable approach to automatically learn the alignments, but I couldn't understand how the \\\"latent words\\\" are generated nor how they are used in the alignment process?\\n\\nOverall, I am not sure what problem the authors are solving and I'm not sure what their proposed solution really involves, and lastly, I'm not sure in what way the intrinsic evaluation results shown are meant to be interpreted as a measure of success (does the proposed method actually improve results in a real task?). \\n\\n[1] \\\"BilBOWA: Fast Bilingual Distributed Word Representations without Word Alignments\\\", Gouws et al., ICML 2014, https://arxiv.org/pdf/1410.2455.pdf.\\n[2] \\\"Retrofitting Word Vectors to Semantic Lexicons\\\", Faruqui et al., 2014, https://arxiv.org/pdf/1411.4166.pdf.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJNcU0VYx | Multimodal Noise and Covering Initializations for GANs | [
"David Lopez-Paz",
"Maxime Oquab"
] | This note describes two simple techniques to stabilize the training of Generative Adversarial Networks (GANs) on multimodal data. First, we propose a covering initialization for the generator. This initialization pre-trains the generator to match the empirical mean and covariance of its samples with those of the real training data. Second, we propose using multimodal input noise distributions. Our experiments reveal that the joint use of these two simple techniques stabilizes GAN training, and produces generators with a richer diversity of samples. Our code is available at http://pastebin.com/GmHxL0e8. | [
"Deep learning",
"Unsupervised Learning",
"Optimization"
] | https://openreview.net/pdf?id=HJNcU0VYx | https://openreview.net/forum?id=HJNcU0VYx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"ryhrdY6ix",
"SyT6fBXjg",
"r1hROL7jl"
],
"note_type": [
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028611632,
1489355461357,
1489361108237
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper120/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper120/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper presents two tricks to prevent the mode-missing behavior of GANs. The first trick is to use a multimodal noise at the input in order to make it easier for the generator to generate multi-modal data. The second trick is to initialize the generator with a \\\"covering initialization\\\" so that the mean and covariance of the generated and real data match. One of the reasons that GAN training fails is that typically the support of the real and generated distributions are disjoint, in which case there will be a perfect discriminator between them which makes the training dynamic unstable. It is argued in the paper that the second trick addresses this problem. It is further shown that both of these tricks are necessary for covering all the modes of a mixture of Gaussian toy dataset with a GAN.\\n\\nWhile both of these tricks do make sense, I am not convinced that these tricks will actually resolve the mode-missing behavior of GANs in high-dimensional data. The GAN training dynamic of a 2D toy dataset is very different from that of a 1000 dimensional dataset, and it is likely that heuristic tricks like these do not generalize to high dimensional datasets. I think the quality of this paper can be substantially improved, if the authors show that these tricks will actually help on a more realistic dataset.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary: This paper proposed two techniques to help train GANs, in particular, by improving their stability. Multimodal noise essentially replaces the typically Standard Normal distributions with a multimodal distribution. Although theoretically unimodal noise can allow modeling any data distribution, but it's believed that it would be easier to model real-world data using multimodal noise distributions since real-world data also resembles islands separated by low-probability regions. Covering initialization pretrains the generator such that the mean and covariance of the model distribution and data distribution matches. This avoids mode collapse by avoiding initializing the generator as a many-to-one function (which is harmful). The experiments are shown on a low-dimensional toy dataset.\", \"novelty\": \"The ideas described have not been described in prior work, AFAIK. But, the ideas have been well-known by practitioners of GANs, but never studied properly, or found to be significantly useful in large-dimensional datasets and tasks.\", \"clarity\": \"The paper is written clearly.\", \"significance\": \"Improving techniques for training GANs is a significant problem.\", \"quality\": \"Overall, the lack of experimentation on a high-dimensional dataset (even MNIST) hurts the quality of the paper.\", \"pros\": \"The main reason for accepting the paper would be to have a written description of these two techniques, so that these ideas can be discussed and cited.\", \"cons\": \"The paper provides little to no evidence on whether these techniques are actually useful in real high-dimensional tasks.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
r1rqJyHKg | Efficient Sparse-Winograd Convolutional Neural Networks | [
"Xingyu Liu",
"Song Han",
"Huizi Mao",
"William J. Dally"
] | Convolutional Neural Networks (CNNs) are compute intensive which limits their application on mobile devices. Their energy is dominated by the number of multiplies needed to perform the convolutions. Winograd’s minimal filtering algorithm (Lavin (2015)) and network pruning (Han et al. (2015)) reduce the operation count. Unfortunately, these two methods cannot be combined—because applying the Winograd transform fills in the sparsity in both the weights and the activations. We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity. First, we prune the weights in the "Winograd domain" (after the transform) to exploit static weight sparsity. Second, we move the ReLU operation into the "Winograd domain" to improve the sparsity of the transformed activations. On CIFAR-10, our method reduces the number of multiplications in the VGG-nagadomi model by 10.2x with no loss of accuracy. | [
"Deep learning"
] | https://openreview.net/pdf?id=r1rqJyHKg | https://openreview.net/forum?id=r1rqJyHKg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"H1x-pNoOf",
"HyZUuKTjg",
"B1k94_A5l",
"BkMwvOejg"
],
"note_type": [
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1520286968176,
1490028617160,
1489040518875,
1489172314244
],
"note_signatures": [
[
"~Xingyu_Liu1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper129/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper129/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"New Version\", \"comment\": \"The new version of our paper was accepted into ICLR 2018. It can be found at https://openreview.net/forum?id=HJzgZ3JCW. We also open-source our code at https://github.com/xingyul/Sparse-Winograd-CNN. The arXiv version is at https://arxiv.org/abs/1802.06367.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"simple yet quite useful approach for reducing the number of multiplications required in a CNN.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"Introducing the advantages of weight pruning and sparsity, in decreasing number of multiplications, into the Winograd domain results in a larger decrease in number of multiplications required per forward pass.\\nThe authors propose training the kernel in transform domain and also pruning the weights in the same thereby enabling Winograd transform to be used with sparse weights. \\nThe Relu is now applied after Winograd transform for not losing the advantage of operating on sparse activations.\\n\\nExperimental validation supports the authors claim of decrease in number of multiplications by using proposed approach.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice trick to reduce the number of multiplications required to evaluate a CNN\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper presents two tricks to reduce the number of multiplications needed to compute convolutions with small kernels. First, the trainable filters are stored directly in the Winograd domain, rather than stored in feature domain and then transformed. This directly impacts the number of multiplications that need to be performed. Second, the Winograd transformation and ReLU are swapped for network activations bringing even more sparsity into the Winograd domain. Oddly enough this change doesn't affect the performance of the networks, even though they no longer perform convolutions followed by ReLUs. The two tricks reduce the number of multiplications 2.2 times over a pruned network, while losing little accuracy.\\n\\nThe paper is a nice starting point for investigation of possible speedups and efficient implementation of pruned networks. The empirical evaluation is slightly underwhelming, but sufficient for a workshop paper. I would like to see more evidence that the ReLU trick works (it drastically changes the semantics of the network). I would also want to see baselines such as smaller pruned networks.\", \"nit\": \"I found Figure 1 hard to read and would prefer to have it accompanied by the equations for the computations performed.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJFpDxfFl | Deep Adversarial Gaussian Mixture Auto-Encoder for Clustering | [
"Warith Harchaoui",
"Pierre-Alexandre Mattei",
"Charles Bouveyron"
] | Feature representation for clustering purposes consists in building an explicit or implicit mapping of the input space onto a feature space that is easier to cluster or classify. This paper relies upon an adversarial auto-encoder as a means of building a code space of low dimensionality suitable for clustering. We impose a tunable Gaussian mixture prior over that space allowing for a simultaneous optimization scheme. We arrive at competitive unsupervised classification results on hand-written digits images (MNIST) that is customarily classified within a supervised framework. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=rJFpDxfFl | https://openreview.net/forum?id=rJFpDxfFl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"r1LfdKaoe",
"S1zTPj5qx",
"rk_MvSwql",
"SkP4puSix"
],
"note_type": [
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028558259,
1488791482279,
1488570127858,
1489501486688
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"~Pierre-Alexandre_Mattei1"
],
[
"ICLR.cc/2017/workshop/paper27/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper27/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Answer to AnonReviewer2's review\", \"comment\": \"Thank you for your careful reading of our manuscript.\\n\\nAs you point out, our main contribution is to provide a simple way to automatically choose the prior distribution of an adversarial auto-encoder (AAE) for the purpose of clustering.\\n\\nHowever, we want to stress that, within the original AAE framework of Makhzani et al., no automatic way of tuning this prior is provided. Indeed, the prior is arbitrarily designed to take into account the problem at hand. For example, it is assumed that, for MNIST, the prior is composed of a standard multivariate Gaussian modeling the style of the digits concatenated with a fixed categorical distribution modeling the classes.\\n\\nWe believe that prior design is a difficult task and that automatic procedures should be studied. Our work, by simply assuming that the prior belongs to a clustering-oriented parametric family (GMM), is a step in that direction.\\n\\nNote that Makhzani et al. also cluster MNIST. They select 16 clusters (which should lead to a smaller clustering error than choosing 10 clusters) and obtain an accuracy of 90.45%, which is substantially smaller than our approach. This suggests the potential superiority of automatic priors over handcrafted priors.\"}",
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"CONTRIBUTIONS\\n\\nThis paper applies adversarial auto-encoders to unsupervised classification of MNIST digits.\\n\\nThe proposed model, named DAC for \\\"deep adversarial clustering\\\", optimizes a loss with three components:\\n- A reconstruction term encourages reconstructions to be close to the original.\\n- A log-likelihood term on the latent representation's prior distribution forces the prior to track the encoder's distribution.\\n- An adversarial loss forces the encoder to have a distribution close to the prior distribution.\\n\\nThe paper claims state-of-the-art on the unsupervised MNIST clustering task.\\n\\nNOVELTY, CLARITY, SIGNIFICANCE, QUALITY\\n\\nThe proposed method is well-explained.\\n\\nIt seems to me that the proposed loss is almost identical to an adversarial auto-encoder loss where the prior would be learned, the difference being that here the prior is learned through maximum likelihood as opposed to using the adversarial term to guide the prior. Is there any reason to choose one over the other?\\n\\nIn my opinion, this work lacks the novelty sought by this workshop: unless something escapes me, the main point of the paper is that a very straightforward adaptation of adversarial auto-encoders performs well at clustering MNIST digits.\\n\\nPROS (+), CONS (-)\\n\\n+ Proposed method is well-explained\\n- Paper offers little novelty\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes to use adversarial autoencoders with a Gaussian mixture prior on the latent space for unsupervised clustering. The parameters of the Gaussian mixture are also optimized by maximum likelihood. Training the prior by maximum likelihood while training the rest of the model with the AAE objective appears to be the main novelty of the work.\\n\\nThe use of a tuned prior appears to give significantly better results on unsupervised clustering than the previous work of Makhzani et al on adversarial autoencoders. Training the prior by maximum likelihood (rather than using the adversarial loss) might be helpful in avoiding optimization difficulties, but there are no experiments exploring this.\\n\\nThe paper shows good results on unsupervised clustering of MNIST digits, but the proposed algorithm is a standard adversarial autoencoder with a learned prior term. The main concern is that the algorithm does not seem sufficiently novel to justify inclusion in the workshop.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
Bk9mxlSFx | Neural Combinatorial Optimization with Reinforcement Learning | [
"Irwan Bello",
"Hieu Pham",
"Quoc Le",
"Mohammad Norouzi",
"Samy Bengio"
] | We present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to $100$ nodes. These results, albeit still quite far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems. | [
"neural combinatorial optimization",
"reinforcement",
"combinatorial optimization problems",
"neural networks",
"recurrent neural network",
"framework",
"reinforcement learning",
"salesman problem",
"tsp",
"set"
] | https://openreview.net/pdf?id=Bk9mxlSFx | https://openreview.net/forum?id=Bk9mxlSFx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"Bybm2sljl",
"By8B65xjx",
"rk7Put6jx"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489185817030,
1489182014475,
1490028635251
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper153/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper153/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Neural networks for NP-hard problems\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper is basically pointer networks but with reinforcement learning. The reward is the negative tour length.\\n\\nI think this approach is superior to the original pointer networks, which assumed a supervised signal. Since we're working with NP-hard problems, this is difficult to obtain.\", \"pros\": [\"Good ablation study. I particularly liked the Active Learning ablation, which tries to train a network from scratch for one instance of the problem.\", \"Results on toy TSP problems (lengths 25, 50, 100) are near optimal.\", \"Authors recognize the limitations of their approach w.r.t. order of magnitude worse running time\"], \"cons\": [\"Unclear how well this approach will scale up to non-toy scales\", \"Would like to have seen other NP-hard problems\", \"This paper is a good fit for the workshop.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This is a well-written and interesting paper.\\nCarrying out inference for the TSP problem with the REINFORCE algorithm makes a lot more sense.\\nHowever, I feel that applying an LSTM as the encoder is still not perfect.\\nAs an extension of the pointer network to this problem, the paper could have explored the model design or combinations of heuristics further, instead of simply reusing the ptrNet structure with the REINFORCE algorithm.\\nThough I would be conservative about using a vanilla pointer network on combinatorial optimization problems, the paper gives enough insights and experiments showing the significance of applying neural networks to classical problems.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
Skc-Fo4Yg | Variational Intrinsic Control | [
"Karol Gregor",
"Danilo Jimenez Rezende",
"Daan Wierstra"
] | We introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. Both algorithms also yield a tractable and explicit empowerment measure, which is useful for empowerment maximizing agents. Furthermore, they scale well with function approximation and we demonstrate their applicability on a range of tasks. | [
"Unsupervised Learning",
"Deep learning"
] | https://openreview.net/pdf?id=Skc-Fo4Yg | https://openreview.net/forum?id=Skc-Fo4Yg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"H1pE_tpjx",
"Hk-UEczjl",
"HyrZDnR5g"
],
"note_type": [
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028596653,
1489310793056,
1489057533458
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper97/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper97/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Great approach, promising results, space too limited for proper presentation\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper presents an interesting variational bound on the computationally intractable empowerment criterion.\\nIt then gives a simple algorithm for learning options to maximize meta-controller empowerment, in the absence of an external reward signal.\\nThis approach seems novel, insightful and likely impactful, and the results look promising.\\n\\nThe first part of the paper is very clear, principled and promising.\\nStarting from the right part of Figure 1, through Algorithm 2, to some of the results in Figure 2, the descriptions are vague or missing.\\n\\nWhile the lower bound is computationally tractable to update iteratively, the convergence properties are unclear. Algorithm 1 is attempting to estimate an informational quantity, and these require notoriously high sample complexity (Paninski, 2003).\", \"pros\": [\"Principled approach\", \"Computationally tractable lower bound\", \"Promising results\"], \"cons\": [\"Unclear convergence properties\", \"Figure 1 (right part) and Algorithm 2 are inscrutable (with undefined notation)\", \"Important details are missing of the experiments in Figure 2 (architectures, number of iterations, etc.)\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Tractable objective, options embeddings\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This paper tackles the options discovery problem through the notion of\\n\\\"empowerment\\\", formalized in information-theoretic terms. The paper builds \\non similar recent ideas, such as Mohamed & Rezende (2015), but proposes\", \"a_tractable_objective\": \"a variational lower bound on the\\nmutual information between options and the set of reachable states upon termination \\nof that option. A policy gradient approach is used to optimize this objective. \\n\\nWhile the idea of using policy gradient methods for options discovery has \\nrecently been explored by other authors, one of the contribution of this particular\\npaper is the definition of a new objective, beyond the usual expected sum of \\ndiscounted rewards. In fact, as shown in Bacon & al. (2017), the expected discounted\\nreturn is not sufficient to guarantee temporally extension and regularization\\nmight have to be employed. The proposed variational lower bound could provide \\nsuch regularization. \\n\\nAnother contribution of this paper is the idea of generalizing options \\nfrom discrete entities to \\\"embeddings\\\" aka. families of options. This is a \\ngeneralization which some other authors have also recently started adopting\\n(cf. Schaul & al (2015), Borsa & al. (2016), Oh & al. (2017), Vezhnevets & al. (2017))\\nbut I appreciate that this paper develops it more clearly in the text.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SkaxnKEYg | Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols | [
"Serhii Havrylov",
"Ivan Titov"
] | Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop a communication protocol necessary to succeed in this game. We require that messages they exchange, both at train and test time, are in the form of a language (i.e. sequences of discrete symbols). As the ultimate goal is to ensure that communication is accomplished in natural language, we perform preliminary experiments where we inject prior information about natural language into our model and study properties of the resulting protocol. | [
"Natural language processing",
"Deep learning",
"Multi-modal learning",
"Games"
] | https://openreview.net/pdf?id=SkaxnKEYg | https://openreview.net/forum?id=SkaxnKEYg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"H19SnLWse",
"HkdzZ_Hsx",
"HyLswwKjl",
"BJ54dtpix",
"BJn1-sgje",
"ByDGJgEil",
"SyHbXgGig"
],
"note_type": [
"comment",
"comment",
"comment",
"comment",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1489230913981,
1489498383931,
1489758110082,
1490028593579,
1489182947956,
1489399566616,
1489269500893
],
"note_signatures": [
[
"~Serhii_Havrylov1"
],
[
"~Serhii_Havrylov1"
],
[
"~Serhii_Havrylov1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper92/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper92/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper92/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Re: No Title\", \"comment\": \"Thank you very much for your review and feedback. We would like to comment on a couple of your remarks and clarify points which we think were misunderstood.\\n\\n>> The fact that a very similar paper has already been accepted in the same venue takes away some of the novelty points. \\n\\nIndeed, the setup is inspired by Lazaridou et al., 2017. Though conceptually it seems natural to go from symbols to sequences of symbols, in practice, it is not straightforward to make such an approach scalable and efficient. In fact, though several neural network approaches have been proposed for inducing protocols consisting of single symbols (including Lazaridou et al; see the paper for reference), we believe we are the first to generalize the set-up to sequences of symbols and also to show that using sequences results in more efficient communication than using single symbols. Also, apart from the setting, our method and that of Lazaridou et al. are very different. \\n\\n\\n>> the authors switch from using 1-hot symbols (and thus RL) to using Gumbel-softmax distribution\\n\\nWe would like to emphasize that in the proposed method one-hot symbols are still used during training and testing phases. During training and testing, symbols in messages are generated from Categorical distribution (Gumbel-argmax). The fact that we have used neither continuous messages nor RL is another aspect which differentiates us from previous work on multi-agent protocol induction.\\n\\n>> (at this point it's not entirely clear to me whether the authors trained the sender on caption generation or the receiver on caption retrieval)\\n\\nIndeed, this kind of grounding is possible, but, in the proposed approach, we neither trained the Sender for the caption generation task nor trained the Receiver for caption retrieval. Our grounding process consists of imposing a prior on the communication protocol q(m|t). 
Minimizing the Kullback-Leibler divergence from the natural language to the learned protocol KL[q(m|t)||p_NL(m)] favors generated messages to have a high probability according to the distribution p_NL(m) (natural language) but at the same time should have high entropy. \\n\\nIn other words, though the word \\u2018red\\u2019 may not refer to \\u2018red\\u2019 in the protocol, the goal was to ensure that statistical properties of the protocol are similar to these of the natural language, and see what effect it would have on the communicative success. In hindsight, maybe we should not have referred to it as \\u2018grounding\\u2019, as we now realized it is potentially misleading.\\n\\n>> Moreover, the fact that the communicative success with the grounding task decreases so much hints that the proposed way of grounding is not an effective one ... Perhaps, the authors need to look into the strength of the regularizer as it seems to be taking over.\\n\\nIn our case, we tested a hypothesis whether favoring \\u201cnaturalness\\u201d of the protocol makes it more efficient. It turns out the answer is no. Also, we tested, whether agents would start using, e.g., nouns as nouns, adjectives as adjectives, without us imposing stricter forms of supervision (e.g. training the Sender for caption generation).\\n\\nIt worth mentioning that Imaginet establishes an upper bound for any communication protocol that looks like a language from the MSCOCO dataset. Any protocol that will obey the proposed KL constraint (will look like MSCOCO language) will have worse performance than the Imaginet model or equal. Proposed model has comparable performance to the upper bound (Imaginet). \\n\\nBy decreasing the impact of the KL regularizer, the communication protocol will less resemble natural language, and that contradicts the goal of the grounding process. 
Also, one should bear in mind that MSCOCO descriptions were not generated for the referential game, that is why they can be not very discriminative.\\n\\n\\n>> ... the KL is measured between the probability of the messages as produced by the sender and some unspecified p_w(m) distribution)\\n\\nAs we mentioned in the extended abstract, we used an estimated language model p_\\u03c9(m). We implemented p_\\u03c9(m) as an LSTM language model. We used image captions of randomly selected (50%) images from the training set to estimate parameters of the language model. It is worth mentioning that these images were not used for training the Sender and the Receiver. Unfortunately, given the 3-page constraint on the extended abstract, we could not describe all the details of the set-up.\"}",
"{\"title\": \"Re: Regarding grounding\", \"comment\": \"Thank you for your feedback again.\\n\\n>> Given the authors comments about the discriminativeness of MSCOCO, wouldn't it be possible to induce a language model from a different dataset (say Wikipedia) and see how this compares to the one induced from MSCOCO?\\n\\nIt is possible to use a different dataset for learning the language model, but we leave this experiment for future work.\\n\\n\\n>> ... was there really a need to test the hypothesis of naturalness in the proposed way?\\n\\nAt first, we tested the proposed model with KL regularization. It had worse performance in comparison to the unregularized model. To understand why this is the case, we trained the Imaginet model (is it an intrinsic property of language, or did the proposed method simply fail to discover the right protocol?). Without this benchmark, it is hard to determine the reason for the lower performance.\\n\\nAssuming that it is easier to acquire unannotated sentences, we believe that the proposed regularization still makes sense. We suppose that with this indirect form of supervision the model will require a smaller amount of data for direct supervision.\"}",
"{\"title\": \"Re: No Title\", \"comment\": \"Thank you very much for your comments.\", \"we_updated_the_paper_and_made_the_following_changes\": \"1. We added a clarification regarding the grounding process.\\n2. We added more examples of qualitative analysis to the appendix, for images from a different domain.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"rating\": \"7: Good paper, accept\", \"review\": \"This work presents an extension of Lazaridou et al., 2017 (another ICLR submission) to communication between agents with sequence of symbols. Due to the complexity of the problem (generating a sequence of symbols rather than a single symbol), the authors switch from using 1-hot symbols (and thus RL) to using Gumbel-softmax distribution, thus allowing for training the agents in an end-to-end fashion by backprop. Similar to Lazaridou et al., they attempt grounding the communication protocol to natural language (at this point it's not entirely clear to me whether the authors trained the sender on caption generation or the receiver on caption retrieval). Interestingly, when this happens, the induced communication protocol reflects properties of natural language (as measured by the omission score) while at the same time decreasing the agents' communication performance (from 95% to 52%).\\n\\nThe fact that a very similar paper has already been accepted in the same venue takes away some of the novelty points. Moreover, the fact that the communicative success with the grounding task decreases so much hints that the proposed way of grounding is not an effective one (also from the text it's not crustal clear how is the grounding achieved as the KL is measured between the probability of the messages as produced by the sender and some unspecified p_w(m) distribution). Perhaps, the authors need to look into the strength of the regularizer as it seems to be taking over. The analysis of the appendix is really interesting, as well as the point about hierarchical coding.\\n\\nOverall, this is a very intriguing line of research and, as the authors point in the conclusions, many open questions remain. 
That being said, the current work does feel a bit rushed; many parts are not clear (specifically details regarding the grounding part) and the proposed grounding approach doesn't seem to be effective in terms of communication.\", \"pros\": [\"extending the rather limited setup of Lazaridou et al. to sequences of symbols, resembling more natural language\", \"to the best of my knowledge, the use of Gumbel-distribution for text generation is novel\"], \"cons\": [\"lack of clarity, especially in the section about grounding\", \"proposed grounding method is not effective with regards to communicative success\", \"rather limited novelty (given the emphasis of the ICLR workshop) as work is direct extension of previous work\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Regarding grounding\", \"comment\": \"I would like to thank the authors for their comments. I have now raised my score to 7, as I had completely misunderstood the part about grounding (as a note, I hope some more information will find its way into the camera-ready, even if only in the Appendix).\\n\\nThe way of performing grounding uses less direct supervision than I initially thought. While the overall approach hammers communicative success, I find KL regularization more interesting than just a fully supervised task. Given the authors' comments about the discriminativeness of MSCOCO, wouldn't it be possible to induce a language model from a different dataset (say Wikipedia) and see how this compares to the one induced from MSCOCO?\\n\\nAt the same time, I was puzzled by the authors' comment \\\"it worth mentioning that Imaginet establishes an upper bound for any communication protocol that looks like a language from the MSCOCO dataset.\\\" If that is the case, then was there really a need to test the hypothesis of naturalness in the proposed way? Wasn't the proposed grounding \\\"doomed to fail\\\" (in terms of not achieving high communicative success)?\"}",
"{\"rating\": \"7: Good paper, accept\", \"review\": \"Applying the straight-through Gumbel-softmax estimator for end-to-end differentiable sequence generation is likely an early instance of a trick we will see used more broadly. This has also not been applied to this task before, with previous approaches being either continuous or RL-based. That the process is the same during training and testing is a welcome benefit.\\n\\nThe analysis into the \\\"language\\\" of the agents has fascinating qualities, but it is also likely representative of encoding the pre-existing knowledge of the VGG-19. With that said, the hierarchical aspect that is captured would not have been carried through the pre-existing knowledge and is a promising insight.\\n\\nI'd have been interested in seeing one more example of qualitative analysis beyond 5747, simply out of curiosity and to see whether the \\\"animal\\\" aggregation would continue cleanly to other categories.\\n\\nAs noted by the other reviewer, additional clarification regarding the grounding towards natural language would be welcome. For future work, other aspects of grounding are possible, such as penalizing longer resulting sequences, given that \\\"redundancy in encodings\\\", as noted in the paper, doesn't seem a favourable trait for natural language.\\n\\nWhile many questions are left open for future exploration, I believe this work is interesting and introduces some new insights into communication between intelligent agents in such a setting.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1hLmF4Fx | Synthetic Gradient Methods with Virtual Forward-Backward Networks | [
"Takeru Miyato",
"Daisuke Okanohara",
"Shin-ichi Maeda",
"Masanori Koyama"
] | The concept of synthetic gradient introduced by Jaderberg et al. (2016) provides an avant-garde framework for asynchronous learning of neural networks.
Their model, however, has a weakness in its construction, because the structure of their synthetic gradient has little relation to the objective function of the target task.
In this paper we introduce virtual forward-backward networks (VFBN).
VFBN is a model that produces a synthetic gradient whose structure is analogous to the actual gradient of the objective function.
VFBN is the first of its kind that succeeds in decoupling deep networks like ResNet-110 (He et al., 2016) without compromising its performance. | [
"Deep learning",
"Optimization"
] | https://openreview.net/pdf?id=H1hLmF4Fx | https://openreview.net/forum?id=H1hLmF4Fx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"r1PEdKaie",
"HJ8sVb-ox",
"BJtyz-kse",
"BJhCWigoe"
],
"note_type": [
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028591089,
1489208477786,
1489076705420,
1489183188309
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"~Takeru_Miyato1"
],
[
"ICLR.cc/2017/workshop/paper88/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper88/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Response to reviewers\", \"comment\": \"We thank the reviewers for reading our manuscript and for comments and suggestions.\\nAs for the quality of the writing, we proofread the manuscript again and corrected grammatical mistakes, and uploaded the revised manuscript. Also, the details of the architecture of the proposed model and the baseline can be found in Appendix section.\\n\\nMeanwhile, we completely agree with both reviewers on the point that we need to more quantitatively evaluate the computation times of the algorithms. As of now, we are performing additional experiments to clock the runtime of the algorithms, and we plan to present the result of this comparative study at the workshop. \\n\\nStill yet, we would like to emphasize that the structure of VFBN is not so complicated despite the appearance of its formulation. \\nOur VFBN and Jaderberg\\u2019s model differs only in the construction of $\\\\delta$, which is a function of $h$ and $y$. Jaderberg\\u2019s model computes $\\\\delta$ with a function that does not impose any specific relation between hidden variable $h$ and the output $y$. Our VFBN, on the other hand, respects the relation that $h$ is an intermediary input and $y$ is the output. \\n\\nThe computation time largely depends on how we define the VFBN, and the choice of its structure is up to the user. Needless to say, however, in order to enjoy the merit of decoupling, we need to make VFBN very simple relative to the target network we try to optimize. For our experiment with 110 layer-Resnet, we used 5 layer-virtual forward network and its corresponding backward network. We would like to note in advance that the computational cost of training these mini networks are negligible when compared to the training of the full network. \\n\\nWe also agree with the reviewer that we need to experiment with decouplings at multiple locations as well. 
Unfortunately, however, we could not conduct this additional experiment in the given time frame, and we plan to conduct additional experiments in the future study.\"}",
"{\"title\": \"More details should be described\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper is well motivated, improving on Jaderberg's method by exploiting the structure of gradients. The experimental results are promising. However, the cons of the paper are:\\n1. The English of the paper is not good.\\n2. The paper should provide more details of the experiments, such as the configuration of the proposed model and the baseline. Moreover, the paper should also provide the time used to train the model, since Jaderberg's model is simpler.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"Summary:\\n\\nThis work extends the work of Jaderberg et al. (2016) on synthetic gradients,\\nby replacing their gradient regressors with mini neural networks (called\\nVirtual Forward-Backward Networks, or VFBN), which predict the final label\\n(instead of the gradients) and are trained by minimizing the l2-loss between\\ngradients backpropagated to the layer from these mini-networks and the\\nactual gradients from the full model.\\n\\nThese VFBNs can be seen as implementing \\\"auxiliary losses\\\", but are notably not\\ntrained by minimizing the label-prediction loss.\", \"cons\": \"1. The overall presentation is clear; however, there are a few typos.\\n2. The exact details of how the various components were updated are missing.\\n3. Only one layer was decoupled. More experiments on decoupling the whole \\n network would be insightful.\\n4. These \\\"mini\\\" (or virtual) networks incur additional costs; experiments \\n which control for computational costs would be insightful.\", \"pros\": \"1. Potentially interesting insight into training with synthetic gradients, but\\n requires further investigation.\\n2. Impressive performance on CIFAR-10.\\n\\nThe work aligns well with the Workshop Track's objective to \\\"stimulate discussion of new ideas and directions\\\".\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rySCp-1Yg | Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters | [
"Joan Serrà",
"Alexandros Karatzoglou"
] | The size of neural network models that deal with sparse inputs and outputs is often dominated by the dimensionality of those inputs and outputs. Large models with high-dimensional inputs and outputs are difficult to train due to the limited memory of graphical processing units, and difficult to deploy on mobile devices with limited hardware. To address these difficulties, we propose Bloom embeddings, a compression technique that can be applied to the input and output of neural network models dealing with sparse high-dimensional binary-coded instances. Bloom embeddings are computationally efficient, and do not seriously compromise the accuracy of the model up to 1/5 compression ratios. In some cases, they even improve over the original accuracy, with relative increases up to 12%. We evaluate Bloom embeddings on 7 data sets and compare it against 4 alternative methods, obtaining favorable results. | [
"outputs",
"inputs",
"bloom embeddings",
"bloom filters",
"neural network models",
"difficult",
"compact embedding",
"size",
"deal",
"sparse inputs"
] | https://openreview.net/pdf?id=rySCp-1Yg | https://openreview.net/forum?id=rySCp-1Yg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"S1zz_Fasl",
"H1w0zeejg",
"HJFov2a9g",
"B1ZjGlxox",
"rkQWTQJsl"
],
"note_type": [
"comment",
"comment",
"official_review",
"comment",
"official_review"
],
"note_created": [
1490028554194,
1489138383106,
1488992161185,
1489138328707,
1489087738577
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"~Joan_Serrà1"
],
[
"ICLR.cc/2017/workshop/paper18/AnonReviewer2"
],
[
"~Joan_Serrà1"
],
[
"ICLR.cc/2017/workshop/paper18/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Answer to \\\"Neat idea\\\"\", \"comment\": \"Thank you very much for your comments and feedback. As mentioned to the other reviewer, in the paper we now clearly state that there is an extended version of the paper, with a full experimental section, that was previously submitted to the main conference track.\\nRegarding the false positives idea, it somehow overlaps with the co-occurrence-based Bloom embedding that we propose in the Appendix of the extended version. However, it may not be exactly the same. We can refine that in future experiments. Thanks!\"}",
"{\"title\": \"more details?\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes the use of bloom filters to generate embeddings\\nsuitable for representing very sparse data for either input or output.\\nThe authors experiment on a set of tasks and show that they can reduce the\\nrepresentation size by a factor of 5 without significant performance\\nreduction, and still recover calibrated probabilities.\\n\\nI am not sure I understood how much time needs to be traded for space.\\n(maybe none? that would be good!) both for training and testing.\\n\\nThe experiment section is very compact, so it is hard to verify if the\\nbaseline models are reasonable and compare to the state-of-the-art on\\nthe selected 7 tasks (a lot of details are omitted, and I understand this\\nis because it has to fit in 3 pages but still).\\n\\nIt would have also been interesting to compare to other related approaches\\n(like hierarchical representations). I'm also not sure I understood how it\\nrelates to previous attempts at using Bloom filters (like Cisse et al).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Answer to \\\"More details?\\\"\", \"comment\": \"Thank you for your comments.\\n\\nThe Workshop submission contains a subset of the experimental results we presented to the main conference track (https://openreview.net/forum?id=rkKCdAdgx). We updated the PDF of the workshop manuscript to mention and link the extended version explicitly.\\n\\n1) Training time: Performing a Bloom embedding is a fast operation of O(ck), where c is the number of ones in the one-hot encoded instance (c<<d, d being the original instance dimensionality) and k is the number of hash functions used (typically k<10, and in our experiments k<=5). Running a hash function is O(1) and very fast, as it only involves a couple of calls to a uniform random number generator and a couple multiplications/divisions (furthermore, hash function results can be pre-calculated, so that it is then basically a single read operation). Testing time: Recovering the original ranking of elements is an O(dk) operation. Thus, only a marginal overhead of k reads is introduced. Notice furthermore that the embedding operation needs to be performed only once (offline) for both training and testing. We now include part of this discussion at the end of the results section.\\n\\n2) Details regarding the baseline models can be found in the extended version of the paper. The 3-page format of the workshop is certainly limiting.\\n\\n3) We are not sure about the mentioned hierarchical representations. If they refer to the hierarchical softmax, it should be noted that it improves the speed of operations but not the space to store the layer and, therefore, the network. Space is the main focus of our work, and becomes critical when dealing with million-sized one-hot encodings in both inputs and outputs. Regarding previous attempts, we are only aware of one work using Bloom filters, the cited Cisse et al work. 
In that work, as far as we understand, the output of a Bloom filter is used to define m independent binary classification tasks, the results of which are then cleverly combined to produce a more robust classification. In our work, we directly train the neural network model with the full output of the Bloom filter (computing gradients with respect to the full representation), and not separately in a binary classification scheme (note that learning d binary classifiers when d is in the range of thousands or millions is not feasible). Working with the compact full representation allows to tackle the network size problem we want to address in the first place. The accuracy gains in some data sets are a nice by-product of using the full output of the Bloom filter to compute the gradients (k times more ones in the instance).\"}",
"{\"title\": \"Neat idea\", \"rating\": \"7: Good paper, accept\", \"review\": \"I quite like the idea, and I think it has some merit, but as the other reviewer mentioned it is hard to judge by the experimental section.\\n\\nI think it would be good to attach a story to this idea. You are already mentioning recommender systems; in those systems being able to identify false positives is often less important than false negatives. One wouldn't want to mix up Star Wars and Finding Nemo, but a false positive on Star Wars vs Star Trek wouldn't be so bad. In this sense having non-random (but overall uniformly distributed) hashes could be a nice addition/future work.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1cYxlSFx | Robustness to Adversarial Examples through an Ensemble of Specialists | [
"Mahdieh Abbasi",
"Christian Gagne"
] | We are proposing to use an ensemble of diverse specialists, where speciality is defined according to the confusion matrix. Indeed, we observed that for adversarial instances originating from a given class, labeling tend to be done into a small subset of (incorrect) classes. Therefore, we argue that an ensemble of specialists should be better able to identify and reject fooling instances, with a high entropy (i.e., disagreement) over the decisions in the presence of adversaries. Experimental results obtained confirm that interpretation, opening a way to make the system more robust to adversarial examples through a rejection mechanism, rather than trying to classify them properly at any cost. | [
"ensemble",
"adversarial examples",
"specialists",
"robustness",
"diverse specialists",
"speciality",
"confusion matrix",
"adversarial instances",
"class",
"tend"
] | https://openreview.net/pdf?id=S1cYxlSFx | https://openreview.net/forum?id=S1cYxlSFx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SJtP95gjx",
"ryEw_K6oe",
"HJctWWRql",
"BJFMziJsx",
"S1CDZ4m9e",
"HJPhye09g",
"HkqZ_u2qg"
],
"note_type": [
"official_review",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"official_review"
],
"note_created": [
1489181281287,
1490028635975,
1489011073808,
1489117713160,
1488302438038,
1489006510988,
1488910338075
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper154/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper154/AnonReviewer2"
],
[
"~Christian_Gagné1"
],
[
"~mahdieh_abbasi1"
],
[
"~Christian_Gagné1"
],
[
"ICLR.cc/2017/workshop/paper154/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Structure from confusion\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The idea of using the confusion matrix to augment the structure of the network is not new. This is shown by Hinton et al. \\\"Distilling the Knowledge in a Neural Network\\\" as well as by Farley et al. \\\"Self-informed neural network structure learning.\\\"\\nThe extraction of structure from the confusion matrix itself is rather weak. The proposed method of splitting categories into confused and non-confused super classes isn't robust, as a given class c can end up in multiple super classes. Furthermore, the decision of creating super classes is made locally (per row) without regard to the global structure of the problem. Employing spectral graph theory to reason about the structure of the confusion matrix (its affinity counterpart, actually) is much more principled.\\n\\nThe application of the ensemble of specialists to detecting adversarial examples is novel. However, it is not very different from just adding an \\\"unknown\\\" class to a regular CNN model with multiple specialists.\\n\\nI recommend that the authors pursue this area of research and find provably optimal specialist selection criteria. Furthermore, the proposed approach leads to significant redundancy of computation, as the lower layers across specialists and the generalist are likely very similar if not identical.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Further Feedback\", \"comment\": \"Happy to provide further comments and thanks for your thoughtful response. Firstly, I agree that your approach is novel and apologize if my initial review was taken to imply otherwise. Moreover, as I said in the review, I think this is an interesting direction for work and encourage you to develop it further.\\n\\nI set the paper at 'marginally below acceptance threshold' primarily because I do not feel a convincing argument has been made for the proposed method within the short-abstract, i.e. ignoring the supplementary material and judging the submission purely on the merits of the first 3 of 9 pages. Some thoughts about the manuscript under this view:\\n + The introduction and algorithm explanation were sufficiently detailed and could be followed well.\\n + The discussion of model confidence and the accompanying Figure 2 were interesting and point to the approach working well; however, the confidence distribution is not put into the context of the baseline models so it is difficult to really evaluate the approach.\\n - Despite spending a large chunk of the paper defining the evaluation criteria, no numerical results were presented for E_A or E_D in the main paper. They are referenced but only with respect to relative performance so it is hard to evaluate how well your method is doing and what the trade-offs are between avoiding adversarial examples and getting 'regular' images correct. \\n\\nOf course, many of these problems are resolved in the additional details and figures in the supplementary material but the role of a supplement is not to fill gaps in the core argument of the main submission.\\n\\nI hope you find this feedback useful.\"}",
"{\"title\": \"Paper updated following detailed feedback\", \"comment\": \"Thank you very much for your feedback, this is greatly appreciated. Your explanations are clear and helpful, we are happy that you took some time to develop with more details your appreciation of the paper.\\n\\nFollowing this, we updated the paper, with the following changes:\\n- Thorough proofreading by a native English speaker, to detect and correct all grammar mistakes.\\n- Evaluation criteria along equations 1 to 3 moved in the Appendix.\\n- More results on confidence densities, including baseline models, added in page 3, along some extra analysis of the results.\\n\\nThis should make the first 3 pages of the submission more complete and self-contained. We hope you will find this update satisfying regarding the issues you have raised.\"}",
"{\"title\": \"The updated paper\", \"comment\": \"A new version of the paper has been submitted, with experiments using ensembles that do not include the GA-CNN (i.e., the CNN used to generate adversarial examples). We figured out that this is a better methodology than the one used in the previous version of the paper. All reported results have been updated accordingly.\"}",
"{\"title\": \"Comment on review\", \"comment\": \"Thank you for your review and your time. We gathered from the review text that there were several grammatical errors and that putting the results in the supplement is not ideal (sorry, it was difficult to do otherwise). However, we cannot really see from the text what the main issues are with the paper. We understand that papers submitted to the workshop track are lightly reviewed, but can you still provide us with more specific and informative feedback, in order to move in the right direction?\\n\\nMoreover, although we agree that the use of specialists built on class subsets is similar to 'Distilling the knowledge in a neural network', their use for dealing with adversarial examples is completely novel. The contribution of our paper lies in allowing the rejection of samples suspected to be adversarial, by looking at the ambiguity of the specialists' outputs.\"}",
"{\"title\": \"A Promising Start\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This work proposes a novel specialist+1 ensemble which seems to provide robust confidence estimations for deep networks in the presence of adversarial examples. This is an important and interesting task and the results look promising, though I found the presentation lacking.\\n\\nThe technique presented is sufficiently novel, but draws significant motivation from 'Distilling the knowledge in a neural network' as the authors note. The paper has many grammatical errors throughout, but the overall narrative is reasonable. \\n\\nOverall the approach and results seem promising; however, much of the critical experimental setup and results are relegated to the supplement (which itself is as long as the initial submission). I think this idea could spark interesting discussion in the community but I would lean towards rejection for now and encourage the authors to develop this work further.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJOuEn7Fx | Evaluating Dimensionality Reduction of 2D Histogram Data from Truck On-board Sensors | [
"Evaldas Vaiciukynas",
"Matej Ulicny",
"Sepideh Pashami",
"Slawomir Nowaczyk"
] | This work presents evaluation of several approaches for unsupervised mapping of raw sensor data from Volvo trucks into low-dimensional representation. The overall goal is to extract general features which are suitable for more than one task. Comparison of techniques based on t-distributed stochastic neighbor embedding (t-SNE) and convolutional autoencoders (CAE) is performed in a supervised fashion over 74 different 1-vs-Rest tasks using random forest. Multiple distance metrics for t-SNE and multiple architectures for CAE were considered. The results show that t-SNE is most effective for 2D and 3D, while CAE could be recommended for 10D representations. Fine-tuning the best convolutional architecture improved low-dimensional representation to the point where it slightly outperformed the original data representation. | [
"Unsupervised Learning",
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=HJOuEn7Fx | https://openreview.net/forum?id=HJOuEn7Fx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SkeHq-gje",
"rymQ_KTje",
"ByLptmx9l"
],
"note_type": [
"official_review",
"comment",
"official_review"
],
"note_created": [
1489144376259,
1490028571015,
1488103869628
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper49/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper49/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Empirical paper on dimensionality reduction\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper proposes an empirical evaluation of some unsupervised dimensionality reduction methods like t-SNE, CAE\\u2026 The paper does not introduce any novelty, nor can any particular conclusion be extracted from such a limited empirical evaluation over a limited data set from truck sensors.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Interesting application, but out-of-scope for ICLR Workshops\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper presents an application of convolutional autoencoders and t-SNE on engine data from Volvo trucks. Whilst this is a fine application of modern machine-learning techniques, I don't think it is suitable for the ICLR workshops: the goal of the workshop is to present new, not fully tested ideas. This submission does not present such ideas. I believe it would be more suitable for venues with a stronger focus on applications, such as KDD or IEEE Intelligent Transportation Systems.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJvPIReKx | Generalization to new compositions of known entities in image understanding | [
"Yuval Atzmon",
"Jonathan Berant",
"Amir Globerson",
"Vahid Kazemi",
"Gal Chechik"
] | Recurrent neural networks can be trained to describe images with natural language, but it has been observed that they generalize poorly to new scenes at test time.
Here we provide an experimental framework to quantify their generalization to unseen compositions. By describing images using short structured representations, we tease apart and evaluate separately two types of generalization: (1) generalization to new images of similar scenes, and (2) generalization to unseen compositions of known entities. We quantify these two types of generalization by a large-scale experiment on the MS-COCO dataset with a state-of-the-art recurrent network, and compare to a baseline structured prediction model on top of a deep network. We find that a state-of-the-art image captioning approach is largely "blind" to new combinations of known entities (~2.3% precision@1), and achieves statistically similar precision@1 to that of a considerably simpler structured-prediction model with much smaller capacity. We therefore advocate using compositional generalization metrics to evaluate vision and language models, since generalizing to new combinations of known entities is key for understanding complex real data. | [
"Computer vision",
"Natural language processing",
"Deep learning",
"Supervised Learning",
"Transfer Learning",
"Multi-modal learning",
"Structured prediction"
] | https://openreview.net/pdf?id=rJvPIReKx | https://openreview.net/forum?id=rJvPIReKx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"r1bMGYNjl",
"r1HGdKajg",
"Bk1rV6J5e"
],
"note_type": [
"official_review",
"comment",
"official_review"
],
"note_created": [
1489437193312,
1490028556609,
1488077879322
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper23/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper23/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"A good target for further study but limited execution\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper takes the MSCOCO captioning dataset, heuristically parses the sentences into Subject Relation Object (SRO) triplets similar to Farhadi et al 2010, and then evaluates the Show Attend and Tell and some baselines on two splits of the data: 1) a split where the Test set has the same SRO distribution but new images (standard setting) and 2) a split where the Test set contains SRO triplets where the individual pieces were seen in the training data, but the particular composition wasn\\u2019t. The paper shows that the generalization performance is very poor on (2), almost as bad as a relatively weak SSVM baseline.\\n\\nThe paper is fairly well written, modulo some odd quirks like undefined terms, though I have some larger recommendations. First, this is not a model paper, it\\u2019s more of a dataset/evaluation paper. Therefore, the related work should focus specifically on this aspect instead of broadly discussing the modeling approaches used in image captioning. Second, the description of the main SA&T model is slightly too short and it\\u2019s not clear how the SRO triplets are predicted. Is the LSTM emitting S,R,O in that order, pretending these are sentences of 3 words? The decoders are also using soft attention. I think I know what that means, but this could be clarified.\\n\\nIn terms of the results, at some level I\\u2019m not too surprised about the results because the LSTM is trained to model the joint distribution over SRO triplets, so it might be reluctant to predict e.g. an O given S,R that never occurred in the training data. Of course, one would like the model to learn the true underlying function from images to labels without relying on the more shallow dependencies of the labels, and it appears that the model struggles here, for this many datapoints. 
This has been qualitatively noted in the literature before, but I\\u2019m not aware of anyone who studied this more quantitatively and the related work doesn\\u2019t cover this well. For example, VQA also struggles with this problem, which also partly motivated VQA 2.0. I\\u2019m also aware of CLEVR, which proposed to study these problems in a synthetic setting, etc. Another interesting baseline here might be the same SA&T model, but feeding in all zero images, which would be a more controlled comparison, to see just how much the images are used, and to what extent it\\u2019s just the joint distribution modeling.\\n\\nIn conclusions, this is one approach to measuring a problem that has been noted by the community. The SA&T model and a few very simple baselines are evaluated, but the conclusion isn\\u2019t particularly striking and the analysis isn\\u2019t particularly deep and therefore there is not too much to take away.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Interesting direction, but limited novelty and conclusion\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper studies how much compositionality image captioning approaches have. The experiments compare two scenarios: on the standard COCO test split (which has similar compositions, but novel images), and on a new split which does not contain known components in a novel composition. Specifically, the paper parses coco captions into triplets of subject \\u2013 relation \\u2013 object and evaluates these rather than actual captions.\", \"strength\": [\"The paper has a clear exposition and motivation.\", \"The paper studies an interesting scenario of how compositional approaches are for image captioning.\"], \"main_weaknesses\": \"1.\tThe results suggest that the studied approaches/baselines cannot handle the scenario of novel triplets, i.e. have low compositionality. According to the authors' exposition this was already known beforehand.\n2.\tThe specific experimental setup does not allow one to study how this affects the actual generation of sentences, e.g. there might be some approaches which are very good at novel triplet prediction, but do not solve the problem of sentence generation.\n3.\tThe paper does not propose any novel approach and only studies *one* captioning approach (Show attend and tell), although the text sometimes suggests that there are multiple.\n4.\tFrom a learning perspective the studied model, show-attend-and-tell, does exactly what it is trained for: It predicts very low probability for tuples which have very low joint probability according to the training data. The LSTM learns exactly this aspect [similar to the pair-wise terms in the SSVM model]. 
If one does not want to exploit the joint probabilities, one should just look at the unaries.\\n5.\\tThe paper fails to provide any insight into how this known problem could be approached.\\n6.\\tPlease also provide precision@k for each of the components, i.e. separately for S, R, and O, to understand better where the problem of the joint task originates from (in both settings: coco split and compositional split).\\n\\n\\n\\nFurther Weaknesses\\n7.\\tIt would be interesting and important to know how well a model does which does not have pair-wise probabilities, e.g. only f_S, f_O, and f_R, but not f_SR and f_RO.\\n8.\\tPlease provide a better definition of how precision@k is computed: does it mean, for @1, that it is one if the highest-ranked triplet matches *any* of the ground truth triplets?\\n9.\\tSpace can probably be saved by removing Figure 2, and only reporting @1, @5, @10 in a table. \\n10.\\tWould it not be better for the precision@k evaluation to be cumulative?\\n11.\\tWhat does the \\u201cConv\\u201d stand for in SSVM/Conv?\\n12.\\tLast sentence Sec. 3.3: Does this really apply only to SSVM or to all models?\\n13.\\tWhy does the MF have 0% on the coco split? What percentage does it have on the training coco split? If it really is always very low, maybe this can be removed to save space, and only mentioned at one point in the text.\\n14.\\tSection 1: what is \\u201copen-IE\\u201d?\\n15. Please cite the actual publications, not the arXiv versions, whenever available.\", \"conclusion\": \"While the approached problem is interesting and relevant, the paper does not propose any novel approach, but rather examines only a single captioning approach with a negative result and no conclusion about where to go from here.\\nCombined with the many unclear points mentioned above, I lean towards rejecting this paper.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
B1dexpDug | Early Methods for Detecting Adversarial Images | [
"Dan Hendrycks",
"Kevin Gimpel"
] | Many machine learning classifiers are vulnerable to adversarial perturbations. An adversarial perturbation modifies an input to change a classifier's prediction without causing the input to seem substantially different to human perception. We deploy three methods to detect adversarial images. Adversaries trying to bypass our detectors must make the adversarial image less pathological or they will fail trying. Our best detection method reveals that adversarial images place abnormal emphasis on the lower-ranked principal components from PCA. Other detectors and a colorful saliency map are in an appendix. | [
"Computer vision",
"Deep learning"
] | https://openreview.net/pdf?id=B1dexpDug | https://openreview.net/forum?id=B1dexpDug | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"HygmALVsg",
"HJkfuFajl",
"Skw4FzWix",
"r1b2oMDjl"
],
"note_type": [
"official_review",
"comment",
"official_review",
"comment"
],
"note_created": [
1489427992325,
1490028550981,
1489213743268,
1489607592948
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper7/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper7/AnonReviewer1"
],
[
"~Dan_Hendrycks1"
]
],
"structured_content_str": [
"{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors provide a method to detect if an image has been modified using one of the standard algorithms for generating adversarial images.\\n\\nLooking at the representation of an image in a whitened representation can provide information on whether the image has been modified. This is expected for any sort of linear transformation of the images.\\n\\nAn important missing piece of information is how the matrices U/V are generated. It would be a stronger result if the matrices were generated using a subset of the data, and then evaluated on the remaining set.\\n\\nIt is not clear that the adversarial generation method couldn't be modified to satisfy the spectrum constraints. However, the logarithmic barrier experiment on MNIST provides some evidence that this method is hard to circumvent.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents a method to distinguish images that are perturbed to adversarially affect the classification output of a network. The main method relies on the observation that PCA coefficients of adversarial examples are larger and have higher variance for high-frequency components. Images are classified as clean vs adversarial by fitting two gaussians to use in a likelihood comparison, one for clean examples and another for adversarial.\\n\\nAn immediate question is whether the approach can be defeated by constraining the image perturbation method not to cause the image to fall outside the bounds of the detector. The authors address this by adding log barriers to the loss, and find that 92% of cifar images could not be perturbed to satisfy the new constraints (meaning, 8% can). Unfortunately, this experiment was not performed on the tiny-imagenet benchmark.\\n\\nAnother question I have is whether this method might be applied to larger images, perhaps in a convolutional fashion?\\n\\nThere are some rough bits to the paper -- for example, the introduction mentions three methods, while only one is presented in the main text (another two are described in the appendix, though they are evaluated in less detail, and do not include tiny-imagenet here).\\n\\nAppendix C (Saliency Map) looks irrelevant to this paper, as well: I don't see how it relates to the goal of detecting adversarial manipulations.\\n\\nOverall, this paper shows a basic method that appears to work well in some limited settings. However, the one method presented feels a bit light, even for a workshop paper, particularly since it is evaluated only on relatively small images.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your analysis of our paper.\\n\\n> The authors address this by adding log barriers to the loss, and find that 92% of cifar images could not be perturbed to satisfy the new constraints (meaning, 8% can).\\n\\nWe should also note that we had 100% of images perturbed by the fast gradient sign method _fail_ to satisfy the constraints, which is one of the few adversarial image generation techniques that works in the physical world.\\n\\n> particularly since it is evaluated only on relatively small images\\nWe consider 64x64x3 Tiny-ImageNet images in two experiments in addition to CIFAR-10 and MNIST, and this differs from much adversarial images research which at most uses CIFAR-10.\\n\\nExamples: https://arxiv.org/pdf/1412.5068.pdf (seminal)\\nPerhaps this concern is related to your question about convolution, but I do not understand that question.\"}"
]
} |
|
SyhSiq7te | Class-based Prediction Errors to Categorize Text with Out-of-vocabulary Words | [
"Joan Serrà",
"Ilias Leontiadis",
"Dimitris Spathis",
"Gianluca Stringhini",
"Jeremy Blackburn"
] | Common approaches to text categorization essentially rely either on n-gram counts or on word embeddings. This presents important difficulties in highly dynamic or quickly-interacting environments, where the appearance of new words and/or varied misspellings is the norm. To better deal with these issues, we propose to use the error signal of class-based language models as input to text classification algorithms. In particular, we train a next-character prediction model for any given class, and then exploit the error of such class-based models to inform a neural network classifier. This way, we shift from the 'ability to describe' seen documents to the 'ability to predict' unseen content. Preliminary studies using out-of-vocabulary splits from abusive tweet data show promising results, outperforming competitive text categorization strategies by 4-11%. | [
"Natural language processing",
"Applications"
] | https://openreview.net/pdf?id=SyhSiq7te | https://openreview.net/forum?id=SyhSiq7te | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"rJJ9HJ_jx",
"BJOLIjF9e",
"Bye1uljcx",
"HJ1B2tSsg",
"HkWmuK6og"
],
"note_type": [
"comment",
"official_review",
"comment",
"official_review",
"comment"
],
"note_created": [
1489659270688,
1488725584111,
1488811992369,
1489505335083,
1490028569405
],
"note_signatures": [
[
"~Joan_Serrà1"
],
[
"ICLR.cc/2017/workshop/paper47/AnonReviewer2"
],
[
"~Joan_Serrà1"
],
[
"ICLR.cc/2017/workshop/paper47/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Answer to \\\"review\\\"\", \"comment\": \"Thanks for your answer and the \\\"pros\\\" mentioned. We now contrast the \\\"cons\\\".\\n\\n1) We did not run that test, but it is not clear why we need a single LM in the first place. Our intuition is that we would see much less discrimination capability with a single LM trained on the whole corpus.\\n\\n2) The number of OOV words in the two classes in the Hard data set is similar by construction of the data set. We would be very grateful if the reviewer could point us to the aforementioned baselines that take into account OOV words.\\n\\n3) We need at least one layer of another neural network to form the binary classification (after the language model we have as many errors as characters). A second layer is added to allow the model to perform nonlinear classification based on the error sequences. Regarding normalization, our initial experiments involved no normalization and the performance was much poorer, to the level of the considered alternatives or slightly below.\\n\\n4) We would be very grateful if the reviewer could point us to such a well-established benchmark task explicitly involving OOV words.\"}",
"{\"title\": \"Borderline\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes to do short-text classification by training a character-level RNN language model on each class, and to then use the error rates of each language model in predicting the next character in a test string as a source of features for use when classifying that string.\", \"pros\": \"\\u2013 Empirical results on abusive tweet detection are strong.\", \"cons\": \"\\u2013 It's not clear why this technique works, and with an evaluation on only one somewhat marginal NLP task, it's hard to know if when (if ever) this technique is worth using in practice.\\n\\u2013 This work seems a bit too narrow in scope to count as exciting late-breaking work. I think this idea would be best presented as a short paper at an NLP workshop, or fleshed out on a broader range of tasks, analyzed a bit, and presented as a regular long paper.\", \"questions\": \"\\u2013 High-level: Could you comment on how this fits with the stated goals of the workshop?\\n\\u2013 Detail: If I understand correctly, the input words are embedded, and the embeddings are then fed through a PReLU before being used in the class-conditional GRU RNNs. This seems like an odd use for a nonlinearity\\u2014did adding the PReLU help?\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Answer to \\\"Borderline\\\"\", \"comment\": \"Thanks for your comments and your time in reviewing the paper. We now answer to the two specific questions and also relate to the cons above.\\n\\n1) We believe that the idea of training class-based prediction models and using their predictions as part of other networks and tasks is very novel. We decided not to submit the paper to an NLP workshop because the proposed strategy can be relevant across disciplines and not restricted to NLP. Also the out-of-vocabulary problem (which, to the best of our knowledge, has no standard benchmark data sets yet) may present analogies in other domains. We here use data from an important real problem (detecting abusive tweets) and show that the approach is worth using in practice (\\\"easy\\\" data set), outperforming well-known baselines.\\n\\n2) We considered the use of the aforementioned PReLU from the very beginning. Our intuition was that it could not harm the performance of the model, as a linear (non-altering) transform can in principle be learned if that is the real best option. Otherwise, we can only gain performance by learning the non-linearity. If needed, we can re-run the experiments without it and quantitatively evaluate the difference.\"}",
"{\"title\": \"review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes to use character-level language model errors as features for text categorization tasks. While the paper is well-written and the experiments show gains over baseline methods on an abusive language detection task, I don't think this paper fits within the goals of the workshop: it does not contain anything novel on the ML side, and the task is more of interest to the NLP community. I hope the authors can apply their method to different tasks in future versions of the paper, and also try to better motivate their architecture design decisions.\", \"pros\": [\"good result on abusive language detection task compared to baseline models\", \"the paper is well-written, it is clear what the task is and how the model architecture is designed\"], \"cons\": [\"not clear why you need a language model for each class, how does this compare to just training a single LM over the entire dataset and using its error as a feature?\", \"how does this method compare to just counting the number of unknown words per example and using that as a feature? i don't think the current experiments compare the proposed method against appropriate baselines, you need baselines that take into account OOV words\", \"the \\\"instance normalization\\\" is not well-motivated, why use it over just the raw error vector? and why further pass it through another neural network?\", \"the task is very specific and not a standard text categorization problem. it would be nice to demonstrate the method's effectiveness on a variety of well-established benchmark tasks.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
B1Vjl1Stl | Adversarial Discriminative Domain Adaptation (workshop extended abstract) | [
"Eric Tzeng",
"Judy Hoffman",
"Kate Saenko",
"Trevor Darrell"
] | Domain adversarial approaches have been at the core of many recent unsupervised domain adaptation algorithms. However, each new algorithm is presented independently with limited or no connections mentioned across the works. Instead, in this work we propose a unified view of adversarial adaptation methods. We show how to describe a variety of state-of-the-art adaptation methods within our framework and furthermore use our generalized view in order to better understand the similarities and differences between these recent approaches. In turn, this framework facilitates the development of new adaptation methods through modeling choices that combine the desirable properties of multiple existing methods. In this way, we propose a novel adversarial adaptation method that is effective yet considerably simpler than other competing methods. We demonstrate the promise of our approach by achieving state-of-the-art unsupervised adaptation results on the standard Office dataset.
| [
"workshop",
"abstract",
"framework",
"methods",
"adversarial approaches",
"core",
"new algorithm",
"limited"
] | https://openreview.net/pdf?id=B1Vjl1Stl | https://openreview.net/forum?id=B1Vjl1Stl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SJ42itesl",
"SkQ8_Y6jl",
"rJ97hnEsx"
],
"note_type": [
"official_review",
"comment",
"official_review"
],
"note_created": [
1489177515846,
1490028618766,
1489452066510
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper131/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper131/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"A small new spin on adversarial domain adaptation\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper has two parts: in the first, the authors try to unify a large subset of existing domain adaptation techniques into a figure that is claimed to shed more light into how they are similar and how they are different. The powerset of domain adaptation models is roughly described by the base model, weight sharing mechanism and the adversarial loss used (one note, I think CoGANs do actually tie some weights, no?).\\n\\nThe authors note that not all possible combinations have been used and thus try a discriminative model that uses the GAN loss and unshared weights. All in all, this is a reasonable thing to try (though there are clearly other missing combinations!). I'm surprised that the authors chose to fix the source model during training -- curious if that makes a practical difference. The Office results look decent (as much as we can glean from Office results to be honest).\", \"nb\": \"I only read the workshop submission, not the full tech report cited within.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Interesting paper as a novel use case for adversarial training\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposed an interesting application of the adversarial networks in the context of domain adaptation. It gives a unified view of the existing domain transfer work, and provides a neat solution to integrate adversarial training to the DA application. I am not sure about the novelty of the approach as it is a combination of well known technology, but the idea is original.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SyZiHtVFg | Bi-class classification of humpback whale sound units against complex background noise with Deep Convolution Neural Network | [
"Cazau D.",
"Lefort R.",
"Bonnel J.",
"Krywyk J.",
"Zarader JL",
"Adam O."
] | Automatically detecting sound units of humpback whales in complex time-varying background noises is a current challenge for scientists. In this paper, we explore the applicability of Convolution Neural Network (CNN) method for this task. In the evaluation stage, we present 6 bi-class classification experimentations of whale sound detection against different background noise types (e.g., rain, wind). In comparison to classical FFT-based representation like spectrograms, we showed that the use of image-based pretrained CNN features brought higher performance to classify whale sounds and background noise.
| [
"Natural language processing",
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=SyZiHtVFg | https://openreview.net/forum?id=SyZiHtVFg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"HJcdxkWjl",
"rkIQDDgjl",
"rJK4ut6sl"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489199218201,
1489168157995,
1490028592800
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper91/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper91/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Interesting results but not much contribution to the community\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper shows the image recognition CNNs can be used for whale sound detection.\", \"pros\": [\"It's interesting to know the pretrained CNNs work across domains.\"], \"cons\": [\"Experiments were not well designed. No comparisons were made.\", \"The frontend processing is not conventional audio processing steps, no justification was provided why the authors decided to use the presented way. Especially, FFT generated spectrogram has 2048 bins, which are converted to 256x256 pixel images, how? Is the time-frequency structure maintained? If downsampled from 2048 to 256, why not directly output 256 bin FFT?\", \"No discussions on why the pretrained CNNs work for this particular task. Is the performance gained by using CNNs or the pretraining on images?\", \"No sound detection literature was mentioned. The task of whale sound detection may be rare, but there is a huge literature of speech/voice detection, which share the similar processing framework.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Incomplete analysis for a workshop submission\", \"rating\": \"3: Clear rejection\", \"review\": \"This works presents a single pipeline for whale sound classification using an image classification CNN on top of spectrogram, followed by a SVM classifier.\\n\\nUnfortunately the paper does not provide any baseline which would verify usefulness of the image-CNN on a spectrogram (e.g. training the SVM directly on top of the spectrogram). The use of CNN trained for image classification for audio data is rather controversial (e.g. due to different statistics and required invariances of the data) and the work does not provide any proof that it is doing anything more than a random projections.\\nThe work uses several terms incorrectly (detection vs. classification, incorrectly assigning CNN models to \\\"imagenet framework\\\" etc.).\", \"pros\": [\"Interesting dataset which may be useful for future research\", \"Bravery to use image classification network for sound spectrogram classification\"], \"cons\": [\"Lack of any simple baseline which would motivate the use of computationally expensive CNN trained for image classification\", \"Several technical inaccuracies in the text\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
ByxWXyNFg | Factorization tricks for LSTM networks | [
"Oleksii Kuchaiev",
"Boris Ginsburg"
] | Large Long Short-Term Memory (LSTM) networks have tens of millions of parameters and they are very expensive to train. We present two simple ways of reducing the number of parameters in LSTM network: the first one is ”matrix factorization by design” of LSTM matrix into the product of two smaller matrices, and the second one is partitioning of LSTM matrix, its inputs and states into the independent groups. Both approaches allow us to train large LSTM networks significantly faster to the state-of the art perplexity. On the One Billion Word Benchmark we improve single model perplexity down to 24.29. | [
"Natural language processing",
"Deep learning"
] | https://openreview.net/pdf?id=ByxWXyNFg | https://openreview.net/forum?id=ByxWXyNFg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"ByPh0t1il",
"Skv71Dyol",
"ByYdOtEsg",
"rJLXuKaje",
"rk-Nk2gix"
],
"note_type": [
"comment",
"official_review",
"comment",
"comment",
"official_review"
],
"note_created": [
1489112750688,
1489100575545,
1489438833328,
1490028574148,
1489186601217
],
"note_signatures": [
[
"~Oleksii_Kuchaiev1"
],
[
"ICLR.cc/2017/workshop/paper57/AnonReviewer2"
],
[
"~Oleksii_Kuchaiev1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper57/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"response to ICLR 2017 workshop paper57 AnonReviewer2\", \"comment\": \"Thank you very much for the review and feedback.\\n\\n1)\\tLSTM cell with projection, LSTMP, is indeed quite popular, especially for models with large vocabulary, such as OBW. In fact, we use LSTMP as a baseline (BigLSTM by Josefowicz et al). Our cells also have projections. We updated the text to clearly reflect this: \\u201cExperiments\\u201d section explicitly mentions projection size now, and also we did several changes LSTM->LSTMP throughout the text.\\n2)\\tThank you for the pointer to the convolutional LSTM paper. For cases when problem has spatiotemporal correlations, that is indeed conceptually similar to our approach (exploiting possible structure in the input). Hence, we added it to the related work section.\\n3)\\tThe plot in Figure 2 demonstrates both number of steps and training losses for several model over exactly the same period of time (1 week). Our original plot legend did not make it clear, hence we updated it to avoid confusion.\"}",
"{\"title\": \"nice work\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"Comments:\", \"LSTM with projection is quite standard. Some variant (LSTMP) was introduced in Long short-term memory recurrent neural network architectures for large scale acoustic modeling, by Google, in 2014. You don't compare your work to that. I think it is very related and should be compared.\", \"I think Convolutional LSTM Network (https://arxiv.org/abs/1506.04214, from 2015) are also related because you can also see that as kind of grouping.\", \"The plot in Figure 2 of the training loss is very nice. I think, in addition, it would be nice to see the same plot but with the training computation time on the X-axis, so you can better see, e.g. after 1 week of training, where you are with each model. In TensorBoard, I think there is even an option to do that.\"], \"cons\": [\"Lacking related work and comparisons.\", \"Lacking experiments on other tasks.\"], \"pros\": [\"State-of-the-art result on OBW.\", \"Open Source code.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"response to ICLR 2017 workshop paper57 AnonReviewer1\", \"comment\": \"Thank you very much for the review and feedback.\", \"please_find_our_responses_below\": \"a) On \\u201cvanilla\\u201d LSTM comparison. \\nWe used LSTM with projection to fit the model in the GPU DRAM. On the One Billion Word Benchmark, the vocabulary size is around 800K. So, if regular LSTM without projection is used, then the embedding matrix will be 800K times LSTM cell size, and full softmax layer would require another 800K times LSTM cell size parameters. Therefore, for cell size of 8,192 it would result in approximately 6.5 billion non-LSTM parameters. LSTMP with projection size of 1024 will require ~8 times less non-LSTM parameters (0.8B). We found that LSTMP-based BigLSTM by Josefowicz et al. is already close to the single GPU DRAM limit.\\n\\nb) On related work. \\nThank you for providing related references. We found that \\u201cPredicting parameters in deep learning\\u201d by Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas is relevant to F-LSTM and provides further theoretical support for \\u201cfactorization\\u201d tricks, hence we included it in Related work section. \\nWe argue that \\u201cProgressive neural networks\\u201d by Rusu et al. is not directly relevant to our work since it explores the problem of multitask and transfer learning and their \\u201ccolumns\\u201d are introduced to handle additional tasks and not to improve/speedup learning within single task. We\\u2019d also argue that \\u201cOnline stabilization of block-diagonal recurrent neural networks\\u201d is not closely relevant to our work because they don\\u2019t use groups but assume block diagonal structure only on recurrent connection for the purposes of improving BPTT.\\n\\nc) We fixed several typos and added few notation clarifications to Figure 1. Also, per reviewers request, we added second plot to the Appendix (Figure 2:B) with wall clock time on the x-axis.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Interesting paper but needs some more work\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Factorization Tricks for LSTMs\", \"summary\": \"This paper empirically investigates different ways of learning parameters of LSTMs to optimize the computational and the memory efficiency of the models. They use LSTMP model from Sak et al 2014 as their baseline and implement two different tricks to speed up their model. The first trick is to use factorize the weight matrices of the neural network to introduce a lower-dimensional bottleneck and the second approach is to create block structure in They report convincing results on 1-billion word language modeling benchmark. The factorization methods investigated in this paper seems to act like a regularizer and the improve the generalization error as well.\", \"general_comment\": \"It seems like this paper is missing some important references. I would cite [1] for the factorization trick, and progressive networks [2] and block-diagonal recurrent neural networks[3](there have been several other similar papers can be found in the literature, this is the oldest one I could find) for the group structure that you are introducing. The results are interesting.\", \"overall_review\": \"Pros,\\nThe empirical investigation of two different ways to reparametrize LSTM models to speed up and lower the memory consumptions.\\nExperiments are convincing and the results are good\\n\\nCons,\\nI think you are missing an important baseline, a regular LSTM language model without projection(not the LSTMP by Sak et al).\\nThe writing needs more work, the notation used is a bit difficult to parse.\", \"detailed_and_some_minor_comments\": \"On Page 1, \\u201ctransformation 1\\u201d \\u2014> \\\"Equation 1\\u201d\\nFigure 1, needs more description and clarification it is not very clear what d1, d, d2 means. \\nPlease use a more formal notation, use subscript for the weights. \\nThere are variables used in the equations without being properly defined.\\nI would like to see a discussion about the computational complexity of those approaches as well.\\nI would like to see the Figure 2 with respect to wall-clock time in the x-axis.\\nAfter some more work, this paper can be made much easier to read. This version of the paper is not very easy to understand, unfortunately.\\nCan you plot Figure 2 in log-scale, the differences between the learning curves are not very clear to me.\\n\\n\\n[1] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, Predicting parameters in deep learning, NIPS 2013.\\n[2] Rusu, Andrei A., Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. \\\"Progressive neural networks.\\\" arXiv preprint arXiv:1606.04671 (2016).\\n[3] Sivakumar, Shyamala C., William Robertson, and William J. Phillips. \\\"Online stabilization of block-diagonal recurrent neural networks.\\\" IEEE Transactions on Neural Networks 10.1 (1999): 167-175.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HkwKQJgYg | Neural Style Representations of Fine Art | [
"Jeremiah Johnson"
] | The artistic style of a painting is a subtle aesthetic judgment used by art historians for grouping and classifying artwork. The neural style algorithm introduced by Gatys et. al. (2016) substantially succeeds in image style transfer, the task of merging the style of one image with the content of another. This work investigates the effectiveness of a style representation derived from the neural style algorithm for classifying paintings according to their artistic style. | [
"artistic style",
"neural style algorithm",
"neural style representations",
"fine art",
"painting",
"subtle aesthetic judgment",
"art historians",
"artwork",
"gatys et"
] | https://openreview.net/pdf?id=HkwKQJgYg | https://openreview.net/forum?id=HkwKQJgYg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SyL2y7eox",
"ByBhDRCql",
"Hk4GuFpsg"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489149870355,
1489065901551,
1490028555789
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper20/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper20/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Interesting work but missing analyses.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This submission explores how the Neural Style representation by Gatys et al. can be used to classify the style of paintings.\\nThe authors train a linear classifier which uses the neural style representation (gram matrices of a certain selection of layers) to predict the style. This approach is compared to a random forest classifier on the feature activations, which yields significantly higher performances. Additionally, the authors point out that ResNet can be finetuned to yield even higher performance.\\n\\nSince the Neural Style algorithm works very well at transferring style, I am surprised by the fact that it works that badly for classifying style and would take that to be the main result of the submission. However in my optinion the submission is missing any analysis of why the representation is that much worse than other methods. From examples of correct predictions and wrong predictions, one might get interesting insights in what exactly the style representation is incoding and what it is missing with respect to style classification.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting work but limited novelty and new insights\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This submission classifies painting style from pre-trained CNN feature spaces:\\na) using a linear classifier from the \\u2018style features\\u2019 used for style transfer in Gatys et al. 2016\\nb) using a random forrest classifier from the raw feature representations of different layers of the VGG-19 network\\nc) fine-tuning res-net\\n\\nIt shows that res-net gives best top-1 accuracy followed by mid-level feature representations of the VGG.\\n\\nWhile style classification is an interesting problem, the novelty and technical contribution of this work is limited. Style classification with fine-tuned CNNs as well as from the \\u2018style features\\u2019 has been done before in a more extensive manner [1,2].\\nIn my opinion, this submission does not add any technical novelty or new insights in the nature of style classification.\\nTherefore I cannot recommend acceptance for presentation at the ICLR Workshops.\\n\\n\\n[1] Karayev, Sergey, et al. \\\"Recognizing image style.\\\" arXiv preprint arXiv:1311.3715 (2013).\\n[2] Chu, Wei-Ta, and Yi-Ling Wu. \\\"Deep Correlation Features for Image Style Classification.\\\" Proceedings of the 2016 ACM on Multimedia Conference. ACM, 2016.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
S1dJ1smFg | NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD | [
"Sahil Garg",
"Irina Rish",
"Guillermo Cecchi",
"Aurelie Lozano"
] | We address the problem of online model adaptation when learning representations from non-stationary data streams. For now, we focus on single hidden-layer sparse linear autoencoders (i.e. sparse dictionary learning), although in the future, the proposed approach can be extended naturally to general multi-layer autoencoders and supervised models. We propose a simple but effective online model-selection, based on alternating-minimization scheme, which involves “birth” (addition of new elements) and “death” (removal, via l1/l2 group sparsity) of hidden units representing dictionary elements, in response to changing inputs; we draw inspiration from the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with better adaptation to new environments. Empirical evaluation on both real-life and synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art non-adaptive online sparse coding of Mairal et al. (2009) in the presence of non-stationary data, especially when dictionaries are sparse.
| [
"Computer vision",
"Unsupervised Learning",
"Transfer Learning",
"Applications",
"Optimization"
] | https://openreview.net/pdf?id=S1dJ1smFg | https://openreview.net/forum?id=S1dJ1smFg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"S1OIl5Wol",
"SkX9sc7sx",
"HJMm_FTog",
"BJDzUK_sx",
"rkrLROuil"
],
"note_type": [
"official_review",
"official_review",
"comment",
"comment",
"comment"
],
"note_created": [
1489244240417,
1489378187256,
1490028570189,
1489700367468,
1489698380955
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper48/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper48/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Irina_Rish1"
],
[
"~Irina_Rish1"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper addresses the problem of online sparse dictionary learning with non-stationary streams. The data is presented in episodic way, and the challenge is to adapt the model without forgetting what was learned previously. The main contributions are to propose a method with for adding or removing atoms of the dictionary based on the current performance of the model at representing incoming data. These techniques are inspired by adult neurogenesis which is a process that occurs in the hippocampus (which is involved in handling episodic memory).\", \"pros\": [\"The problem is very relevant, and the paper is very well executed.\", \"The conceptual ideas and their connections with adult neurogenesis are very interesting\"], \"cons\": \"- The proposed ideas, while never used before together, are very related to published work (as stated by the authors)\\n- Experimental evaluation, while very extensive, are rather simplistic. A stronger application would improve the paper significantly.\\n\\nThe problem of adapting the capacity of the model with the complexity of the task is very relevant. The ideas of neural genesis and death seem very natural (and they have been explored before). I appreciate the connection with adult neurogenesis. \\n\\nWhile these ideas are very intuitive and seem very interesting, their implementation seems a bit heuristic (which would be OK with very strong applications). In the neural genesis, atoms are added proportional to the Pearson correlation between the reconstruction obtained by the system and the data itself. How easy is to set these parameters? This seems to be something that needs to be cross-validated for each dataset. \\n\\nThe experimental results show that the method performs better than the non-adaptive online method by Mairal et al. \\nIt is probably not the best to use dictionary learning methods with very high dimensional inputs, such as full images. An interesting application would be learning patch-based dictionary where the patches come from individual images (thus patches will be very correlated), having a natural episodic setting.\\n\\nThe authors mention that this method could be extended to use other approaches (such as neural network based auto-encoders), this is not entirely clear, as they lack inference stage (it's just a feed-forward process), hence, adding new random weights could lead to interference of the previous model. I'm not saying it's impossible to divise a method including these ideas (which would certainly be very interesting), but I don't see it as straight forward.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"nice idea!\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper presents a very nice idea for introducing neurogenesis into sparse coding, which should help in training on non-stationary datasets. However the presentation of the results is not very clear. The main results are shown in Figure 1 and the caption is short and cryptic. In panels b and c for example, why does the correlation coefficient decline for larger k? and what is meant by \\\"final\\\" dictionary size k? Also it appears extreme sparseness was imposed on the dictionary elements themselves, allowing only 5 nonzero elements out of 1024 dimensions. These seems strange and introduces an additional modification of standard sparse coding, making it difficult to appreciate the results.\\n\\nOverall impression is that this paper presents a very nice idea with great potential, but the presentation of results is a bit strange.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"reply to AnonReviewer2\", \"comment\": \"Thank you for valuable comments!\\nRegarding the parameter tuning, there is clearly some room for improvement, such as a more automated procedure for adapting the parameters to the changing datasets, which is the topic of our ongoing work. Interestingly, however, our results were quite robust to the parameter variations, as discussed in detail in section E.9 of the Appendix and shown in Figures 22-27. \\n\\nRegarding full images vs small patches, one of our goals was actually to test the approach on high-dimensional inputs, with further applicability to other types of data beyond just images. Also, rather than explicitly representing an image as a set of patches and then learning a dictionary of dense elements for accurate representation of such patches, a dictionary of full-image-size, but sparse dictionary elements can be used to implicitly represents an image as a linear combination of those elements, with possible overlap of non-zero pixels between elements; the non-zero pixels in a sparse element of a dictionary are learned automatically. However, we agree that exploring the patch-based setting should be added to our evaluation.\\n \\nFinally, when mentioning an extension of this approach to nonlinear autoencoders, such as neural nets, we planned to build upon alternating minimization approaches naturally extending the current dictionary learning method, similar to the work of Carreira-Perpinan (AISTATS 2014) and similar methods, but the details are indeed to be worked out.\"}",
"{\"title\": \"reply to AnonReviewer1\", \"comment\": \"Thank you! We updated the paper trying to improve the clarity as you suggested (though the 3-page constraint made the clarity vs brevity trade-off a bit more challenging); in Fig 1 caption, we now mention that the 'final' dictionary size corresponds to the size of the dictionary learned by our adaptive method, as opposed to the 'initial' dictionary size the method started with; in Fig 1a, we plotted 'learned', or 'final' size, vs the 'initial' size; in Fig 1b-e, x-axis represents the 'learned' ('final') dictionary size for NODL, and the corresponding \\u00a0fixed size used by ODL, while y-axis represents the reconstruction accuracy.\", \"re\": \"sparse dictionaries - indeed, the advantages of adaptive vs nonadaptive scheme are most pronounced in case of sparse dictionaries, while with dense dictionaries the performance is similar (see paragraph 3 in Evaluation section, and Fig. 9 and Fig 26 in the Appendix). Sparse dictionaries are interesting, however, since (1) they resulted in better classification accuracy (paragraph 3 in Evaluation section) and (2) they are more biologically plausible (correspond to sparse connectivity in network representation of sparse coding models). For a detailed discussions on the rationale behind sparse dictionary elements, see also section B.1 of the Appendix.\"}"
]
} |
|
Byx3z64Fl | Generating Conference Call-for-Papers using Stacked Long Short-Term Memory Neural Networks | [
"Bálint Antal",
"Attila Csikász-Nagy",
"Rafael E. Carazo Salas"
] | In this paper, we describe a novel approach to generate conference call-for-papers using Natural Language Processing and Long Short-Term Memory network. The approach has been successfully evaluated on a publicly available dataset. | [
"Natural language processing",
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=Byx3z64Fl | https://openreview.net/forum?id=Byx3z64Fl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SkBrOF6ol",
"H1WqEHesg",
"S1iv63Mjg"
],
"note_type": [
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028604630,
1489159304935,
1489321315487
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper107/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper107/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"straight-forward application, poor writing\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper proposes to train LSTMs to generate call-for-papers conditioned on the topic of the conference. The model is evaluated by comparing the generated calls to the original ones.\\n\\nAs such, this is a rather straight-forward application of LSTM language models. The paper does not justify why this particular application matters. I am not sure how such models could ever help to conference organizers. Besides, comparing the generated calls \\\"on average\\\" with the originals makes little sense: even n-gram models would succeed in such a comparison.\", \"other_issues_include\": [\"the evaluation method is not explained, the paper refers to Latent Semantic Indexing, but there is no explanation in the text what precisely was done\", \"almost the whole paper is written in the present perfect sense, which does not seem grammatically correct\", \"\\\\citet should be replace with \\\\citep almost everywhere\", \"To sum up, the paper is not well-motivated, there are issues in both execution and writing. I do not recommend acceptance.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"review\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper uses LSTM to generate call for papers by training on topics from an LDA model.\\nThe results are evaluated by measuring the similarity of the generated call for papers to their topic models using Latent Semantic Indexing.\\n\\nThis is an application paper since there it does not propose a new method or analyze existing methods. \\nThe main contribution is showing that an LSTM can be used to generate call for papers. \\nThe paper does not elaborate why this particular application is important and/or exciting.\\nThe results are also not convincing, and the evaluation metric is confusing (I do not understand how exactly similarities to topics models are being measured).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1naD1rFx | Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data | [
"Shikhar Sharma",
"Jing He",
"Kaheer Suleman",
"Hannes Schulz",
"Philip Bachman"
] | Natural language generation plays a critical role in spoken dialogue systems. We present a new approach to natural language generation for task-oriented dialogue using recurrent neural networks in an encoder-decoder framework. In contrast to previous work, our model uses both lexicalized and delexicalized components i.e. slot-value pairs for dialogue acts, with slots and corresponding values aligned together. This allows our model to learn from all available data including the slot-value pairing, rather than being restricted to delexicalized slots. We show that this helps our model generate more natural sentences with better grammar. We further improve our model's performance by transferring weights learnt from a pretrained sentence auto-encoder. Human evaluation of our best-performing model indicates that it generates sentences which users find more appealing. | [
"Natural language processing",
"Deep learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=B1naD1rFx | https://openreview.net/forum?id=B1naD1rFx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"H1AmTalsx",
"SJO8_F6jx",
"BJ7itvBjg"
],
"note_type": [
"official_review",
"comment",
"official_review"
],
"note_created": [
1489194278319,
1490028624193,
1489496474766
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper138/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper138/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper presents a method for incorporating both lexicalized and delexicalized data for dialogue generation. This can be useful, for example if there is some overlapping information between a slot type and its value (for example, if a location of a pizza restaurant in a database is 'near X street', instead of 'X street', you want the dialogue system to learn to say 'the pizza is at X street' rather than 'the pizza is at near X street'). The proposed model seems to do this, and outperforms existing methods (Wen et al., 2015) by a decent margin on two datasets based on human evaluation. While the change is not ground-breaking, I think it's at an appropriate level for an ICLR workshop paper, and should be accepted.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"an incremental improvement\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"In this paper, the authors extend Semantically Conditioned LSTMs [Wen et al, EMNLP`15] for dialogue generation from slot-value pairs. The main extensions are:\\n(a) They allow lexical values in addition to categorical values for slots. They model the lexical values by the average pooling of their word embeddings and concatenate them with the one-hot representation for slot-ids to form the encoder's input.\\n(b) They use pre-trained weights for the decoder LSTM based on an auto-encoder trained on sentences from the same domain.\\n \\nTheir experiments show that both lexical value modeling as well as transfer learning improve performance on all metrics on two different datasets although the improvements are not statistically significant most of the time. Example sentences produced by various models indicate that the proposed models are qualitatively better.\\n\\nThe paper lacks novelty -- it seems an incremental improvement to Semantically Conditioned LSTMs.\", \"minor_suggestion\": \"For reasons of completeness, please show how the dialogue act vector (d_0) influences the decoder-LSTM. Although Wen et al is cited, it still makes sense to display it since it is an important component of the model.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJVJpENFg | Pl@ntNet app in the era of deep learning | [
"Antoine Affouard",
"Hervé Goeau",
"Pierre Bonnet",
"Jean-Christophe Lombardo",
"Alexis Joly"
] | Pl@ntNet is a large-scale participatory platform and information system dedicated to the production of botanical data through image-based plant identification. In June 2015, Pl@ntNet mobile front-ends moved from classical hand-crafted visual features to deep-learning based image representations. This paper gives an overview of today's Pl@ntNet architecture and discusses how the introduction of convolutional neural networks did improve the whole workflow along the years. | [
"Computer vision",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=HJVJpENFg | https://openreview.net/forum?id=HJVJpENFg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SkU7b4tje",
"Sye0XOY6og",
"B1cErnA5e",
"Hkj35uxjl"
],
"note_type": [
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1489744158002,
1490028582337,
1489057074111,
1489173170655
],
"note_signatures": [
[
"~alexis_joly1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper73/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper73/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"What our talk will be about ?\", \"comment\": \"Thanks for your feedback and recommendations (we will add the reference to LeafSnap). Overall, we agree that this is a system paper with no fundamental contribution with regard to machine learning. However, we are convinced that the presentation of Pl@ntNet at the workshop will be of interest to the ICLR crowd. Besides the technical and evaluation aspects described in the (3 pages...) paper , there are several societal and educational aspects that could be of interest for ICLR attendees (e.g. the perception of such tools by teachers and young childrens, the huge collection of geo-localized cannabis plants we collect each saturday evening, the diversion of the application by some artists, etc.). Also, we might discuss the scientific challenges towards covering the whole world's flora (300K species), including strongly imbalanced data issues, ultimately low inter-class variability for some species in the same genus, use of taxonomic regularization, etc.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"A successful collaboration between two worlds\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper describes the \\\"Pl@ntNet\\\" app (available on Android and iOS) and the CBIR system behind it. This application is rightfully nicknamed as \\\"the shazam of plants\\\", as it recognizes the plant (e.g. a flower) that the user is filming with the phone's camera. The application also enables researchers in botany to gather plenty of useful data on plant diversity and such.\\nAfter roughly explaining how the whole system functions and giving a few implementation details, it more specifically highlights how the recognition performance was able to vastly improve after using a CNN to classify images rather than an old fashioned hand-crafted pipeline. Quantitative and qualitative evaluations are provided.\", \"pros\": [\"The results are undeniably satisfying, given the tremendous difficulty of this very fine-grained task (more than 10K classes, some of them very similar, large intra-class variablities). Ranking of the app by users on the App store is proof of it.\", \"Beyond the application working well, this is the perfect illustration of a successful application of CNNs to a difficult classification problem supported by a large-scale participatory platform (3M+ users). It is nice to see that this work allowed different research communities (here, CV and botanists) to collaborate and bond in a win-win situation. I believe that it is in the interest of everybody to advertise and encourage this type of collaborations.\", \"overall, the paper reads well and the goal is clear and interesting\"], \"cons\": [\"it sems that not many implementation choices have been investigated. \\nTrue, it is not the goal of the paper, but comparing different deep classification architectures (like ResNet) and/or image retrieval approaches would have been a plus.\", \"no real scientific novelty, this is a system paper.\", \"In my opinion this paper makes a good candidate as a workshop paper, not because of its poor scientific content but rather because it exemplifies how two a-priori remote communities can benefit from each other. Thanks to the app, the CV community can gather hundreds of thousands of images of fine-grained classes with detailed annotations for free, very challenging data, and the botanic community also get to form people to recognize plants and retrieve a lot of data on its own. This is worth appearing in a workshop in my opinion.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Shazam your plants\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": [\"A brief summary of the paper's contributions, in the context of prior work.\", \"This paper describes a mobile and web app that identifies plant species using deep learning. The paper describes the system components, and uses an inception CNN architecture for classification. The system also performs efficient similarity search by hashing the last hidden layer feature responses. The paper shows qualitative results and quantitatively compares against a prior system using hand-designed features.\", \"An assessment of novelty, clarity, significance, and quality.\", \"This is a systems paper and has little novelty as far as machine learning goes. The paper is written clearly enough. I\\u2019d recommend adding a reference to the LeafSnap work.\", \"A list of pros and cons (reasons to accept/reject).\"], \"pro\": \"The web app has quite a few users, and seems to work well enough in practice.\", \"cons\": \"There is no technical novelty with respect to machine learning (similar systems have been deployed for large-scale image tagging and retrieval), so I\\u2019m uncertain whether this paper would be of sufficient interest to the ICLR crowd. A computer vision workshop or WACV may be other possible venues for this paper.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkndY2VYx | Encoding and Decoding Representations with Sum- and Max-Product Networks | [
"Antonio Vergari",
"Robert Peharz",
"Nicola Di Mauro",
"Floriana Esposito"
] | Sum-Product Networks (SPNs) are deep density estimators allowing exact and tractable inference. While up to now SPNs have been employed as black-box inference machines, we exploit them as feature extractors for unsupervised Representation Learning. Representations learned by SPNs are rich probabilistic and hierarchical part-based features. SPNs converted into Max-Product Networks (MPNs) provide a way to decode these representations back to the original input space. In extensive experiments, SPN and MPN encoding and decoding schemes prove highly competitive for Multi-Label Classification tasks. | [
"Unsupervised Learning",
"Structured prediction"
] | https://openreview.net/pdf?id=rkndY2VYx | https://openreview.net/forum?id=rkndY2VYx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"HykWQwJse",
"ry8W2Bejx",
"HkyQI_gjl",
"BJUswOgig",
"BklBuY6ix"
],
"note_type": [
"official_review",
"official_review",
"comment",
"comment",
"comment"
],
"note_created": [
1489101559506,
1489161213671,
1489171991130,
1489172381920,
1490028600516
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper102/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper102/AnonReviewer2"
],
[
"~antonio_vergari1"
],
[
"~antonio_vergari1"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"A promising application of SPNs for learning representations and mapping them back into the input space\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper describes how to use Sum- and Maximum product networks for unsupervised feature learning and decoding and evaluate it within three different learning scenarios by either directly classifying a binary label set based on the original feature space, or by classifying the labels from generated feature encodings or decoding labels from their embedding. The authors further propose a full pipeline that produces feature embeddings and decodes them into the label space.\\n\\nThe paper alone is quite hard to comprehend and as a reader without prior knowledge in SPN/MPNs I had to consult a lot of literature, which however was provided sufficiently in the paper. The authors compare their method to state of the art approaches like RBMs and auto-encoders and show promising results in their framework. Unfortunately the tasks were not described properly and again required to consult further literature. I would recommend putting the evaluations partly into the appendix and to elaborate a little bit on that.\", \"minor_remarks\": [\"Typo in first sentence of section 3: usupervisedly\", \"The change in font size and face on emphasized words makes the general look of the text inconsistent and is quite uncommon\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"good paper, should be accepted\", \"rating\": \"7: Good paper, accept\", \"review\": [\"Summary: the paper proposes to use sum-product networks (SPN) for feature extraction. The embedding of an input is the activations of all the nodes in the network or of only the inner nodes. To learn features in an unsupervised manner like in Autoencoder, a decoding method is introduced using the corresponding max-product network. The experimental results on MNIST show that the proposed method outperforms RBM, CAE, DAE in terms of the quality of embeddings for classification.\", \"Discussion:\", \"The idea of the paper is neat, interesting, and innovative. The paper is well written yet quite brief (but understandable given the page limit). The authors should explain the third paragraph of Section 2 more clearly. In the 4th paragraph also of Section 2, it is unclear what \\\\phi_n(u) is.\", \"The experiment results are quite strong and convincing. However,\", \"1. can the proposed models also outperform the alternatives after fine-grain training (i.e. jointly train the feature extractors with the classifier)?\", \"2. I understood from the paper that for the other networks (RBM, MADE, CAE, DAE) only the activations of the top layer is used. However, because the SPN's embeddings are from all the nodes (or all the inner nodes), have the authors tried using the activations of all the hidden nodes for the other networks?\", \"pros:\", \"the idea is neat, interesting, and innovative\", \"experimental results are good and convincing\", \"cons:\", \"the paper is quite brief and unclear at some points (but this shouldn't be considered as a significantly negative point)\", \"the experiments can be done better\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"updated revision\", \"comment\": \"Dear reviewer,\\n\\nthanks for your time reviewing our work and \\\"imputing\\\" the missing parts, we really appreciate it.\\n\\nWe acknowledge that the current presentation omits several details about the experimental setting. Therefore, we updated the paper by including an appendix comprising the full decoding procedure, some paragraphs about\\ntraining the models employed and finally more experimental results. We also refactored the notation following your suggestions.\\n\\nEven if the time is running out, let us know if other modifications are required.\"}",
"{\"title\": \"clarifications\", \"comment\": \"Dear Reviewer,\\n\\nthank you for your criticisms and appreciation. Definitely, this workshop version omits more details than the conference one. We will try to answer your questions in the following.\\n\\n> The authors should explain the third paragraph of Section 2 more clearly\\n\\nFollowing the other reviewer comments and given the page length format, we updated the version to include the decode procedure listing in the Appendix.\\n\\n> In the 4th paragraph also of Section 2, it is unclear what \\\\phi_n(u) is\\n\\nIt stands for the probability distribution encoded by a leaf, as introduced at the beginning of the Section.\\n\\n> can the proposed models also outperform the alternatives after fine-grain training (i.e. jointly train the feature extractors with the classifier)?\\n\\nIf we are allowed to perform \\\"fine-tuning\\\", it would be fairer to perform that to SPNs/MPNs as well. We are investigating this kind of hybrid training, which, as far as I know, is unusual (hence interesting) for density estimators.\\n\\n> I understood from the paper that for the other networks (RBM, MADE, CAE, DAE) only the activations of the top layer is used. However, because the SPN's embeddings are from all the nodes (or all the inner nodes), have the authors tried using the activations of all the hidden nodes for the other networks?\\n\\nWe used all node activations for RBMs and MADEs, producing longer embeddings that those from SPNs/MPNs, on different datasets (see the structural statistics reported in the updated version).\\nConcerning non-probabilistic autoencoders, we employed only the embeddings from the \\\"compressed\\\" mid-representation layer, finding it useful as stated in the literature.\\n\\nPlease let us know how we could improve this work further.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
SyGmHLfte | Infinite Dimensional Word Embeddings | [
"Eric Nalisnick",
"Sachin Ravi"
] | We describe a method for learning word embeddings with data-dependent dimensionality. Our Infinite Skip-Gram (iSG) and Infinite Continuous Bag-of-Words (iCBOW) are nonparametric analogs of Mikolov et al.'s (2013) well-known 'word2vec' models. Vectors are made infinite dimensional by employing techniques used by Cote & Larochelle (2016) to define a RBM with an infinite number of hidden units. We show qualitatively and quantitatively that the iSG and iCBOW are competitive with their fixed-dimension counterparts while having the ability to infer the appropriate capacity of each word representation. | [
"Natural language processing",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=SyGmHLfte | https://openreview.net/forum?id=SyGmHLfte | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"ry77zkHox",
"ByRrzTMil",
"B19z_Kaig"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489461787146,
1489322566312,
1490028562245
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper33/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper33/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Interesting idea, but not exactly infinite-dimensional\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes a method for learning word embeddings with data-dependent dimensionality. Different words have different numbers of non-zero dimensions, which allows for more or less information to be stored in a given word embedding. The authors argue that this is important to reduce various kinds of over-fitting, which seems to make sense, at least intuitively.\\n\\nI like the general motivation and idea, but I don't entirely understand how the setup achieves the desired goal. If I understand correctly, the dot product between two embeddings has non-zero contributions for the first l_min components of the two embeddings, where l_min is the smaller of the two vector dimensionalities. This means that *both* vectors must be high-dimensional in order for the increased capacity of a one of the vectors to manifest. This property seems at odds with the information-storage argument given in the paper, which seems to imply that the dimensionality of a *single* embedding is all that is important. \\n\\nI also think that the title and much of the wording in the paper is misleading insofar as they suggest that the embeddings may be infinite-dimensional. Restricting \\\"the model to grow only one dimension at a time\\\" necessarily enforces that the embeddings remain finite-dimensional (given that there are a finite-number of time steps), so it seems like the entire analysis should be able to be recast in a finite-dimensional setting without loss of generality. In doing so, I think this would eliminate the need to consider how to account for divergences in the infinite-dimensional setting, and largely reduce the novelty of the proposed algorithm.\\n\\nI don't find the empirical evidence particularly convincing of the usefulness of the proposed approach. \\nAll-in-all, while the idea is quite interesting, I think this paper lacks the theoretical justification and empirical basis to merit its acceptance as a workshop paper.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposes a method to learn infinite dimensional word embeddings, based on the infinite RMB method of Cote and Larochelle (2016).\\nThe paper applies the technique to learn infinite dimension Continuous Bag of Words and Skip Gram models, and show results on word similarity datasets.\\n\\nI think this is not a trivial extension of the original infinite RMB method.\\nWhile the quantitative results are not very convincing (much worse than 200D CBOW and SG), it is a reasonably good workshop paper that offers interesting insights on how a word embedding model uses its dimensions to represent words with multiple senses.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
Bkiqt3Ntg | Deep Kernel Machines via the Kernel Reparametrization Trick | [
"Jovana Mitrovic",
"Dino Sejdinovic",
"Yee Whye Teh"
] | While deep neural networks have achieved state-of-the-art performance on many tasks across varied domains, they still remain black boxes whose inner workings are hard to interpret and understand. In this paper, we develop a novel method for efficiently capturing the behaviour of deep neural networks using kernels. In particular, we construct a hierarchy of increasingly complex kernels that encode individual hidden layers of the network. Furthermore, we discuss how our framework motivates a novel supervised weight initialization method that discovers highly discriminative features already at initialization. | [
"Theory",
"Deep learning"
] | https://openreview.net/pdf?id=Bkiqt3Ntg | https://openreview.net/forum?id=Bkiqt3Ntg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"Bk4qoCwjx",
"BkbruKTig",
"SkYylTNk-",
"Bkz97slil",
"Sk7MCCPix",
"H12YaKeje"
],
"note_type": [
"comment",
"comment",
"comment",
"official_review",
"comment",
"official_review"
],
"note_created": [
1489656716059,
1490028601292,
1493647329459,
1489183626216,
1489657355052,
1489177988318
],
"note_signatures": [
[
"~Jovana_Mitrovic1"
],
[
"ICLR.cc/2017/pcs"
],
[
"(anonymous)"
],
[
"ICLR.cc/2017/workshop/paper103/AnonReviewer2"
],
[
"~Jovana_Mitrovic1"
],
[
"ICLR.cc/2017/workshop/paper103/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Rebuttal\", \"comment\": \"The method presented in our paper is related to (Hazan and Jaakkola 2015), but is quite different from the one (Daniely at el 2016) present. In particular, the approach proposed in (Hazan and Jaakkola 2015) can only accommodate up to two infinite-width layers, while our approach is the first one, to the best of our knowledge, that enables a construction on arbitrarily deep infinite-width neural networks. In particular, we achieve this with the kernel reparametrization trick. Furthermore, the work on (Daniely et al. 2016) deals with finite-width neural networks and uses kernels to analyze them, while we examine infinite-width neural networks and construct a novel architecture and derive a whole new formalism for reasoning about it.\\n\\nThe advantage of a supervised initialization can be easily seen from the classification accuracies right after initialization. In particular, the very high initial classification accuracy of our method clearly indicates that our initialization method is successful at disentangling factors of variation as it takes into account the structure of the data through the supervised information. In particular, this makes the networks constructed using our initialization method useful already from the initialization stage requiring less training, thus making it useful in settings where we have only a limited computational budget, for example. \\n\\nConcerning the experimental results, these are state-of-the-art results on this architecture when no data augmentation is used. As the experiments were about considering the advantages of our initialization method over standard ones, we isolated the contribution of the initialization to the final classification accuracy by not using any fine-tuning beyond that necessary to make the learning converge on this architecture.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Interesting but short ;)\", \"comment\": \"Interesting topic but a little \\\"short\\\" in the sense of the potential impact of such a full model (in this form) should at least in \\\"theory\\\" be incredible!\\n\\nShould be something that can potentially unify a lot of mathematical approaches and \\\"human/brain inspired behaviour\\\" in ML.\\n\\nDo it exist a \\\"longer\\\" more mathematically version of the paper? Also a picture/sketch of the set-up would further enhance the paper.\"}",
"{\"title\": \"review: interesting but need stronger experimental evidence\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper discusses the distribution of function mappings governed by deep infinitely wide neural networks and, in theory, how to sample the weights using the canonical feature mappings (or the so-called kernel reparameterisation trick) and map the data points to features.\\n\\nThe paper then proposes an initialization scheme for finite-width neural networks, inspired by the above analysis. I don't think I quite understand the procedure you use. Can you please clarify: \\n\\ni) what do you mean by \\\"guide by the idea of ... in a supervised fashion\\\", what is supervised here? Perhaps it's mean you use the inputs to initialise the weights, compared to other techniques that only use the size of the layers?\\n\\nii) the procedure you use to initialise weights, say for a single layer net.\\n\\nWhile the contribution is theoretically interesting, I'm afraid it doesn't reach the level required (i.e. late-breaking work) for the workshop.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Our proposed method is the first approach, to the best of our knowledge, that enables the construction of arbitrarily deep infinite-width neural networks. We use the kernel reparametrization trick to do that. In particular, we sample weights from a Gaussian processes, where the covariance function is constructed using kernels inferred from the infinite-width neural network. In summary, we introduce a new neural network architecture, describe a formalism to reason about it and apply the insights gained from this construction on standard finite-width neural networks in the form of a novel initialization method.\\n\\nIn order to perform initialization of a finite-width neural network with our method, we need to determine for each layer l the points {\\\\xi_{im}^{(l)}}_{m} that are used for constructing the weights as\\n\\nu_{l, i} = \\\\sum_{m = 1}^{M_{l}} \\\\alpha_{im}\\\\hat{k}_{l}(\\\\cdot, \\\\xi_{im}^{(l)}) with \\\\alpha_{i} \\\\sim N(0, \\\\frac{1}{M_{l}} I). \\n\\nWe choose these points from the training set according to supervised signal. In particular, for each neuron i, we assigned it a class and choose the {\\\\xi_{im}^{(l)}}_{m} from that particular class.\"}",
"{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper is an attempt to gain a better understanding of deep learning through kernels. I think the paper does not add much to the previous works such as (Hazan and Jaakkola 2015) and (Daniely et al. 2016). The suggested trick to construct weights is interesting but I don't understand why such initialization should be useful. In particular, what is the value of supervised initialization? The experimental results show that the supervised initialization cannot improve the final results. Moreover, the reported test errors are far worse than what is being reported for single layer network of similar size.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ryvlRyBKl | Adversarial Attacks on Neural Network Policies | [
"Sandy Huang",
"Nicolas Papernot",
"Ian Goodfellow",
"Yan Duan",
"Pieter Abbeel"
] | Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at http://rll.berkeley.edu/adversarial . | [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=ryvlRyBKl | https://openreview.net/forum?id=ryvlRyBKl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"S1AXiYgie",
"rJjn8vcie",
"Sy-24-Bol",
"BJ-2fO6jg",
"ByDPbPdsg",
"Sk-uonusl",
"SkNPJKYie",
"Sk4c5ELsx",
"r11vdtTog",
"ByJw_Y13e",
"HkzQQV5sl",
"rkt-8Wl2g"
],
"note_type": [
"official_review",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"comment",
"comment",
"official_comment",
"official_comment"
],
"note_created": [
1489177381784,
1489823410658,
1489470633546,
1490023080654,
1489690975303,
1489714024855,
1489764188315,
1489549964175,
1490028630745,
1490159702611,
1489810202125,
1490191872725
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper147/AnonReviewer1"
],
[
"~Sandy_Huang1"
],
[
"~Sandy_Huang1"
],
[
"ICLR.cc/2017/workshop/paper147/AnonReviewer2"
],
[
"~Sandy_Huang1"
],
[
"ICLR.cc/2017/workshop/paper147/AnonReviewer2"
],
[
"~Sandy_Huang1"
],
[
"ICLR.cc/2017/workshop/paper147/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Sandy_Huang1"
],
[
"ICLR.cc/2017/workshop/paper147/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper147/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"# Summary\\nThis paper discusses adversarial examples in the deep RL context. The paper used FGSM method to generate adversarial examples and showed that it is easy to fool an RL policy network by injecting a small noise to the input image. The paper also shows that adversarial attack is possible in a black-box scenario where the adversary does not have complete access to the neural network. Although the paper applied an existing technique (FGSM) rather than proposing a new idea/method, this is the first work that discusses adversarial attacks in RL setting to the best of my knowledge.\\n\\n# Novelty: this paper does not present a new method.\\n# Clarity: the paper is well-written and easy to follow.\\n# Significance: discussion on adversarial examples in RL will be interesting to the research community.\\n# Quality: the method is demonstrated only in Atari games (visual observation with discrete action space). Demonstration on various RL domains (e.g., continuous control problems, robotics domain) could be interesting. More analysis of the behavior of the policy given adversarial examples would be interesting.\\n\\n# Pros\\n- First work that discusses adversarial examples in RL\\n- Empirical results on white-box and black-box attack scenarios are interesting.\\n\\n# Cons\\n- No novel algorithm or method\\n- Empirical study is limited to a single domain with discrete action space (not critical)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Author Clarifications\", \"comment\": \"Thank you for your response - we appreciate this opportunity to clarify. We use the second approach in our work: the adversary\\u2019s policy is trained separately with deep RL, without any interaction with the target policy. So, our results do show that adversarial perturbations designed to fool one policy will often also fool other policies trained for the same task, even if they were trained with a different deep RL algorithm.\\n\\nWe will explain this more clearly in the final version. We agree that it is valuable to describe how adversarial examples in the context of reinforcement learning differ from those in supervised learning, and will also add that to the final version.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your comments and suggestions! We agree it would be interesting to evaluate adversarial examples in the context of continuous control problems; we plan to investigate this in future work and would be happy to present any preliminary results at the workshop.\\n\\nRegarding novelty, machine learning researchers usually must carry out two roles: the role of a scientist discovering and documenting new phenomena, and the role of an engineer leveraging knowledge of scientific phenomena to build new systems. While our submission does not contribute a new algorithm and thus has low novelty from an engineering point of view, we argue that it documents a new phenomenon (adversarial examples for RL policies) and hence has novelty from a scientific point of view.\\n\\nIn particular, we believe that our observations showing that adversarial examples transfer across deep RL training algorithms will be beneficial to the community. This makes adversarial-example attacks against RL agents effective even in black-box scenarios where the adversary does not know which specific algorithm was used to train the target policy (which corresponds to a realistic threat model for real-world applications).\"}",
"{\"title\": \"Re: Author Clarifications\", \"comment\": \"Ok thanks, indeed it would be important to clarify. In particular, can you tell me in a few words how the deep RL algorithm used by the adversary is \\\"different\\\" from the one used by the target policy? If I read 4.2 correctly there is a situation where the only difference is the random initialization (which seems pretty minor to me), but you have a 2nd scenario (\\\"transferability across algorithms\\\") where it is not clear (to me). Thanks!\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your feedback. Although policies are comparable to image classifiers (and can in fact be trained with supervised learning, for instance in behavioral cloning), the key difference is that we consider policies trained with reinforcement learning. For policies trained with RL, the initialization of a policy\\u2019s parameters and the RL algorithm used to train the policy determine which states and actions are explored during training. Thus, two policies trained with different RL algorithms could end up learning different higher-level representations or even different strategies for accomplishing the same task. Based on this, we believe it is indeed surprising that adversarial examples remain effective when the adversary does not have access to the policy used by the agent -- which is likely the case in real-world situations.\\n\\nWe do observe that certain adversarial strategies are more transferable than others in these black-box scenarios. For instance, allowing the adversary to change just a single pixel in the input image is particularly transferable. This has implications for perturbations in the physical world, where an adversary may be able to make a few small changes to physical objects (e.g., strategically add a few dabs of paint to a stop sign) to cause a wide range of trained policies to behave incorrectly. We agree that impactful directions of future work are to develop more robust learning algorithms and discover new adversarial strategies.\"}",
"{\"title\": \"Re: Author Response\", \"comment\": \"Thanks for the clarification. However I'm afraid I fail to see how this is different from an adversary being able to affect a classifier without having access to the exact classification function, which from what I understand by reading this submission, is already known to be possible. I have to admit though that I'm not an expert in the field of adversarial techniques, so I may be missing something here -- I decreased the confidence in my review score by one notch accordingly.\"}",
"{\"title\": \"Differences between our work and previous work on black-box attacks\", \"comment\": \"Thank you for your response. In the supervised learning setting considered by previous work involving black-box attacks, there are two main approaches to the black-box scenario: (1) the adversary\\u2019s classifier is trained by querying the targeted model on a set of algorithmically-chosen inputs [1], and (2) the adversary\\u2019s classifier is trained on a dataset collected independently for the same task [2,3]. The second approach has the benefit of not requiring any interaction with the targeted model before adversarial examples are submitted, but it requires that the adversary be capable of collecting labeled data for the task of interest (which is often expensive). All of the above work has been conducted on image datasets such as MNIST and ImageNet.\\n\\nIn our work, the adversary must find adversarial examples that transfer across policies trained with different deep RL algorithms. It is not obvious a priori that transferability would hold given the different nature of RL applications. Indeed, when the adversary\\u2019s policy is trained with a different deep RL algorithm than the target policy, the two policies encounter different sequences of states and actions during training (analogous to different datasets in supervised learning) in addition to the fact that the two policies\\u2019 parameters are updated differently depending on the training algorithm. Our work shows that despite these additional challenges, transferability attacks still hold, which paves the way for black-box attacks against RL agents.\\n\\n[1] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples. ASIACCS 2017.\\n[2] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. ICLR 2014.\\n[3] N. Papernot, P. McDaniel, and I. Goodfellow. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. arXiv 2016.\"}",
"{\"title\": \"Interesting research direction but no significant result yet\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes to apply adversarial attack techniques in order to \\\"fool\\\" agents already trained by deep reinforcement learning techniques. It shows that small modifications of the input image (here the screen of Atari games) may cause significant drops in the agent's performance, even in situations where the adversary does not have access to the exact model (or even learning algorithm) used by the agent.\\n\\nThe paper is clear and the results definitely show there are cases where input alterations invisible to the human eye can completely \\\"break\\\" an agent. Although this is important to know and motivates further research toward more robust reinforcement learning approaches, the contribution of this paper remains very limited. It is a straightforward application of existing adversarial techniques for image classification, taking advantage of the fact that a trained policy is essentially a classifier.\\n\\nThe key result -- that it is possible to alter an agent's policy through adversarial state modification -- is thus not surprising. The potentially more novel and interesting aspect of this research, that is not done here, would be to analyze and understand the differences observed between various learning algorithms and adversarial techniques, and use this understanding to derive more robust algorithms (or new adversarial strategies).\\n\\nI realize that workshop submissions are supposed to be somewhat premature work, but in my opinion in its current state it is still too premature, since there is no significant learning to be gained at this point.\", \"update_after_discussion\": \"I increased my score as this work actually seems more novel than I originally thought, even if it is re-using existing techniques.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Clarification of transferability across algorithms\", \"comment\": \"All the policies we consider are trained with one of three deep RL algorithms: DQN, TRPO, and A3C. For the \\\"transferability across algorithms\\\" scenario, the target policy is trained with one of these three (e.g., DQN), and the adversary\\u2019s policy is trained with one of the other two algorithms (e.g., TRPO or A3C). As a result, the two policies could end up being quite different, for instance in terms of their strategy or the high-level features they extract.\\n\\nYou are correct that the only difference for the \\\"transferability across policies\\u201d scenario is the random initialization. This causes the two policies to encounter different states and actions during training, which may also lead to differences in the fully-trained policies.\"}",
"{\"title\": \"Re: Differences between our work and previous work on black-box attacks\", \"comment\": \"Ok thanks, but in 4.2 you cite only [1], which I guess means you are using the first approach, i.e. the adversary is \\\"querying the targeted model on a set of algorithmically-chosen inputs\\\". The way I read this, the adversary doesn't care that the targeted model represents a policy in a RL setting, from its point of view it's just a classifier on images...\\n\\nI'm afraid it's a bit late to delve into the details (sorry about that, there was a problem with my email and I didn't get the notification that I had a paper to review until after the deadline had passed). I'm going to give an extra point just in case, because it's likely I didn't understand correctly, but if this goes through I hope you can expand a bit the paper to explain more clearly what these adversarial algorithms are doing, and how their application differs from a classification setting.\"}",
"{\"title\": \"Re: Clarification of transferability across algorithms\", \"comment\": \"Thank you, this is much clearer now! I will update my review accordingly (even if it's probably too late)\"}"
]
} |
|
Hy-po5NFx | Efficient variational Bayesian neural network ensembles for outlier detection | [
"Nick Pawlowski",
"Miguel Jaques",
"Ben Glocker"
] | In this work we perform outlier detection using ensembles of neural networks obtained by variational approximation of the posterior in a Bayesian neural network setting. The variational parameters are obtained by sampling from the true posterior by gradient descent. We show our outlier detection results are comparable to those obtained using other efficient ensembling methods. | [
"Deep learning"
] | https://openreview.net/pdf?id=Hy-po5NFx | https://openreview.net/forum?id=Hy-po5NFx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"B1yBQe-og",
"rkx7FzUil",
"ByS0CS65e",
"rk9Edt6ig",
"rJLVvL39l"
],
"note_type": [
"official_review",
"comment",
"comment",
"comment",
"official_review"
],
"note_created": [
1489204022873,
1489541400157,
1488965325454,
1490028594347,
1488901934360
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper94/AnonReviewer1"
],
[
"~Miguel_Jaques1"
],
[
"~Nick_Pawlowski2"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper94/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Interesting idea\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors propose to approximate the posterior over weights by minimizing KL(p||q) where p denotes the true posterior and q denotes a diagonal Gaussian approximation. The authors collect multiple samples from the posterior by running SGLD and learn the variational approximation by updating the mean and variance of the weights in an incremental fashion (which removes the need to store all previous samples). In practice, the authors use Adam + small gradient noise with fixed standard deviation instead of a properly tuned SGLD. Overall, it's a quite simple idea to learn a distribution over weights and looks promising for outlier detection. It under-performs compared to explicit ensemble, however the proposed method is more memory efficient and requires just training 1 model.\", \"questions\": [\"Do you use all the samples or do you have some sort of \\\"burn-in\\\" phase where you ignore the first few samples as they could be quite bad? Did you try \\\"thinning\\\" the samples, i.e. use only every K-th sample or so?\", \"how do you choose d_x in (4)?\", \"There has been some work on expectation propagation for Bayesian neural networks which also minimize KL(p||q). See \\\"Stochastic expectation propagation\\\" by Li et al. 2015 and \\\"Distributed Bayesian learning with stochastic natural-gradient expectation propagation and the posterior server\\\" by Hasenclever et al. 2015. It would be nice to discuss connections to these works.\", \"The snapshot ensemble code is available online at https://github.com/gaohuang/SnapshotEnsemble. It'd be worth understanding why/when it underperforms.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply to AnonReviewer1\", \"comment\": [\"Thanks for your remarks. Answering each of the points in the same order:\", \"We burn the first 100 iterations. Since all the collected samples are used in the calculation of the Gaussian parameters, including the samples of low likelihood from the beginning of training did result in decreased performance. We tried thinning (between 2 and 20 steps) and it did not yield a significant change in performance, in the cases that the number of samples collected were the same, i.e. collecting 100 samples from 100 steps yields similar performance to collecting 100 samples from 200 steps with a thinning factor of 2 (assuming the previous burn-in). Given this, we chose to not use thinning and have a smaller number of gradient descent steps.\", \"d_x is defined as the ensemble prediction disagreement for the input x, as per Lakshminarayanan et al. (2016).\", \"Thanks for the suggestion, we were not aware of that line of work. We are looking into it and will update the paper to include any similarities worth mentioning.\", \"Yes, we are also suspicious about why we couldn't get snapshot ensembles to yield results in line with the original paper - we were a bit short on time and didn't manage to figure it out before the deadline. We have looked into the implementation and ours seems to be aligned, but we will keep looking (it is possible that this is a matter of hyperparameter choice, since there are several additional hyperparameters at play in this model).\"]}",
"{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"Many thanks for the positive feedback and the suggestions for further comparison and related work. We will take this into account and conduct the additional experiments.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Neat idea\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose to approximate inference in Bayesian neural networks by minimising the KL divergence between the true posterior and the approximating distribution KL(p|q) (rather than the usual VI objective KL(q|p)). The objective used (known as EP's KL objective) is known to result in better uncertainty estimates. The authors then propose to approximate the resulting intractable expectation over the posterior by Monte Carlo sampling, with samples produced from SGLD. This neat idea of summarising MCMC approximations using sufficient statistics originating from a variational approximation has some interesting implications, for example avoiding the memory requirements of storing many samples of the model parameters. This technique is then used in outlier detection, and demonstrated to slightly improve over MC sampling with dropout VI, and slightly under-perform compared to full ensembles that replicate many copies of the parameters (requiring much more memory to store the models).\\n\\nMy biggest concern with the paper, though, is that the authors don't actually use SGLD in the experiments since they \\\"found [SGLD hyper parameters] hard to tune\\\", and used the Adam optimiser instead. This means that the samples accumulated are not from the posterior of the model at question, but just *a* collection of network weights, casting a heavy shade on any possible interpretation of the experiment results. \\n\\nI like the ideas presented in the submission, and would encourage the authors to repeat their experimental evaluation, also comparing their method to more sensible baselines such as fully factorised Gaussian VI inference in Bayesian neural networks (which is closely related to the suggested technique).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByD6xlrFe | Hybrid Neural Networks over Time Series for Trend Forecasting | [
"Tao Lin",
"Tian Guo",
"Karl Aberer"
] | The trend of time series characterizes the intermediate upward and downward patterns of time series. Learning and forecasting the trend in time series data play an important role in many real applications, ranging from resource allocation in data centers, load schedule in smart grid and so on. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel hybrid neural network based learning approach over time series and the associated trend sequence. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of time series and uses a long-short term memory recurrent neural network (LSTM) to capture the sequential dependency in historical trend evolution. Some preliminary experimental results demonstrate the advantage of TreNet over cascade of CNN and LSTM, CNN, LSTM, Hidden Markov Model method and various kernel based baselines on real datasets. | [
"time series",
"trend",
"trenet",
"lstm",
"hybrid neural networks",
"cnn",
"intermediate upward",
"downward patterns",
"time series data",
"important role"
] | https://openreview.net/pdf?id=ByD6xlrFe | https://openreview.net/forum?id=ByD6xlrFe | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"S13x6Smil",
"SJa3C6msl",
"SkdD_Yaox"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489358067667,
1489391285283,
1490028639965
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper159/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper159/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Paper was not invited for submission to workshop tract\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper was not invited for submission to the workshop track of ICLR and suffers from the same limitations as the main paper submission: ad-hoc local trend computation. The authors merely shifted parts of the paper to an appendix to fit into 3 pages.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"shows improvements, but quite incremental\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper introduces a hybrid model called TreNet for modeling time series, by using a combination of an LSTM to capture long-term sequential dependencies, and a CNN to reason on local data at a particular time point. The outputs of the two components are combined in a feature fusion layer, and a fully connected layer is used to output the final prediction at a time step.\\n\\nThe model is a quite incremental improvement over previous work combining LSTMs and CNNs (e.g. CLSTM), and improvement in performance is also very incremental as a result. I don't feel that the contribution is sufficiently significant in novelty or performance to recommend acceptance.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
HJC88BzFl | Precise Recovery of Latent Vectors from Generative Adversarial Networks | [
"Zachary C. Lipton",
"Subarna Tripathi"
] | Generative adversarial networks (GANs) transform latent vectors into visually plausible images. It is generally thought that the original GAN formulation gives no out-of-the-box method to reverse the mapping, projecting images back into latent space. We introduce a simple, gradient-based technique called stochastic clipping. In experiments, for images generated by the GAN, we exactly recover their latent vector pre-images 100% of the time. Additional experiments demonstrate that this method is robust to noise. Finally, we show that even for unseen images, our method appears to recover unique encodings. | [
"Computer vision",
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=HJC88BzFl | https://openreview.net/forum?id=HJC88BzFl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"Hk20xYSig",
"HkaS3Woqx",
"HytgSTJog",
"B1YMuKTjl"
],
"note_type": [
"official_review",
"official_review",
"comment",
"comment"
],
"note_created": [
1489502420239,
1488817220803,
1489126640734,
1490028560619
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper31/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper31/AnonReviewer2"
],
[
"~Zachary_Chase_Lipton1"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes a method to reconstruct the latent code vector corresponding to a given observation under a GAN generative model. Unlike BiGAN and ALI, this does not require a different training procedure or an additional inference network. Instead, projected gradient descent in code space is used, with an additional heuristic called \\\"stochastic clipping\\\" that randomly resets code components that hit the boundary.\\n\\nThe proposed technique is simple, but the fidelity of the reconstructed images looks good. The \\\"stochastic clipping\\\" technique is interesting, but quite ad-hoc. \\n\\nIf the goal is to simply keep codes away from the boundary, the authors could also try a simple entropy regularization term (map each code coordinate to [0,1] and use a Bernoulli entropy), as a barrier function for the constraint set. Additionally, one could combine this entropy regularization and the projection step by using exponentiated gradient (Warmuth, 1997), a.k.a. entropic mirror descent (Beck and Teboulle, 2003), which might simplify the algorithm.\\n\\nFor a full conference submission, the authors should explore these things as well as evaluate on more tasks than reconstruction, like classification on the code vectors (as explored in the BiGAN and ALI papers).\", \"small_notes\": \"in Section 2, many uses of the plural term \\\"minima\\\" where the singular term \\\"minimum\\\" should be used.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice simple heuristic\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors present a heuristic for reliably recovering the latent state vector that generates a GAN sample. It seems to work well in practice, providing accurate recovery up to the level of accuracy measured.\", \"specific_comments\": \"On the topic of reconstructing latent states from GAN samples, the authors should probably include references to BiGAN and adversarially-learned inference, which are slightly different but still closely related. Metz, Poole, Pfau and Sohl-Dickstein (2017) also reconstruct latent states from pixels, but are more focused on accurately generating out-of-sample images than reconstructing the latent state.\\n\\nThe clipping heuristic used is more formally known as a case of projected or proximal gradient descent in the optimization literature. Probably worth putting it in the right context. The heuristic of randomly resetting any index that goes out of bounds is novel as far as I know.\\n\\nGiven regular gradient descent with no clipping is still 98% effective according to their metrics, a more stringent measure of accuracy which more clearly demonstrates the gap between the baseline and the proposed method would be helpful.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Revisions made\", \"comment\": \"Dear reviewer,\\n\\nThanks for the pointers on related work. While we had referred to the BiGAN paper, it wasn't previously mentioned in the related work section. We have dutifully mentioned each paper in related work and highlighted the similarities and differences to the method presented here.\\n\\nWe agree with your judgment on terminology and have accordingly changed all mentions of \\\"standard clipping\\\" to \\\"projected gradient\\\" to better agree with the optimization literature. \\n\\nAdditionally, we will run new experiments to a stricter measure of accuracy and include the updated results in an updated draft when ready. Already, we've expanded the experiments on reconstruction from noisy images, showing that the method continues to work well up to greater degrees of noise than previously reported. \\n\\n[updated version posted as revision]\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
rJR7aaEYx | Image Captioning with Sparse LSTM | [
"Yujun Lin",
"Song Han",
"Yu Wang",
"William J. Dally"
] | Long Short-Term Memory (LSTM) is widely used to solve sequence modeling problems, for example, image captioning. We found the LSTM cells are heavily redundant. We adopt network pruning to reduce the redundancy of LSTM and introduce sparsity as a new regularization to reduce overfitting. We can achieve better performance than the dense baseline while reducing the total number of parameters in LSTM by more than 80%, from 2.1 million to only 0.4 million. Sparse LSTM can improve the BLEU-4 score by 1.3 points on Flickr8k dataset and CIDEr score by 1.7 points on MSCOCO dataset. We explore four types of pruning policies on LSTM, visualize the sparsity pattern, weight distribution of sparse LSTM and analyze the pros and cons of each policy. | [
"Deep learning"
] | https://openreview.net/pdf?id=rJR7aaEYx | https://openreview.net/forum?id=rJR7aaEYx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"B1hBo94jx",
"BkuK7kZog",
"S1druF6il"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489443652246,
1489200000303,
1490028607659
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper113/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper113/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper uses the sparsity-inducing techniques of Han et al 2016 and Narang et al 2017 for the LSTM used in image captioning. This amounts to four techniques Type 1 ... Type 4, with 1,1,3,4 hyperparameters respectively. It is not clear from the paper how these hyperparameters are set or how they were tuned, except for the mentioned 80% sparsity. The experiments consist of running the 4 types and comparing them to a baseline, which uses the full LSTM. One has to squint quite hard to see which sparsity approach works best, and the result tables don't offer very consistent takeaways.\\n\\nOverall, it is not clear if these results should be a paper by itself, or merely one section in either Han et al 2016 or Narang et al 2017 papers, both of which report experiments with RNNs already. Consistent with the results already presented in these much more thorough papers, the added sparsity appears to have a small regularizing effect. Therefore, it is not clear what value is added with this work.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper explores regularization of LSTM-style recurrent networks using sparsity, showing improved image captioning performance from hybrid CNN-LSTM models and comparing different approaches to applying sparsification.\", \"pros\": \"Explores sparsity as a regularization technique for LSTMs, a widely used recurrent network architecture.\\n\\nAnalyzes several sparsification protocols, varying the rate and timing of sparsification during training.\\n\\nExperiments show that the simplest sparsification approach (Type I) where sparsification is applied only once after a single iteration of training gets consistently good performance, better than the baseline and mostly better than the more complex protocols (Types II-IV).\", \"cons\": \"Not much novelty -- the aim of the paper is to compare two existing RNN sparsification methods (Han et al 2016b & Narang et al. 2017) for image captioning with LSTMs. It\\u2019s not clear that the experimental results would consistently generalize to other problems or non-RNN architectures. (Han et al. 2016b, the source of the sparsification method, includes experiments in other problem settings and with pure convnets, in addition to image captioning RNNs.)\\n\\nThe largest performance improvements from sparsification aren\\u2019t very big, and the runtime improvement isn\\u2019t quantified (I would guess it\\u2019s relatively small in this case, with the convnet being the bulk of the execution time).\\n\\nOnly the sparsification schedule is varied in the experiments. The paper could explore the impact of other hyperparameters, such as the sparsity proportion (all experiments use 80% sparsity).\", \"minor\": \"Figure 4 sparsity patterns -- if the only takeaway from these is that different LSTM gates have different degrees of sparsity, these visualizations could be replaced with a simple table or plot of that summary statistic for each gate. The actual patterns (besides the overall \\u201cdarkness\\u201d indicating the degree of sparsity) seem like noise.\\n\\n\\nOverall, the paper doesn\\u2019t propose anything new, and though the evaluation may be a useful reference for practitioners, its scope is too limited given that only the sparsification schedule is explored.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
Hynn8SHOx | DL-gleaning: An approach for Improving inference speed and accuracy | [
"HyunYong Lee and Byung-Tak Lee"
] | Improving inference speed and accuracy is one of the main objectives of current deep learning-related research. In this paper, we introduce our approach using middle output layer for this purpose. From the feasibility study using Inception-v4, we found that our approach has potential to reduce the average inference time while increasing the inference accuracy. | [
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=Hynn8SHOx | https://openreview.net/forum?id=Hynn8SHOx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"HJAZOYaog",
"B1OJyGKcx",
"B1Ymc5lol"
],
"note_type": [
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028550200,
1488686815884,
1489181216651
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper4/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper4/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Not promising\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper proposes to add output layers from middle layers to speed up inference. The paper showed that in the oracle condition speedup is possible. However, because of errors in making decisions on whether to pick the middle layer result, it's very unlikely that the proposed approach would work. There is no free lunch.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"No promising results\", \"rating\": \"3: Clear rejection\", \"review\": \"This work tries to speed up inference by taking a shortcut from the network. The paper shows that an accuracy increase is possible with such a shortcut. However, the way the accuracy is calculated for the proposed model is very unrealistic. The accuracy is calculated by only accepting the correct answers from the mid-level output. This way of an accuracy calculation will likely show an increase when you take the best out of the two networks. It is also known that in many cases shallower networks are less accurate than deeper models, so it is only normal that mid-level output gives some reasonable predictions but in average worse than the final output. I am not convinced about the future of this work, since there is no conducted experiments with promising results.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJGIC1BFe | A Contextual Discretization framework for compressing Recurrent Neural Networks | [
"Aidan Clark",
"Vinay Uday Prabhu",
"John Whaley"
] | In this paper, we address the issue of training Recurrent Neural Networks with binary weights and introduce a novel Contextualized Discretization (CD) framework and showcase its effectiveness across multiple RNN architectures and two disparate tasks. We also propose a modified GRU architecture that allows harnessing the CD method and reclaim the exclusive usage of weights in $\{-1, 1\}$, which in turn reduces the number of power-two bit multiplications from $O(n^3)$ to $O(n^2)$. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=SJGIC1BFe | https://openreview.net/forum?id=SJGIC1BFe | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"rJWDutTse",
"SyFiItyse",
"S1FG841je"
],
"note_type": [
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028632996,
1489110689117,
1489090065451
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper150/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper150/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"A simple trick for binary recurrent nets\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The authors introduce a simple trick to improve the training of binary-weighted RNNs.\\nThe motivation of this work is to consider the difference in the distribution of weights between the input-to-hidden connection and hidden-to-hidden connection. The main idea is to use different scaling terms for each weight (i.e., input-to-hidden and hidden-to-hidden). This is shown with GRU-RNN implementation.\\n\\nAs shown in Fig. 1 (b), the performance is dramatically improved by using this simple scaling trick. I will complain a bit about the term 'contextual discretization'. It seems a bit overselling the idea after I learned that it is about hand-picking different scaling variables to different multiplication terms in the update function of GRU-RNN. The scaling terms are chosen after observing the distribution of weights in a GRU-RNN using full-precision, therefore the values reported in this paper may not apply in general. However, the method is surprisingly effective based on the reported results. The paper is clearly written, and the method is tested on two different domains, text and audio.\", \"minor_comments\": \"citation to GRU-RNN is missing.\", \"pros\": \"A simple trick that works well.\", \"cons\": \"The method is not entirely new, I believe a similar trick was already proposed for non-recurrent nets.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"rating\": \"3: Clear rejection\", \"review\": \"In this paper, the authors propose a weight discretization scheme for training recurrent neural networks. The work shows that RNNs can be successfully trained on language modelling and music generation tasks when the binary weights are chosen to be much smaller and different for the hidden-to-hidden and input-to-hidden connections.\\n\\nThe idea of the paper is very simple but apparently effective. Some of the earlier research about binary and ternary weights did involve different choices for different parts of networks (e.g, the different GRU gates in Ott et al., 2016). Since recurrent neural networks are complicated non-linear dynamical systems, it makes sense that the scaling of the weights should be chosen carefully to allow for useful dynamics which are useful for solving practical problems. It's good that the paper draws attention to this. I think that the branding of this method as 'contextual discretization' is making it seem more elaborate than it really is. \\n\\nWhile the authors cite the work by Rastegari et al. (2016), they don't point out that the Binary-Weight-Networks described there are almost identical to the method proposed for implementing binary GRU's. In Binary-Weight-Networks, a multiplication with a binary weight vector is followed by a multiplication with a scalar which has been chosen to make the end result as similar as possible to a multiplication with the original real-valued weights. While that method has apparently not been applied to RNNs yet, it is more elegant than picking these values by hand and it can be seen as a more general method than the one proposed in this paper. I think that this similarity severely undermines the novelty of the ideas presented.\\n\\nThe paper is clearly written and easy to understand. Given how short the paper is, there could have been a more elaborate discussion of related work.\", \"pros\": \"The method is very simple yet effective for the tasks it is evaluated on and this is an interesting result.\", \"cons\": \"The method is almost identical to Binary-Weight-Nets.\\nThe hand-picked scaling values could be very task specific.\\nThe paper exaggerates the novelty of the method by giving it a new name.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
r1mQ01SYl | On Hyperparameter Optimization in Learning Systems | [
"Luca Franceschi",
"Michele Donini",
"Paolo Frasconi",
"Massimiliano Pontil"
] | We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. The reverse-mode procedure extends previous work by (Maclaurin et al. 2015) and offers the opportunity to insert constraints on the hyperparameters in a natural way. The forward-mode procedure is suitable for stochastic hyperparameter updates, which may significantly speedup the overall hyperparameter optimization procedure. | [
"hyperparameter optimization",
"systems",
"procedures",
"hyperparameters",
"procedure",
"gradient",
"validation error",
"respect",
"iterative learning algorithm",
"ways"
] | https://openreview.net/pdf?id=r1mQ01SYl | https://openreview.net/forum?id=r1mQ01SYl | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"ryyDdY6oe",
"rJW42RHoe",
"ByjnC9Lsx",
"r1sV1-1og"
],
"note_type": [
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1490028631489,
1489525801406,
1489575602632,
1489076018937
],
"note_signatures": [
[
"ICLR.cc/2017/pcs"
],
[
"~Massimiliano_Pontil4"
],
[
"ICLR.cc/2017/workshop/paper148/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper148/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"thanks and link to longer paper\", \"comment\": \"Thanks for the positive comment. We now also have a longer in ArXiv (see: https://arxiv.org/pdf/1703.01785.pdf) with more detailed derivations and experimental results, covering some of your questions.\"}",
"{\"title\": \"appropriate subject, good analysis\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The topic of hyperparameter optimisation is quite obviously pertinent to ICLR.\\nThe approach the authors take seems straightforward and the results presented in the recently added full paper seem good. In particular the \\\"real-time\\\" updates of the forward mode optimisation are a useful innovation. I believe many ICLR attendants would find this interesting and useful.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Good paper\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper considers an important topic in hyperparameter optimization of learning algorithms.\\nUnfortunately, due to the page limit of the workshop track the material is too compressed starting from \\npage 1 where most variables can be guessed but are not defined (v_t, w_t, mu, eta, m and \\u03b3 in \\\"In this example, \\u03bb = (\\u00b5, \\u03b3)\\\"). \\nThe paper might benefit from figures showing optimization trajectories of hyperparameters. \\nThe paper is missing the results regarding the scalability with the number of parameters and hyperparameters. \\nThe results with grid search baseline are preliminary as suggested by the authors. \\nA supplementary/appendix figure (instead of Figure 1- Right) with the accuracy vs time for random search and the proposed approach might improve the presentation of the experimental results.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S15zzTmYx | INCREMENTAL LEARNING WITH PRE-TRAINED CONVOLUTIONAL NEURAL NETWORKS AND BINARY ASSOCIATIVE MEMORIES | [
"Ghouthi Boukli Hacene",
"Vincent Gripon",
"Nicolas Farrugia",
"Mattieu Arzel",
"Michel Jezequel"
] | Thanks to their ability to absorb large amounts of data, Convolutional Neural Networks (CNNs) have become the state-of-the-art in various vision challenges, sometimes even on par with biological vision. CNNs rely on optimisation routines that typically require intensive computational power, thus the question of implementing CNNs on embedded architectures is a very active field of research. Of particular interest is the problem of incremental learning, where the device adapts to new observations or classes. To tackle this challenging problem, we propose to combine pre-trained CNNs with Binary Associative Memories, using product random sampling as an intermediate between the two methods. The obtained architecture requires significantly less computational power and memory usage than existing counterparts. Moreover, using various challenging vision datasets we show that the proposed architecture is able to perform one-shot learning – even using only part of the dataset –, while keeping very good accuracy. | [
"Computer vision",
"Deep learning",
"Supervised Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=S15zzTmYx | https://openreview.net/forum?id=S15zzTmYx | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"Bkv-e4Sig",
"r14mdKTjg",
"ry_elqIog",
"ryDiqwSol",
"SknwBcBie"
],
"note_type": [
"official_review",
"comment",
"comment",
"comment",
"official_review"
],
"note_created": [
1489481726628,
1490028571870,
1489571823891,
1489496735320,
1489507684135
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper52/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Ghouthi_BOUKLI_HACENE1"
],
[
"~Ghouthi_BOUKLI_HACENE1"
],
[
"ICLR.cc/2017/workshop/paper52/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Important topic, but surprisingly modest performance\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper introduces a combination of transfer learning based on pre-trained DNNs with associative memory.\\nThe idea itself makes sense and I am surprised no one has explored it before.\\n\\nHowever, the results seem to be relatively poor and thus the method does not seem to be very valuable as is.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Added baseline results using a linear softmax classifer\", \"comment\": \"The authors wish to thank the anonymous reviewer for his encouraging feedback on the proposed method. We just uploaded a new version of the paper, in which we provide results from a simple baseline, using a linear softmax classifier (logistic regression) trained on the output of the CNN.\"}",
"{\"title\": \"Results updated with ImageNet subset\", \"comment\": \"We wish to thank the anonymous reviewer for his encouraging comment. We just updated the paper, adding results from another dataset (Imagenet subset 2, disjoint from the one used to train the CNN) in table 1, in addition to CIFAR. Note that CIFAR is already challenging using Inception feature extraction (87 % success with a K-NN). Importantly, our approach enables important reductions in both memory and processing complexity, which was the main motivation for this work.\"}",
"{\"title\": \"Elegant approach, but poor performance and missing baselines\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes a technique for doing one-shot learning in computation/memory constrained settings, by taking features from a pre-trained CNN, product-quantizing them (with random anchors) to obtain an alphabet, and linking \\\"words\\\" from this alphabet (the PQ chunk centers comprising used to approximate a given feature vector) with class labels via a binary associative memory.\\n\\nThe setting is interesting and basic idea is (to the best of my knowledge) novel and appealing. However the experimental results aren't compelling and lack natural baselines. At least as a reference, it would have been nice to see the performance of a simple linear classifier trained on the feature vectors, if not a comparison against previous techniques that attempt to learn such linear classifiers incrementally. As it is it is hard to judge the merit of the proposed approach relative to the state of the art, and on their own the results are somewhat underwhelming.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HkG1JKVKg | Design and preliminary evaluation of team based competitions in video forecasting | [
"Florin Popescu",
"Isabelle Guyon",
"Lisheng Sun",
"Diviyan Kalainathan",
"Sebastien Treguer",
"Cecile Capponi",
"Stephane Ayache",
"Xavier Baro",
"Julio C. Jacques Junior",
"Sergio Escalera"
] | The article describes the design of a series of competitions the aim of which is the evaluation and the further development of the state-of-the art in spatio-temporal forecasting. The means of doing so is to provide novel test data incrementally, while evaluating work of competing teams that submit algorithms in terms of performance criteria which include accuracy of predictions and time of computation. Initial results are presented hereing, whereas final results of the ongoing challenge will be presented at ICLR. | [
"competitions",
"design",
"preliminary evaluation",
"team",
"video forecasting design",
"video",
"article",
"series",
"aim",
"evaluation"
] | https://openreview.net/pdf?id=HkG1JKVKg | https://openreview.net/forum?id=HkG1JKVKg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SyG3eC7ix",
"Hkba1vQoe",
"HyLNdKpsl"
],
"note_type": [
"official_review",
"official_review",
"comment"
],
"note_created": [
1489391786201,
1489362872987,
1490028590210
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper85/AnonReviewer1"
],
[
"ICLR.cc/2017/workshop/paper85/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"unclear and leaves questions\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper presents the design process of upcoming challenges in spatio-temporal forecasting tasks. The work proposes a 'hackathon-type setup' where teams are presented with new data and are expected to build a reasonable model.\\n\\nThis approach seems interesting, but the paper is unclear in writing and confusing as to details of the task, which seems to be video prediction. Furthermore, the hackathon-type setup seems incompatible with existing work in video prediction which should take much longer than a few hours to develop and train, even with a starter kit. If only code can be written during the hackathon and evaluated after, it is unclear whether this can be a successful setup.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This workshop paper presents a platform for a competition on spatial-time forecasting (e.g., video), based on CodeLab.\\n1. Logistic: Multiple teams will attend this competition and are given an iPython notebook to start with. They need to finish their codes in 3 hours and submit. A coach is assigned to each team to solve any technical problems. \\n2. Authors have collected a talking-to-camera video dataset focusing on face for the competition, and mentioned the data preprocessing step in details.\\n3. Some baseline methods have been implemented and their performances are listed in Table 1. \\n\\nThere are many issues to be answered. For example:\\n1. The target of this competition is not clear. The data are preprocessed to focus on faces and the topic is spatial-time forecasting. An obvious guess is to predict the pixels of some future frame, which can also be inferred from the description of the baseline methods and error metric. However, it is not clear whether the competition wants to predict the next frame, or focused on long-term prediction. It is also not clear whether there are other related tasks. \\n\\n2. The purpose is not clear. Who will attend the competition? Why is time limited to 3 hours? How many libraries are provided? Are all the attendants expected to reimplement state-of-the-art previous methods in an more efficient way, or to develop a novel approach for this particular task? or to test their ability to efficiently find the right submodules and combine from a massive number of libraries? Why team work is needed? It is not clear. \\n\\nOverall, I feel this workshop paper, as a proposal, needs to be further revised.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
HyzsgBEtg | Dance Dance Convolution | [
"Chris Donahue",
"Zachary C. Lipton",
"Julian McAuley"
] | Dance Dance Revolution (DDR) is a popular rhythm-based video game. Players perform steps on a dance platform in synchronization with music as directed by on-screen step charts. While many step charts are available in standardized packs, users may grow tired of existing charts, or wish to dance to a song for which no chart exists. We introduce the task of learning to choreograph. Given a raw audio track, the goal is to produce a new step chart. This task decomposes naturally into two subtasks: deciding when to place steps and deciding which steps to select. We demonstrate deep learning solutions for both tasks and establish strong benchmarks for future work. | [
"Deep learning",
"Supervised Learning",
"Applications",
"Games"
] | https://openreview.net/pdf?id=HyzsgBEtg | https://openreview.net/forum?id=HyzsgBEtg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"r1yGGiHje",
"HypXbJZoe",
"SJ14OYTol",
"H1R3xdUsx",
"SJt1FuIox"
],
"note_type": [
"official_review",
"official_review",
"comment",
"comment",
"comment"
],
"note_created": [
1489510919425,
1489199396993,
1490028583101,
1489563830080,
1489565921362
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper74/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper74/AnonReviewer1"
],
[
"ICLR.cc/2017/pcs"
],
[
"~Chris_Donahue1"
],
[
"~Chris_Donahue1"
]
],
"structured_content_str": [
"{\"title\": \"an interesting application and a work in progress\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This is an interesting application of DL in rhythm-based video games that learns two sub-tasks: step placement from raw audio, and step selection from ground truth placement. This seems to be a work in progress, as it doesn't address the complete end-to-end problem yet.\\n\\nSome key information is missing in the paper. For example, the authors observed that data augmentation and inclusion of manual features (beta phase etc) significantly improved the performance but there is no comparison results given. Have the authors tried to learn such manual features from data?\\n\\nAlso there is a clear performance gap between Fraxtill and ITG, particularly on the step selection task under the best model LSTM64. Also, there is little difference between LSTM5 and LSTM64 on ITG while the improvement is more clear on Fraxtill. What are the reasons behind these observations?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes a system to generate \\\"step charts\\\" for the game Dance Dance Revolution (DDR) from audio signals. The system constists of two neural networks: one to determine where to place the steps in time, and one to select the type of the steps (up / down / left / right).\\n\\nWhile the application is quite unusual, the paper is clear and the design choices for the system are well motivated. The evaluation is also quite solid, with several baselines provided.\", \"a_few_questions_and_comments\": [\"For the step selection, the LSTM is conditioned on beat phase. How is beat phase determined? As far as I can tell this is not described anywhere.\", \"Why does adding time delta features help when using LSTM? Shouldn't it be able to derive those by itself?\", \"The AUC scores reported in table 1 are actually pretty low across the board. I suppose this is somewhat expected but it would be good to briefly discuss it.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"No Title\", \"comment\": \"Dear reviewer,\\n\\nThanks for your thoughtful comments. A full-length version (forthcoming) contains many of these details, some of which we had to omit here for brevity. Some specific replies and clarifications follow.\\n\\nBeat phase is a one-hot encoding over the four possible 1/16th note subdivisions. We can calculate this information programmatically because charts in our corpus contain metronomic markings. \\n\\nThe step selection model runs over steps (chosen by step placement algorithm), not over evenly spaced intervals. The delta time features encode the duration since the last step and until the next step, both of which the step-selection LSTM cannot infer by itself.\\n\\nAs with many metrics, AUC is good primarily as a relative measure of performance. Note that for this problem, the \\u201cground truth\\u201d step placements depend on the individual choreographer and the choices they made on that particular chart. \\n\\nEven if the same choreographer wrote a second step chart for the same song, the chosen step placements would not overlap completely with the previous choices. We stress that in this case there is a ceiling on the highest possible AUC and that this metric should only be interpreted relatively to compare models.\"}",
"{\"title\": \"No Title\", \"comment\": \"Dear reviewer,\\n\\nThank you for the thoughtful review. We omitted some of these details to accommodate the extended abstract length. \\n\\nAn unconditioned LSTM64 model trained on non-augmented Fraxtil data achieved a perplexity of 3.53 while the same unconditioned LSTM64 model trained on augmented data achieved a perplexity of 3.35. These scores are significantly worse than the 3.01 perplexity achieved by the beat-conditioned LSTM64 model trained on augmented data.\\n\\nWe believe the performance gap (on step selection) between the two datasets can be attributed to the fact that Fraxtil is a single-author dataset while ITG contains annotations from 9 choreographers. Author style tends to be distinctive and thus the single-author sequences are more predictable.\\n\\nWe have updated the workshop draft to address these points.\"}"
]
} |
|
S1OS0krtg | Knowledge distillation using unlabeled mismatched images | [
"Mandar Kulkarni",
"Kalpesh Patil",
"Shirish Karande"
] | Current approaches for Knowledge Distillation (KD) either directly use training data or sample from the training data distribution. In this paper, we demonstrate effectiveness of 'mismatched' unlabeled stimulus to perform KD for image classification networks. For illustration, we consider scenarios where this is a complete absence of training data, or mismatched stimulus has to be used for augmenting a small amount of training data. We demonstrate that stimulus complexity is a key factor for distillation's good performance. Our examples include use of various datasets for stimulating MNIST and CIFAR teachers. | [
"Deep learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=S1OS0krtg | https://openreview.net/forum?id=S1OS0krtg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SyZOEnl9l",
"rklv_Y6sl",
"rkoNDI39e",
"H13xj5Goe",
"Hy3SzQnsl",
"H17og7s5x",
"B1d71Uyog"
],
"note_type": [
"official_review",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"official_review"
],
"note_created": [
1488139368881,
1490028632248,
1488901939447,
1489312499610,
1489936964012,
1488822426620,
1489096480189
],
"note_signatures": [
[
"ICLR.cc/2017/workshop/paper149/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper149/AnonReviewer2"
],
[
"~Mandar_Kulkarni1"
],
[
"~Mandar_Kulkarni1"
],
[
"~Mandar_Kulkarni1"
],
[
"ICLR.cc/2017/workshop/paper149/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"In this paper, the authors study knowledge distillation in a scenario where the data on which the teacher is trained is no longer available. They propose to use other stimulus (in the form of other datasets or noise) to train the student model, and evaluate the performance on the original dataset.\\n\\nThe main finding seems to be that using natural images as stimulus is better than using random noise. This is not very surprising. \\n\\nFrom a theoretical standpoint, the fact that random stimulus allows for knowledge transfer is interesting. But this was already covered by Papamakarios (2015). More practically, I cannot see how this kind of technique would be useful in real situations (i.e. I can envision a scenario in which we no longer have the data on which the teacher was trained, but why would we ever be interested in training a student model to perform well on that data?)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"thank you\", \"comment\": \"Thank you for the detailed response. I have revised my score upward given the response.\\nI would additionally request that a thorough revision be applied to the paper to take care of typos.\\n\\nThanks!\"}",
"{\"title\": \"Additional experiments\", \"comment\": \"Thank you for the review and comments.\\n\\nMain contribution of our work is to demonstrate the effectiveness of 'mismatched' unlabelled stimulus for distillation when training data is not available. In addition, we study the effect of complexity of the stimulus on the generalisation performance of the student network. Also, we validate the use of unlabelled stimulus for data augmentation.\\n\\nFor MNIST teacher, we have already shown results using CIFAR, STL and Shape stimulus. For CIFAR teacher, we now add additional experiments using mismatched stimulus such as MNIST, Shape, SVHN and Texture.\\n\\nThe details of experiments based on reviewer's questions and corresponding results are summarised below. \\n\\n1. Given that you don't have access to the training data, what is your stopping criteria when distilling knowledge from the teacher to the student model? \\n\\nOur objective function is to minimise the cross entropy loss between soft targets of the teacher and the student on the unlabelled stimulus. We terminate the iterations when the cross entropy loss cease to change. To visualise possibility of overfitting (if any), we plotted the cross entropy loss and the test accuracy for too cases: MNIST teacher using 1k CIFAR stimulus and CIFAR teacher using 1k Texture stimulus. The plots are added in the Appendix under the subsection 'Plot of cross entropy loss and test accuracy.' \\nNote that, the test accuracy and the cross entropy loss settles down with more iterations. Even with small training size (1k), overfitting is not observed. This could be because of soft labels used in the optimisation.\\n\\n2. Don't you think that CIFAR and Tiny ImageNet are both very similar as they both contain natural images? Did you try to leverage MNIST for distilling knowledge in the CIFAR experiment.\\n\\nAs mentioned in the reply to the first reviewer's comments, we have tried using MNIST stimulus for CIFAR teacher, but it did not work well. \\nTo quantify the result, we used various stimulus such as MNIST, Shape ,Street View House Numbers (SVHN), Texture dataset (https://www.robots.ox.ac.uk/~vgg/data/dtd/). The description of the Shape dataset is provided in the the reply to the first reviewer's comments. \\nNote that, none of these dataset has any overlap with the CIFAR data.\\nAlso, these datasets are of varied 'complexity' (variations). Visually, the order of complexity is MNIST < Shape < SVHN < Texture < TinyImagenet. The experimental results are added in the revised paper.\\nThe result re-iterate the fact that the Complexity of the stimulus plays an important role in better student generalisation. For the similar result with MNIST teacher, please refer to our first reply.\\nWe explored one of the quantification approach for complexity. The results are given the Appendix under the subsection 'Quantification of complexity'.\\n\\n3. Unlabelled stimulus for data augmentation\\nThough we have considered a scenario of zero training data, in the real application, the mismatched stimulus can also be used for data augmentation in the cases where a small training set is available. We have performed experiments with CIFAR teacher using 5k labelled samples from CIFAR dataset and augmenting it with 5k unlabelled samples from different stimulus. Results of the experiment are tabulated below. \\n\\n Only CIFAR + Noise + SVHN +MNIST +Shape +Texture + TinyImagenet \\n-------------------------------------------------------------------------------------------------------------------------------------------\\nCIFAR Test acc 0.5477 0.582 0.586 0.594 0.593 0.632 0.634 \\n\\nNote that augmenting CIFAR training set with 5k Texture samples provided significant improvement in the test acc.\\nFor the similar results with MNIST teacher, please refer to our first reply.\", \"novelty\": \"As pointed out by the reviewer, [1] has shown results for CIFAR teacher with the subset of 80M unlabeled Tiny Imagenet dataset. \\nIn our work, we use stimulus from 120k TinyImagenet labelled dataset obtained from the following links\", \"http\": \"//cs231n.stanford.edu/tiny-imagenet-100-B.zip\\n\\nThough CIFAR and 120k TinyImagenet are both natural image datasets, we observe that there is no significant overlap between their classes. \\nAdditionally, we used mismatched datasets such as MNIST, Shape, SVHN and Texture dataset as the stimulus for distillation.\\n\\nExperimental results demonstrate that complexity of stimulus is the key factor for a good performance. \\nAlso, unlabelled stimulus can be effective for data augmentation as well.\\nTo best of our knowledge, these trends are not studied in the literature yet.\"}",
"{\"title\": \"Request for feedback\", \"comment\": \"Your feedback was very useful in highlighting the right direction for the work. We would be grateful if you could comment on the updated version, which focuses on how different datasets have varied effectiveness as stimulus. Any additional comments would help guide our future work.\"}",
"{\"title\": \"Additional Experiments\", \"comment\": \"Thank you for the review and comments.\\n\\nAs per the comments, we performed additional experiments to investigate the effect of complexity of the stimulus and the use of stimulus for data augmentation. The experiments and results are summarized below. \\n\\n1. Effect of complexity of unlabeled stimulus:\\n\\nThough we have directly shown results using natural image datasets, we suspect that the 'complexity' of the stimulus plays an important role. To validate this, we used a simple shape dataset as a stimulus with MNIST teacher. The dataset was obtained from the following link.\", \"http\": \"//www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/BabyAIImageAndQuestionDatasets\\n\\nIt was previously used for demonstrating the effectiveness of curriculum learning (Bengio et al., ICML 2009).\\nThe dataset consists of 10k examples of simple shape images. Due to its small variability, the dataset is simpler than CIFAR or STL.\\nThe resultant plot has been added in Fig. 1 (a)&(b). \\nNote that, though the dataset performs better than noise, it performs worse than CIFAR and STL. \\nFurther, we used MNIST data as the stimulus for CIFAR teacher and it did not work well.\\nWe believe that the complexity of the stimulus could be a key factor for good generalisation performance. \\n \\n2. Use of stimulus for data augmentation\\n\\nUnavailability of training data is a practical scenario when there are strict rules about data crossing digital boundaries and the time duration for which data can be stored; further, there exist circumstances where even the weights used for the network may have to remain private. \\nThe following paper discusses the privacy issues related to the training data.\", \"https\": \"//openreview.net/pdf?id=HkwoSDPgg\", \"when_one_has_to_distill_the_knowledge_in_a_network_to_match_various_practical_constraints\": \"(commodity hardware, large capacity to absorb newer data, or create some for specialist/generalist, and/or privacy), in the absence of training data there is a genuine problem. Indeed there has been work on using random noise; however, the performance of random noise does not come close to the teacher, and for data even slightly more complicated the error floor is hit rather early.\\n\\nThough we considered an extreme scenario with no training data available, an unlabelled stimulus can be effective for data augmentation as well.\\nTo validate this, we performed an experiment where we use 500 samples from the MNIST training set along with 3k unlabelled stimulus from various datasets.\\nWe perform this experiment with and without using training labels. While using training labels along with the unlabelled stimulus, a uniform prior is assumed for the labels of the stimulus.\\n \\nResults are tabulated below.\\n\\nUsing training labels      With 500 training samples      MNIST + CIFAR(3k)     MNIST + shape(3k)    MNIST + noise(3k)    \\n-------------------------------------------------------------------------------------------------------------------------------------------------------\\n       YES                              0.955                                 0.972                            0.973                          0.956\\n-------------------------------------------------------------------------------------------------------------------------------------------------------\\n        No                               0.95                                  0.972                              -                                 -\\n--------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nWe note that augmenting the small training set with unlabelled stimulus (CIFAR and shape) provides a fair improvement in the test accuracy (approx. 2%).\"}",
"{\"title\": \"Novelty and Significance?\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper tackles semi-supervised learning leveraging Knowledge Distillation. In particular, they investigate the setting where a teacher has been pretrained but the training data are not available anymore.\\n\\nThe authors show that distilling knowledge using samples from another distribution/dataset achieves reasonable results and outperforms knowledge distillation with noisy inputs.\\n\\n\\n- Questions:\\nGiven that you don't have access to the training data, what is your stopping criteria when distilling knowledge from the teacher to the student model?\\nDon't you think that CIFAR and Tiny ImageNet are both very similar as they both contain natural images? Did you try to leverage MNIST for distilling knowledge in the CIFAR experiment?\\n\\n\\n- Novelty:\\nThe novelty is somewhat incremental. [1] already tried to distill knowledge from the 80 million tiny images dataset for solving the CIFAR task. Although they do use the CIFAR training set as well.\\n\\n- Clarity:\\nThe paper's clarity could be improved overall (citation format looks strange, some typos, what is W_s in equation 1). However, the experimental part is rather clear.\\n\\n- Significance:\\nIt would be nice to add in the paper a compelling example where you have a pretrained model, but don't have access to the data to train a student model. I don't really see how the approach developed would be useful in a real application.\", \"in_summary\": \"\", \"pros\": [\"Rather clear experiment section\", \"Clearly shows the benefits of distilling knowledge from 'true data' rather than noisy inputs.\"], \"cons\": [\"low novelty\", \"low significance\", \"[1] Do Deep Nets Really Need to Be Deep?, Ba, Jimmy and Caruana, Rich, NIPS 2014\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1Y7-1HYg | Unseen Style Transfer Based on a Conditional Fast Style Transfer Network | [
"Keiji Yanai"
] | In this paper, we propose a feed-forward neural style transfer network which can
transfer unseen arbitrary styles. To do that, first, we extend the fast neural style
transfer network proposed by Johnson et al. (2016) so that the network can learn
multiple styles at the same time by adding a conditional input. We call this as “a
conditional style transfer network”. Next, we add a style condition network which
generates a conditional signal from a style image directly, and train “a conditional
style transfer network with a style condition network” in an end-to-end manner.
The proposed network can generate a stylized image from a content image and a
style image in one-time feed-forward computation instantly. | [
"network",
"style condition network",
"style image",
"unseen style transfer",
"unseen arbitrary styles"
] | https://openreview.net/pdf?id=H1Y7-1HYg | https://openreview.net/forum?id=H1Y7-1HYg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"SyZqx_U9x",
"BkyHFTWie",
"H178dtpjx",
"ByZKuiVje",
"HJ-hZsgix"
],
"note_type": [
"comment",
"comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1488515208925,
1489258806598,
1490028619505,
1489447033208,
1489183144969
],
"note_signatures": [
[
"~Keiji_Yanai1"
],
[
"~Keiji_Yanai1"
],
[
"ICLR.cc/2017/pcs"
],
[
"ICLR.cc/2017/workshop/paper132/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper132/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"We have prepared an interactive demo page.\", \"comment\": \"We have prepared an interactive demo page where an image matrix of multiple-styles-by-multiple-contents for any given style and content images is shown instantly. Please enjoy it !\", \"http\": \"//foodcam.mobi/yanai/fast_style/\"}",
"{\"title\": \"We added qualitative comparison with the results by Johnson's fast single style transfer network in the Appendix.\", \"comment\": \"Thank you for the comments and for recognizing the novelty of the proposed approach.\\n\\nWe added a qualitative comparison with the results of Johnson's fast single style transfer network in the Appendix, and uploaded the revised manuscript. We trained four independent models of Johnson's network to generate results for four styles, since each can only learn a single style.\\n\\nThe quality of Johnson\\u2019s model and Conditional Style Transfer (trained on 14 styles at once) is almost the same, while the results of Unseen Transfer are slightly different from them. However, we think Unseen Style Transfer is a good approximation of Johnson\\u2019s and Conditional Style Transfer.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}",
"{\"title\": \"Interesting ideas for doing fast style transfer using unseen styles\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes two primary novel ideas for doing fast style transfer even on styles which are as yet unseen by the model. First, the authors extend the model of Johnson et al. by making it take a conditional vector which encodes the style to be transferred. Second, they propose another neural network architecture which generates such a conditional style vector. This allows the authors to directly plug the conditional vector generator (which they call Style Condition Network) into the Conditional Fast Style Transfer Network, allowing them to train the entire architecture end-to-end. In addition, the architecture is flexible enough that one can represent the styles as a continuous vector and hence can potentially represent a mixture of different styles. The experimental results show that the model has been successfully able to transfer the styles.\\n\\nWhile the approach proposed seems to be novel, I thought the experimental section was a bit weak. The styles on which the model has been trained seem to have transferred well. However, for those on which the model is not trained, the transfer seems to be a bit weak. Furthermore, since all the results are qualitative, it is somewhat hard to get a sense of how good or bad the proposed approach is with regards to the unseen styles. \\n\\nIn any case I think the paper has enough interesting ideas that it justifies being spoken about at a workshop.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"feedforward network for arbitrary styles\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"A brief summary of the paper's contributions, in the context of prior work.\", \"This paper addresses the problem of fast style transfer for unseen style images. The paper builds on the fast conv-deconv network of Johnson et al. The key technical insight is to have a \\u201cconditional vector\\u201d input which encodes different styles. The vector is concatenated to the convolutional responses of the content image. This representation is then deconvolved to the final style-transferred image output. The network can also be extended to regress to the conditional vector given a style image.\", \"An assessment of novelty, clarity, significance, and quality.\", \"The approach is novel, as far as I\\u2019m aware, but I\\u2019m not an expert in this topic. The paper is clear enough and cites relevant work that I\\u2019m familiar with in this space. As I\\u2019m not an expert in this area, I\\u2019m happy to support a champion.\", \"A list of pros and cons (reasons to accept/reject).\"], \"pros\": \"The approach appears to be novel.\", \"cons\": \"The paper shows qualitative results only and doesn\\u2019t compare against any baselines. However, given the limited number of pages for the paper submission, this is probably ok.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BJyBKyHKg | Particle Value Functions | [
"Chris J. Maddison",
"Dieterich Lawson",
"George Tucker",
"Nicolas Heess",
"Arnaud Doucet",
"Andriy Mnih",
"Yee Whye Teh"
] | The policy gradients of the expected return objective can react slowly to rare rewards. Yet, in some cases agents may wish to emphasize the low or high returns regardless of their probability. Borrowing from the economics and control literature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects on an example. This risk-sensitive value function is not always applicable to reinforcement learning problems, so we introduce the particle value function defined by a particle filter over the distributions of an agent's experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.
| [
"Reinforcement Learning",
"Games"
] | https://openreview.net/pdf?id=BJyBKyHKg | https://openreview.net/forum?id=BJyBKyHKg | ICLR.cc/2017/workshop | 2017 | {
"note_id": [
"Hytw1CUox",
"ryoYFdfsl",
"SkZEvnxjl",
"Sy-zgCIsl",
"ry0MMIPig",
"BkPSziOsx",
"Hyx0_Udsx",
"BJ7Wo9dsg",
"B1gJyKOog",
"rkZZIIDjx",
"rysi7MDix",
"Byt8uFTjg"
],
"note_type": [
"comment",
"official_review",
"official_review",
"comment",
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1489588064943,
1489303939359,
1489188649307,
1489588232627,
1489621526387,
1489707583233,
1489688776143,
1489705723495,
1489698520497,
1489622521217,
1489605538889,
1490028625014
],
"note_signatures": [
[
"~Chris_J_Maddison1"
],
[
"ICLR.cc/2017/workshop/paper140/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper140/AnonReviewer3"
],
[
"~Chris_J_Maddison1"
],
[
"~George_Tucker1"
],
[
"ICLR.cc/2017/workshop/paper140/AnonReviewer2"
],
[
"~Chris_J_Maddison1"
],
[
"~Chris_J_Maddison1"
],
[
"ICLR.cc/2017/workshop/paper140/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper140/AnonReviewer2"
],
[
"ICLR.cc/2017/workshop/paper140/AnonReviewer2"
],
[
"ICLR.cc/2017/pcs"
]
],
"structured_content_str": [
"{\"title\": \"We are optimizing a different objective\", \"comment\": \"Thank you for pointing us to these references. We would like to point out that our work addresses a separate question from that addressed in the literature you cite. Our contribution aims to optimize the risk sensitive objective, whose literature we duly cited. In summary:\\n\\n(1) Most of your references only consider the inner maximization over pi', which is a completely different problem.\\n\\n(2) The value function you mention and the risk sensitive one do not correspond in all cases.\\n\\nWhile we agree that a related work section would clarify these distinctions, we would also like to point out that this workshop submission had a strict 3-page limit. We\\u2019ve added these references to the Appendix, along with a few others.\\n\\nIn more detail, the value function, which you reference (see originally Albertini & Runggaldier, 1988),\\n\\nV^pi(s; beta) = max_{pi'} E[R(xi) | s; pi'] - KL[pi'(xi|s) || pi(xi|s)] / beta,\\n\\nis the value achieved by the optimal policy, when optimizing over pi' under the cumulative return objective with a KL penalty assuming complete control over the transition dynamics of the environment. This is not the same as the optimization we consider. We consider the risk sensitive objective over pi,\\n\\nV^pi(s; beta) = beta^-1 log E[exp(beta * R(xi)) | s; pi]\\n\\nThere are at least two distinctions.\\n\\n(1) Most of the references you provided consider only the inner maximization of E[R(xi) | s; pi'] - KL[pi'(xi|s) || pi(xi|s)] / beta over pi\\u2019 while pi is a fixed reference distribution. In this case, the problems are clearly completely different. Although in some cases (see (2)), the risk sensitive objective would be optimized by a joint optimization over pi and pi\\u2019.\\n\\n(2) When pi is a policy interacting with a stochastic environment that we don\\u2019t control, as in the generic MDP definition we took in the paper, then the KL regularized control problem does not necessarily coincide with the risk sensitive one. They do coincide in the following cases: for some transition dynamics, such as in path integral control, or generally when the policy completely specifies the transition dynamics of the environment. \\n\\nNone of the references that you cite address the question of optimizing risk sensitive objectives directly, despite possibly displaying risk aware \\\"properties\\\". The degree to which KL penalties modulate risk preferences is an interesting question, but beyond the scope of our contribution.\", \"references\": \"Logarithmic Transformations for Discrete-Time, Finite-Horizon Stochastic Control Problems (Albertini & Runggaldier, 1988)\"}",
"{\"title\": \"Incremental progress over existing research\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper discusses an interesting approach, which is very similar to existing research that the paper does not mention.\\n\\nEq. (2) is the minimum free energy over trajectories xi given initial state s:\\nV(s; beta) = min_{pi'} KL[pi'(xi|s) || pi(xi|s)] / beta - E[R(xi) | s; pi'].\\n(EDITED: to fix + to - in the R term.)\\n\\nThis has been studied extensively, for example:\\nLinearly Solvable Markov Decision Problems (Todorov, 2006)\\nInformation Theory of Decisions and Actions (Tishby and Polani, 2011)\\nOptimal Control as a Graphical Model Inference Problem (Kappen et al., 2012)\", \"rewards_as_exponential_hmm_emissions_have_been_studied_in\": \"An Approximate Inference Approach to Temporal Optimization in Optimal Control (Rawlik et al., 2011)\\n\\nThe properties of positive vs. negative values of beta have been studied in:\\nFree Energy and the Generalized Optimality Equations for Sequential Decision Making (Ortega and Braun, 2012)\\n\\nThe risk-aversion properties, and in particular the cliff domain, have been studied in:\\nTaming the Noise in Reinforcement Learning via Soft Updates (Fox et al., 2016)\\n\\n\\nThe application of the particle filtering technique in this context seems somewhat novel, however it is hard to assess this contribution in isolation from comparison to the large body of existing research.\\n(EDITED: to retract allusion to recent works by Todorov and Kappen)\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper studies a family of risk-sensitive value functions in the context of reinforcement learning. These value functions are parametrized by a single scalar \\\\beta which governs their \\\"risk sensitivity\\\", that is, essentially the value assigned to small short-term rewards relative to large delayed rewards. Large \\\\beta leads to \\\"risk-taking\\\" behavior, making better use of sparse rewards. The authors propose an efficient algorithm for optimizing (an approximation to) these value functions, based on a bootstrap particle filter. The approach is tested in Cliffworld with tabular policies.\\n\\nThe paper is written clearly, with an extensive appendix describing all details. I have not checked the derivations in the appendix. I am not aware of similar approaches in recent RL literature. However, I am not very knowledgeable about literature in some related fields, like optimal control or bandits.\\n\\nCould the authors please comment on computational complexity compared to usual policy gradients? Is it correct that it is the same, perhaps up to a constant factor?\", \"pros\": [\"Addressing an important problem: exploration/exploitation, sparse rewards\", \"Departing from the standard expected reward objective\", \"Compiling techniques from different fields in a new and useful way\", \"Experiments in Cliffworld with tabular policies support the intuitions behind the approach. Risk-taking agent is able to solve the task unlike its more conservative versions.\"], \"cons\": [\"No experiments in realistic RL scenarios with function approximators.\", \"Basically no discussion of alternative approaches or comparisons to baselines (one that comes to mind is Prioritized DQN, but there may be more)\", \"Overall, despite weak experimental evaluation, I tend to recommend acceptance, since the approach looks interesting, new and promising. The paper is outside of my main field of expertise, therefore I am only moderately certain.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank you for the thoughtful review and questions. We would like to point out that some of the omissions were due to a strict 3-page limit. To address a few of your concerns:\\n\\n(1) We did compare gradients of the PVF on the Cliffworld example to REINFORCE gradients and VIMCO gradients, with the details in the Appendix. We clarified this in the main body and added some more results to the Appendix.\\n\\n(2) The complexity for estimating the PVF is of order O(KT), in other words the same as K independent trajectories of length T. This is because the resampling step can be implemented in order K using the Alias method.\\n\\n(3) We decided to focus on policy gradient style algorithms, and did not explore any Q learning or related methods. All of the TD lambda algorithms apply to the PVF, and it would be interesting to consider in a full-length paper.\"}",
"{\"title\": \"Clarification\", \"comment\": \"Can you clarify which recent works by Todorov and Kappen you're referring to?\"}",
"{\"title\": \"Re: Clarification\", \"comment\": \"I disagree on (3), however that's a matter of interpretation.\\n\\nWith these clarifications, and the added appendices, I have no remaining concerns.\\nI've updated my rating.\"}",
"{\"title\": \"RE: Review\", \"comment\": \"Thanks for the quick followup. To emphasize, our actual contribution (and the only aspect of this work we claimed as original) is a new risk sensitive value function (PVF) defined by an SMC distribution, which you agreed is novel. The advantage of the PVF over existing methods is that it gives you a risk sensitive bound that decomposes in time, is cheap to estimate, and is a single objective for jointly optimizing the policy and possibly a proposal distribution.\\n\\nWhat we aren\\u2019t clear on, is the ways in which risk sensitive control and KL-control are distinct. Just to clarify our understanding define:\\n\\nL(pi, pi\\u2019) = E[R(xi) | s; pi'] - KL[pi'(xi|s) || pi(xi|s)] / beta\\n\\nOur understanding of the KL-control literature is that it considers the optimization of L(pi, pi\\u2019) over pi\\u2019 with pi fixed:\\n\\nargmax_{pi\\u2019} L(pi, pi\\u2019)\\n\\nWhen (in the special cases discussed) the optimal pi\\u2019 given pi achieves the risk sensitive objective value at pi, then the risk sensitive control problem is a joint optimization of L(pi, pi\\u2019) over both pi and pi\\u2019:\\n\\nargmax_{pi} [max_{pi\\u2019} L(pi, pi\\u2019)]\\n\\nIt doesn\\u2019t seem that the joint optimization problem is explicitly addressed in the KL-control references. Is it implicitly assumed that one could perform the joint optimization given perfect inference (as in EM)?\\n\\nIf our understanding is correct, it seems that risk sensitive control and KL-control are not the same. They are clearly closely related, in a way that mirrors the relationship between variational inference and maximum likelihood. We\\u2019ve added a sentence to the main body of the paper to that effect.\\n\\nRegardless of whether these objectives are distinct, we agree that ideas from the KL-control literature are absolutely relevant to risk-sensitive control and our PVF. We\\u2019ve now tried to reflect this in our Appendix, and are happy to clarify further. In particular, the Risk Sensitive Path Integral Control (van den Broek et al., 2012), is an interesting case in which you can express the risk sensitive optimal policy as the solution to a path integral problem.\"}",
"{\"title\": \"Re: Clarification\", \"comment\": \"Thanks for the great questions.\\n\\n(1) The PVF can be extended to incorporate sampling from an auxiliary pi', but the version we considered can be understood as assuming pi\\u2019=pi. The PVF takes a different approach than an EM style approach for L(pi, pi'). Instead the PVF is defined as an SMC estimator of the risk sensitive value function in pi. This estimator is a bound (that is consistent in the number of particles) on the risk sensitive objective. The advantage is that it has simple policy gradients that decompose additively in time, and thus we consider directly optimizing the PVF via policy gradients of pi.\\n\\n(2) This is a great question, which we do not have a complete answer for. The degree to which these algorithms inspired by natural gradients result in \\u201crisk sensitivity\\u201d is not clear to us, but their objective did not appear to be the risk sensitive one. We felt that these algorithms were distinct enough, and the connection unclear enough, to justify leaving a discussion to a later full conference paper. If the connection is clear to you, we'd be excited to learn more.\\n\\n(3) For the Cliffworld experiments we trained policies with policy gradients of PVFs with varying risk preference parameter beta. For the finite sample cases we considered, the beta = 0, or standard REINFORCE, variant did not solve the task. We suspect this is due to an optimization bias early on in training. Our interpretation was simply that as beta increases the PVF objective tolerates riskier policies and, in this case, suffers less from the optimization bias on this task.\"}",
"{\"title\": \"Re: Clarification\", \"comment\": \"Thank you for the clarification. I agree.\", \"the_paper_still_seems_to_me_to_be_missing_3_things\": \"1. As far as I see, you never mention how V^pi = max_{pi'}{L} is used. If your motivation is the joint optimization of pi and pi', how is max_{pi}{V^pi} performed?\\n\\n2. It is true that the joint optimization problem of pi and pi' is not usually addressed in KL-control.\\nWithout further constraints, the optimal solution has pi=pi', going back to plain optimal control.\\nHowever, algorithms like REPS (Peters et al., 2010), DPP (Azar et al., 2012), Psi-learning (Rawlik et al., 2012) and TRPO (Schulman et al., 2015) do alternate between updating pi and pi'.\\nHow does your overall approach (given (1) above) relate to these algorithms?\\n\\n3. Again, why is the cliff domain showing risk sensitivity rather than risk-sensitive properties that have other underlying causes? Fox et al. (2016) analyze this as an increase in KL-cost along the cliff due to learning not to fall, i.e., the agent is not risk-averse but rather averse to the intrinsic cost of *preventing* the risk (which *is* prevented, in an off-policy sense). Have you reached a different conclusion?\"}",
"{\"title\": \"Retracted\", \"comment\": \"I'm sorry, I was alluding to half-remembered talks.\\nI've retracted this part of my review.\"}",
"{\"title\": \"No, you really aren't\", \"comment\": \"First, let me emphasize that I appreciate this paper's discussion of the risk-sensitive aspect of (what I argue is) KL-control. This aspect is not often discussed, however see:\\nRisk Sensitive Path Integral Control (van den Broek et al., 2012)\\nYour eq. (2) is exactly their J (Section 2) (albeit for continuous systems).\\n\\nIt is also true that most works in KL-control omit to mention the formulation as an optimization problem over trajectory distributions, such as the one given in our comments above and solved by your eq. (2). KL-control has a formulation in terms of full-trajectory optimization that is equivalent in the same sense that the Bellman equation solves the optimal control problem. For example, see Section 7.1 in Tishby and Polani (2011), and eq. (17) in:\\nA Generalized Path Integral Control Approach to Reinforcement Learning (Theodorou et al., 2010)\\n(The KL-control assumption is their eq. (8).)\\n\\nIt is true that this full-trajectory formulation only coincides with your eq. (2) in the special case of LMDPs (Todorov, 2006) and path integrals:\\nLinear Theory for Control of Nonlinear Stochastic Systems (Kappen, 2005).\\nThe linear form is only exact under the assumption of full controllability, however is often used as an approximation with attractive properties regardless of this assumption, as you do here.\\n(You do not consider it an approximation to a principled approach, but rather motivate it directly.)\\n\\nDo note that the second equation in your comment is the optimum achieved in your first equation, under the full-controllability assumption.\\nThis can be verified by differentiating over pi'.\\n\\nThis reviewer is well-aware of the challenging space limitation, and was not expecting a survey. A clear and correct statement of the novelty of the contribution is however expected. I maintain that the application of particle filtering to KL-control is your main contribution. As far as I know, it is novel.\\n(EDITED: to retract allusion to recent works by Todorov and Kappen)\\n\\nI strongly disagree with your assertion that \\\"the problems are clearly completely different\\\" just because they are not always formulated the same way \\u2014 but sometimes are! For even more evidence of this, see Section 3.3 in:\\nTrading value and information in MDPs (Rubin et al., 2012).\\n\\nFinally, you make a good point about the distinction between risk-sensitive optimization, and solutions that simply display risk-aware properties. But this puts the burden back on you: Fox et al. (2016) show risk-averse properties of an RL-learned KL-control policy, however their explanation is alternative to risk-sensitivity, and to some degree excludes your motivation as the underlying mechanism. They demonstrate this specifically on the same cliff domain as you do. Your paper is incomplete without a discussion, and hopefully a resolution of this apparent contradiction.\"}",
"{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}"
]
} |
|
ryBnUWb0b | Predicting Floor-Level for 911 Calls with Neural Networks and Smartphone Sensor Data | [
"William Falcon",
"Henning Schulzrinne"
] | In cities with tall buildings, emergency responders need an accurate floor level location to find 911 callers quickly. We introduce a system to estimate a victim's floor level via their mobile device's sensor data in a two-step process. First, we train a neural network to determine when a smartphone enters or exits a building via GPS signal changes. Second, we use a barometer equipped smartphone to measure the change in barometric pressure from the entrance of the building to the victim's indoor location. Unlike impractical previous approaches, our system is the first that does not require the use of beacons, prior knowledge of the building infrastructure, or knowledge of user behavior. We demonstrate real-world feasibility through 63 experiments across five different tall buildings throughout New York City where our system predicted the correct floor level with 100% accuracy.
| [
"Recurrent Neural Networks",
"RNN",
"LSTM",
"Mobile Device",
"Sensors"
] | Accept (Poster) | https://openreview.net/pdf?id=ryBnUWb0b | https://openreview.net/forum?id=ryBnUWb0b | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"B19JVJpSz",
"S1MIIQpXM",
"Hkesgr4Zf",
"BkGbWw9Xf",
"B11TNj_gM",
"SJw9Jr4-G",
"Bk4vzMTQG",
"ryex-7aXz",
"B1gGvY3Qf",
"SJ3JfMCrf",
"H1zsR8cmz",
"S1xJ7MTmM",
"ryca0nYef",
"r1gRb9SWG",
"ry1E-75eG",
"S1mOIY2mf",
"rJp2Dt3XG",
"SyX3mXp7f",
"B1kuDK27M"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1517249506373,
1515169353742,
1512489112297,
1514987769905,
1511728310814,
1512488846582,
1515164252395,
1515167975749,
1515128584145,
1517326820457,
1514987162488,
1515164375994,
1511800514188,
1512575431926,
1511825702741,
1515128426667,
1515128757246,
1515168683163,
1515128678906
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
],
[
"ICLR.cc/2018/Conference/Paper682/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper682/Authors"
]
],
"structured_content_str": [
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Reviewers agree that the paper is well done and addresses an interesting problem, but uses fairly standard ML techniques.\\nThe authors have responded to rebuttals with careful revisions, and improved results.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Floor prediction results\", \"comment\": \"Based on this data, and these results, the line between both models is certainly more blurry. What is clear is that the neural network models do outperform the other models. We've changed some of the wording to highlight this point and not make it a strictly LSTM vs others approach but instead a neural network vs others approach. That said, given more complicated examples, we think the LSTM would perform better on the IO task. However, generating more data is very time-consuming, which makes the overall problem difficult to model.\\n\\nBut we believe the point you mentioned with hierarchical LSTMs is extremely relevant in this context because it provides a foundation on which to build future work to model the full problem end-to-end with a model based on LSTM architectures and the hierarchical approaches mentioned. We added this point in the future directions section and certainly think it's feasible but it'll likely require more data and model design. \\n\\n However, we're still fine-tuning the LSTM to see why there was a 1% drop.\"}",
"{\"title\": \"Appendix A, section B has potential pitfalls\", \"comment\": \"Thank you for your valuable feedback!\\nIn Appendix A, section B we provide a lengthy discussion about potential pitfalls of our system in a real-world scenario and offer potential solutions.\\n\\nWas there something in addition to this that you'd like to see?\"}",
"{\"title\": \"Updating HMM results today\", \"comment\": \"Hi. Once again, thank you for your feedback.\", \"re_hmm\": \"Yes, currently finalizing the HMM baseline right now and will be adding to the paper by today.\", \"re_model_accuracy\": \"These are the results from the latest hyperparameter optimization. We're verifying these today once the tests complete. Although the model accuracy dropped for the indoor/outdoor classification task, it increased to 100% with no margin of error for the floor prediction task. The other baselines don't achieve near the same result on the floor prediction task. However, we're running a final optimization today to ensure we have the best results given the hyperparameter search.\\n\\nWe can add those model results to the floor prediction task for clarification.\"}",
"{\"title\": \"A simple but useful method that serves a practical purpose well; improvements needed in writing and experimental comparisons.\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes a two-step method to determine which floor a mobile phone is on inside a tall building.\\nAn LSTM RNN classifier analyzes the changes/fading in GPS signals to determine whether a user has entered a building. Using the entrance point's barometer reading as a reference, the method calculates the relative floor the user has moved to using a well known relationship between heights and barometric readings.\\n\\nThe paper builds on a simple but useful idea and is able to develop it into a basic method for the goal. The method has minimal dependence on prior knowledge and is thus expected to have wide applicability, and is found to be sufficiently successful on data collected from a real world context. The authors present some additional explorations on the cases when the method may run into complications.\\n\\nThe paper could use some reorganization. The ideas are presented often out of order and are repeated in cycles, with some critical details that are needed to understand the method revealed only in the later cycles. Most importantly, it should be stated upfront that the outdoor-indoor transition is determined using the loss of GPS signals. Instead, the paper elaborates much on the neural net model but delays until the middle of p.4 to state this critical fact. However once this fact is stated, it is obvious that the neural net model is not the only solution.\\n\\nThe RNN model for Indoor/Outdoor determination is compared to several baseline classifiers. However these are not the right methods to compare to -- at least, it is not clear how you set up the vector input to these non-auto-regressive classifiers. 
You need to compare your model to a time series method that includes auto-regressive terms, or other state space methods like Markov models or HMMs.\", \"other_questions\": \"p.2, Which channel's RSSI is the one included in the data sample per second?\\n\\np.4, k=3, what is k?\\n\\nDo you assume that the entrance is always at the lowest floor? What about basements or higher floor entrances? Also, you may continue to see good GPS signals in elevators that are mounted outside a building, and by the time they fade out, you can be on any floor reached by those elevators.\\n\\nHow does each choice of your training parameters affect the performance? e.g. number of epochs, batch size, learning rate. What are the other architectures considered? What did you learn about which architecture works and which does not? Why?\\n\\nAs soon as you start to use clustering to help in floor estimation, you are exploiting prior knowledge about previous visits to the building. This goes somewhat against the starting assumption and claim.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"New model accuracy is 100% with no margin of error. Added device references, discussion about new model, and code + data can be public if requested beforehand\", \"comment\": \"Thank you so much for your valuable feedback! I want to preface the breakdown below by letting you know that we added time-distributed dropout which helped our model's accuracy. The new accuracy is 100% with no margin of error in the floor number.\\n\\n1. As of June 2017 the market share of phones in the US is 44.9% Apple and 29.1% Samsung [1]. 74% are iPhone 6 or newer [2]. The iPhone 6 has a barometer [3]. Models after the 6 still continue to have a barometer. \\nFor the Samsung phones, the Galaxy s5 is the most popular [4], and has a barometer [5].\\n\\n\\n[1] https://www.prnewswire.com/news-releases/comscore-reports-june-2017-us-smartphone-subscriber-market-share-300498296.html\\n[2] https://s3.amazonaws.com/open-source-william-falcon/911/2017_US_Cross_Platform_Future_in_Focus.pdf\\n[3] https://support.apple.com/kb/sp705?locale=en_US\\n[4] https://deviceatlas.com/blog/most-popular-smartphones-2016\\n[5] https://news.samsung.com/global/10-sensors-of-galaxy-s5-heart-rate-finger-scanner-and-more\\n\\n2. Makes sense, we separated it for the non deep learning audience trying to understand it. However, happy to update everything to say LSTM.\\n3. Thanks for this great suggestion. We had experimented with end-to-end models but decided against it. We did have a seq2seq model that attempted to turn the sequence of readings into a sequence of meter offsets. It did not fully work, but we're still experimenting with it. This model does not however get rid of the clustering step. \\n\\nAn additional benefit of separating this step from the rest of the model is that it can be used as a stand-alone indoor/outdoor classifier. \\n\\nI'll address your concerns one at a time:\\n a. In which task would it perform better? 
The indoor-outdoor classification task or the floor prediction task?\\n c. What about this model would solve the issue of the user being on the roof?\\n d. Just to make sure I understand, the one-hot encoding suggestion aims to learn a mapping between the floor height and the barometric pressure which in turn removes the need for clustering?\\n e. This sounds like an interesting approach, but seems to fall outside of the constraint of having a self-contained model which did not need prior knowledge. Generating a one-hot encoding for every building in the world without a central repository of building plans makes this intractable.\\n\\n4. We used the bias (tensorflow LSTM cell). https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell\\n5. Happy to add explanations for why the \\\"ad-hoc\\\" parameters were chosen:\\n a. Jaccard window, binary mask lengths, and window length were chosen via grid search.\\n b. Will add those details to the paper.\\n\\n6. Yes! All the data + code will be made public after reviews. However, if you feel strongly about having it before, we can make it available sooner through an anonymous repository. In addition, we're planning on releasing a basic iOS app which you'll be able to download from the app store to run the model on your phone and see how it works on any arbitrary building for you.\\n\\n7. Yes, many typos. Apologize for that. We did a last minute typo review too close to the deadline and missed those issues. This is in fact going to change now that we've increased the model accuracy to 100% with no floor margin of error.\\n\\nWe're updating the paper now and will submit a revised version in the coming weeks\"}",
"{\"title\": \"Floor prediction results\", \"comment\": \"For the floor prediction task the result given is using the LSTM right (I don't think it's actually specified)? Do you have results for the baselines for this?\"}",
"{\"title\": \"Floor prediction results\", \"comment\": \"The table has been updated. The algorithm did fairly well on this task when using each of the classifiers with the exception of the HMM. The difference between classifiers on this task would likely come through when the possibility of acquiring a GPS lock during a trial comes up, such as with a glass elevator on the outside of the building. In this case, the LSTM would likely produce fewer false positives, as indicated by the increased accuracy in the IO classification task. Fewer false positives mean that the algorithm would likely identify the correct anchor barometric pressure point at the entrance to the building instead of a stairwell or entering the building from the glass elevator.\"}",
"{\"title\": \"HMM added, code released\", \"comment\": \"Hello. We've clarified these issues in the primary post above. Please let us know if we've addressed your concerns.\\n\\nThank you once again for your valuable feedback\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks to all the reviewers for the valuable feedback!\"}",
"{\"title\": \"Drop in accuracy (?) and baselines\", \"comment\": \"Can you explain why in Table 1 in the revision from 29th October the validation and test accuracy of the LSTM are 0.949 and 0.911 and in the most recent version they have dropped to 0.935 and 0.898 (worse than the baselines)?\", \"also_i_agree_with_the_statement_by_reviewer_2\": \"\\\"The RNN model for Indoor/Outdoor determination is compared to several baseline classifiers. However these are not the right methods to compare to -- at least, it is not clear how you set up the vector input to these non-auto-regressive classifiers. You need to compare your model to a time series method that includes auto-regressive terms, or other state space methods like Markov models or HMMs.\\\"\\n\\nIt seems like no changes have been made to address this.\"}",
"{\"title\": \"Floor prediction results\", \"comment\": \"Yes, LSTM only. Generating others now\"}",
"{\"title\": \"The paper combines existing methods to outperform baseline methods on floor level estimation. Limitations of their approach are not explored.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors motivate the problem of floor level estimation and tackle it with an RNN. The results are good. The models the authors compare to are well chosen. As the paper foremost provides an application (and combination) of existing methods, it would be beneficial to know something about the limitations of their approach and about the observed prerequisites.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"New results. We turned the problem into classification by fixing a window in the series of size (k=3). Updating paper with structural suggestions.\", \"comment\": \"Thank you for your feedback! We're working on adding your suggestions and will post an update in the next few weeks.\\n\\nWanted to let you know we've improved the results from 91% to 100% by adjusting our regularization mechanism in the LSTM. We'll make the appropriate changes to the paper.\\n\\n\\\"The paper could use some reorganization\\\"\\n1. Agreed and the updated draft will have:\\n - Cleaner organization\\n - Upfront clarification about the GPS signal\\n - Shortened discussion about the neural net model\\n\\n\\\"The RNN model for Indoor/Outdoor determination is compared to several baseline classifiers.\\\"\\n2. The problem is reduced to classification by creating a fixed window of width k (in our case, k=3) where the middle point is what we're trying to classify as indoors/outdoors. \\n - Happy to add the HMM comparison.\\n - Happy to add a time series comparison.\\n\\n\\\"p.2, Which channel's RSSI is the one included in the data sample per second?\\n\\\"\\n3. We get the RSSI strength as proxied by the iPhone status bar. Unfortunately, the API to access the details of that signal is private. Therefore, we don't have that detailed information. However, happy to add clarification about how exactly we're getting that signal (also available in the sensory app code).\\n\\n\\n4. k is the window size. Will clarify this. \\n\\n\\\"Do you assume that the entrance is always at the lowest floor? What about basements or higher floor entrances? \\\"\\n\\n5. We actually don't assume the entrance is on the lower floors. In fact, one of the buildings that we test in has entrances 4 stories apart. This is where the clustering method shines. 
As soon as the user enters the building through one of those lower entrances, the floor-level indexes will update because it will detect another cluster.\\n\\n\\n\\\"Also, you may continue to see good GPS signals in elevators that are mounted outside a building, and by the time they fade out, you can be on any floor reached by those elevators.\\\"\\n6. Yup, this is true. Unfortunately this method does heavily rely on the indoor/outdoor classifier. \\n - We'll add a brief discussion to highlight this issue.\\n\\n\\n\\\"How does each choice of your training parameters affect the performance? e.g. number of epoches, batch size, learning rate. What are the other architectures considered? What did you learn about which architecture works and which does not? Why?\\n\\\"\\n7. We can add a more thorough description about this and provide training logs in the code that give visibility into the parameters for each experiment and the results.\\n - The window choice (k) actually might be the most critical hyperparameter (next to learning rate). The general pattern is that a longer window did not help much. \\n - The fully connected network actually does surprisingly well but the RNN generalizes slightly better. A 1-layer RNN did not provide much modeling power. It was the multi-layer model that added the needed complexity to capture these relationships. We also tried bi-directional but it failed to perform well. \\n\\n\\\"As soon as you start to use clustering to help in floor estimation, you are exploiting prior knowledge about previous visits to the building. This goes somewhat against the starting assumption and claim.\\n\\\"\\n8. Fair point. We provide a prior for each situation that will get you pretty close to the correct floor-level. However, it's impossible to get more accurate without building plans, beacons or some sort of learning. 
We consider the clustering method more of a learning approach: it updates the estimated floor heights as either the same user or other users walk in that building. In the case where the implementer of the system (i.e., a company) only wants to use a single user's information and keep it 100% on their device, the clustering system will still work using that user's repeated visits. In the case where a central database might aggregate this data, the clusters for each building will develop a lot faster and converge on the true distribution of floor heights in a building.\"}",
"{\"title\": \"A fairly simple application of existing methods to a problem, and there remain some methodological issues\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"Update: Based on the discussions and the revisions, I have improved my rating. However I still feel like the novelty is somewhat limited, hence the recommendation.\", \"======================\", \"The paper introduces a system to estimate a floor-level via their mobile device's sensor data using an LSTM to determine when a smartphone enters or exits a building, then using the change in barometric pressure from the entrance of the building to indoor location. Overall the methodology is a fairly simple application of existing methods to a problem, and there remain some methodological issues (see below).\", \"General Comments\", \"The claim that the bmp280 device is in most smartphones today doesn\\u2019t seem to be backed up by the \\u201ccomScore\\u201d reference (a simple ranking of manufacturers). Please provide the original source for this information.\", \"Almost all exciting results based on RNNs are achieved with LSTMs, so calling an RNN with LSTM hidden units a new name IOLSTM seems rather strange - this is simply an LSTM.\", \"There exist models for modelling multiple levels of abstraction, such as the contextual LSTM of [1]. This would be much more satisfying than the two-level approach taken here, would likely perform better, would replace the need for the clustering method, and would solve issues such as the user being on the roof. The only caveat is that it may require an encoding of the building (through a one-hot encoding) to ensure that the relationship between the floor height and barometric pressure is learnt. For unseen buildings a background class could be used, the estimators as used before, or aggregation of the other buildings by turning the whole vector on.\", \"It\\u2019s not clear if a bias of 1 was added to the forget gate of the LSTM or not. 
This has been shown to improve results [2].\", \"Overall the whole pipeline feels very ad-hoc, with many hand-tuned parameters. Notwithstanding the network architecture, here I\\u2019m referring to the window for the barometric pressure, the Jaccard distance threshold, the binary mask lengths, and the time window for selecting p0.\", \"Are there plans to release the data and/or the code for the experiments? Currently the results would be impossible to reproduce.\", \"The typo of accuracy given by the authors is somewhat worrying, given that the result is repeated several times in the paper.\", \"Typographical Issues\", \"Page 1: \\u201dfloor-level accuracy\\u201d back ticks\", \"Page 4: Figure 4.1\\u2192Figure 1; Nawarathne et al Nawarathne et al.\\u2192Nawarathne et al.\", \"Page 6: \\u201dcarpet to carpet\\u201d back ticks\", \"Table 2: What does -4+ mean?\", \"References. The references should have capitalisation where appropriate. For example, Iodetector\\u2192IODetector, wi-fi\\u2192Wi-Fi, apple\\u2192Apple, iphone\\u2192iPhone, i\\u2192I etc.\", \"[1] Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. Contextual LSTM (CLSTM) models for large scale NLP tasks. arXiv preprint arXiv:1602.06291, 2016.\", \"[2] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342\\u20132350, 2015\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"HMM added, code released, models verified again\", \"comment\": \"Thank you once again for your feedback during these last few weeks. We've gone ahead and completed the following:\\n1. Added the HMM baseline\\n2. Reran all the models and updated the results. The LSTM and feedforward model performed the same on the test set. We've reworded the results and method page to reflect this.\\n3. By increasing the classifier accuracy we improved the floor-prediction task to 100% with no margin of error on the floor predictions.\\n4. We tried the hierarchical LSTM approach as suggested but did not get a model to work in the few weeks we experimented with it. It looks promising, but it'll need more experimentation. We included this approach in the future works section.\\n5. We released all the code at this repository: https://github.com/blindpaper01/paper_FMhXSlwRYpUtuchTv/ \\n\\nAlthough the code is mostly organized, working, and commented, it'll be polished up once it needs to be released to the broader community. The Sensory app was not released yet to preserve anonymity. \\n\\n6. Fixed typos (some of the numbering typos are actually from the ICLR auto-formatting file).\\n\\nPlease let us know if there's anything else you'd like us to clarify.\\nThank you so much for your feedback once again!\"}",
"{\"title\": \"Code released, results updated\", \"comment\": \"Dear reviewer,\\n\\nWe've released a main update listed above. Please let us know if there's anything we can help clarify! \\n\\nThank you once again for your feedback!\"}",
"{\"title\": \"Floor prediction results\", \"comment\": \"Thanks for adding this. It starts to look like there's nothing to choose between the LSTM and Feedforward network ...\"}",
"{\"title\": \"HMM added\", \"comment\": \"Hello. We've added the HMM baseline. We apologize for the delay; we wanted to make sure we set up the HMM baseline as rigorously as possible.\\n\\nThe code is also available for your review. \\n\\nThank you once again for your feedback!\"}"
]
} |
Skk3Jm96W | Some Considerations on Learning to Explore via Meta-Reinforcement Learning | [
"Bradly Stadie",
"Ge Yang",
"Rein Houthooft",
"Xi Chen",
"Yan Duan",
"Yuhuai Wu",
"Pieter Abbeel",
"Ilya Sutskever"
] | We consider the problem of exploration in meta reinforcement learning. Two new meta reinforcement learning algorithms are suggested: E-MAML and ERL2. Results are presented on a novel environment we call 'Krazy World' and a set of maze environments. We show E-MAML and ERL2 deliver better performance on tasks where exploration is important. | [
"reinforcement learning",
"rl",
"exploration",
"meta learning",
"meta reinforcement learning",
"curiosity"
] | Invite to Workshop Track | https://openreview.net/pdf?id=Skk3Jm96W | https://openreview.net/forum?id=Skk3Jm96W | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BkQIdmTMG",
"rJyE8QpzG",
"HkZ9VkpHf",
"SJ0Q_6Hlf",
"BJCjPQpMM",
"S1Sbum6ff",
"rJfePXpMG",
"ByDI8XpMz",
"ryse_yclM",
"SkE07mveG",
"rkYuBQTfG",
"H1NWlrYmM"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1514121291118,
1514120743140,
1517249672914,
1511540774475,
1514121126056,
1514121213529,
1514120938026,
1514120783026,
1511811059316,
1511629771783,
1514120561182,
1514913787656
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper41/Authors"
],
[
"ICLR.cc/2018/Conference/Paper41/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper41/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper41/Authors"
],
[
"ICLR.cc/2018/Conference/Paper41/Authors"
],
[
"ICLR.cc/2018/Conference/Paper41/Authors"
],
[
"ICLR.cc/2018/Conference/Paper41/Authors"
],
[
"ICLR.cc/2018/Conference/Paper41/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper41/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper41/Authors"
],
[
"ICLR.cc/2018/Conference/Paper41/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"We have fixed issues with plots and exposition and addressed the prior literature Part 3\", \"comment\": \"\\u201cp2: \\\"In hierarchical RL, a major focus is on learning primitives that can be reused and strung together. These primitives will frequently enable better exploration, since they\\u2019ll often relate to better coverage over state visitation frequencies. Recent work in this direction includes (Vezhnevets et al., 2017; Bacon & Precup, 2015; Tessler et al., 2016; Rusu et al., 2016).\\\"\\n\\n\\u201cThese are very recent refs - one should cite original work on hierarchical RL including:\\n\\nJ. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. M\\u00e4kisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.\\n\\nM. B. Ring. Incremental Development of Complex Behaviors through Automatic Construction of Sensory-Motor Hierarchies. Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins, 343-347, Morgan Kaufmann, 1991.\\u201d\\n\\nM. Wiering and J. Schmidhuber. HQ-Learning. Adaptive Behavior 6(2):219-246, 1997\\u201d\\n\\n\\nThese refs cite older work in the area, which in turn cites the work you mention. This is not a review paper and hence mentioning every prior work in a field as large as hierarchical RL is neither practical nor desired. We have added a review article by Barto and your last reference on HQ learning to account for this. \\n\\n=========================================================================\\n\\n\\n\\n\\n\\u201cReferences to original work on meta-RL are missing. How does the approach of the authors relate to the following approaches? \\n\\n(6) J. Schmidhuber. G\\u00f6del machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006. \\n\\n(7) J. Schmidhuber. 
Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, TUM, 1987. \\n \\nPapers (4,5) above describe a universal self-referential, self-modifying RL machine. It can implement and run all kinds of learning algorithms on itself, but cannot learn them by gradient descent (because it's RL). Instead it uses what was later called the success-story algorithm (5) to handle all the meta-learning and meta-meta-learning etc.\\n\\nRef (7) above also has a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms, and uses what's now called Genetic Programming (GP), but applied to itself, to recursively evolve better GP methods through meta-GP and meta-meta-GP etc. \\n\\nRef (6) is about an optimal way of learning or the initial code of a learning machine through self-modifications, again with a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms.\\u201d\\n\\nWe added several sentences regarding this to our paper. We have also connected this idea to a more broad interpretation of our work. Please see appendix B which cites this work in reference to our algorithm derivation. \\n=========================================================================\", \"general_recommendation\": \"Accept, provided the comments are taken into account, and the relation to previous work is established\\n\\nWe feel the paper now is substantially improved and we exerted significant energy addressing your concerns. Please see in particular the improved figures and heuristic metrics, as well as the improved works cited section, which address the majority of the major issues you had with this work. We would appreciate it if you could reconsider your score in light of these new revisions. \\n\\n\\n\\n=========================================================================\"}",
"{\"title\": \"We have added discussion of prior literature and better highlighted the novelty of our contributions.\", \"comment\": \"The first and primary issue is that the authors claim there exists no prior work on \\\"exploration in Meta-RL\\\"....The paper under review must survey such literature and discuss why these new approaches are a unique contribution.\\n\\nWe have added numerous references to these fields in the related literature section of the paper and clarified our contribution in this context. We are interested in the problem of meta-learning for RL (which largely deals with finding initializations that are quick to adapt to new domains). This problem ends up having a different formulation from the areas mentioned above. Our specific contribution is the creation of two new algorithms that find good initializations for RL algorithms to quickly adapt to new domains, yet do not sacrifice exploratory power to obtain these initializations. We show further that one can consider a large number of interesting algorithms for finding initializations that are good at exploring. This is also a novel contribution. \\n=========================================================================\\n\\n\\nThe empirical results do not currently support the claimed contributions of the paper. \\n\\nThe results have been strengthened since the initial submission. It is now clear that our methods provide substantially better performance. Further, the heuristic metrics indicate they are superior at exploration. \\n\\n=========================================================================\\n\\nThe first batch of results is on a new task introduced by this paper. Why was a new domain introduced? How are existing domains not suitable? \\n\\nThe domains are gridworlds and mazes, neither of which should require this sort of justification prior to use. 
The gridworld does not use a standard reference implementation (we are not aware of any such implementation) and was designed so that its level of difficulty could be more easily controlled during experimentation. \\n\\n=========================================================================\\n\\nDesigning domains are very difficult and why benchmark domains that have been well vetted by the community are such an important standard\\nWe agree with this. And indeed, we ourselves have designed reference domains for RL problems that are extremely popular in the community. In these cases, the domains were usually derived from an initial paper such as this one and subsequently improved upon by the community over time. In our experience, releasing a new domain in the context of this paper aligns well with how our previous successful domains have been released. \\n=========================================================================\\n\\nIn the experiment, the parameters were randomly sampled---is a very non-conventional choice. Usually one performance a search for the best setting and then compares the results. This would introduce substantial variance in the results, requiring many more runs to make statistically significant conclusions.\\n\\nWe have averaged over many more trials and this has significantly smoothed the curves. We were trying to avoid overfitting, which is a systematic problem in the way RL results are typically reported. \\n\\n=========================================================================\\n\\n\\nThe results on the first task are not clear. In fig4 one could argue that e-maml is perhaps performing the best, but the variance of the individual lines makes it difficult to conclude much. In fig5 rl2 gets the best final performance---do you have a hypothesis as to why? Much more analysis of the results is needed.\\n\\nThe results are clearer now, and RL2 no longer gets the best final performance. 
Also, an important thing to consider is how fast the algorithms approach their final performance. For instance, in the referenced graph, E-MAML converged within ~10 million timesteps whereas RL2 took nearly twice as long. We apologize for not making this important point more explicit in the paper. In any case, this particular comment has been outmoded. \\n\\n=========================================================================\\n\\n\\nThere are well-known measures used in transfer learning to access performance, such as jump-start. Why did you define new ones here?\\n\\nJump start is quite similar to the gap metric we consider in the paper. We have clarified this. \\n\\n=========================================================================\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Overall, the paper is missing a couple of ingredients that would put it over the bar for acceptance:\\n\\n- I am mystified by statements such as \\\"RL2 no longer gets the best final performance.\\\" from one revision to another, as I have lower confidence in the results now.\\n\\n- More importantly, the paper is missing comparisons of the proposed methods on *already existing* benchmarks. I agree with Reviewer 1 that a paper that only compares on benchmarks introduced in the very same submission is not as strong as it could be.\\n\\nIn general, the idea seems interesting and compelling enough (at least on the Krazy World & maze environments) that I can recommend inviting to the workshop track.\", \"decision\": \"Invite to Workshop Track\"}",
"{\"title\": \"review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This is an interesting paper about correcting some of the myopic bias in meta RL. For two existing algorithms (MAML, RL2) it proposes a modification of the metaloss that encourages more exploration in the first (couple of) test episodes. The approach is a reasonable one, the proposed methods seem to work, the (toy) domains are appropriate, and the paper is well-rounded with background, motivation and a lot of auxiliary results.\\n\\nNevertheless, it could be substantially improved:\", \"section_4_is_of_mixed_rigor\": \"some aspects are formally defined and clear, others are not defined at all, and in the current state many things are either incomplete or redundant. Please be more rigorous throughout, define all the terms you use (e.g. \\\\tau, R, \\\\bar{\\\\tau}, ...). Actually, the text never makes it clear how \\\\tau and \\\\ber{\\\\tau} relate to each other: make this connection in a formal way, please.\\n\\nIn your (Elman) formulation, \\u201cL\\u201d is not an RNN, but just a feed-forward mapping?\", \"equation_3_is_over_complicated\": \"it is actually just a product of two integrals, because all the terms are separable.\", \"the_integral_notation_is_not_meaningful\": \"you can\\u2019t sample something in the subscript the way you would in an expectation. Please make this rigorous.\\n\\nThe variability across seems extremely large, so it might be worth averaging over mores seeds for the learning curves, so that differences are more likely to be significant.\\n\\nFigure fontsizes are too small to read, and the figures in the appendix are impossible to read. Also, I\\u2019d recommend always plotting std instead of variance, so that the units or reward remain comparable.\\n\\nI understand that you built a rich, flexible domain. But please describe the variant you actually use, cleanly, without all the other variants. 
Or, alternatively, run experiments on multiple variants.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"We have fixed issues with plots and exposition and addressed the prior literature.\", \"comment\": \"First and foremost, we would like to apologize for having missed the relevant prior work by Schmidhuber et al. We have taken care to better connect our work to this prior work, as detailed below. \\n\\n=========================================================================\\n\\n\\u201cEquation (3) and equations on the top of page 4: Mathematically, it looks better to swap \\\\mathrm{d}\\\\tau and \\\\mathrm{d}\\\\bar{\\\\tau}, to obtain a consistent ordering with the double integrals.\\u201d \\n\\nAgreed. This change has been made. \\n\\n=========================================================================\\n\\n\\n\\u201cIn page 4, last paragraph before Section 5, \\u201cHowever, during backward pass, the future discounted returns for the policy gradient computation will zero out the contributions from exploratory episodes\\u201d: I did not fully understand this - please explain better.\\u201d\\n\\nPlease see equation 4 in the latest draft and the accompanying text. We have better explained the procedure. \\n\\n=========================================================================\\n\\n\\nIt is not very clear if the authors use REINFORCE or more advanced approaches like TRPO/PPO/DDPG to perform policy gradient updates? \\n\\nFor E-MAML/MAML, the inner update is VPG and the outer update is PPO. For E-RL2/RL2, PPO is used. We have noted this in the experiments section of the paper. \\n\\n=========================================================================\\n\\n\\n\\u201cI'd like to see more detailed hyperparameter settings.\\u201d\\nWe have included some further discussion on the training procedure in the experiments section. Further, it is our intention to release the code for this paper, which will include the hyper-parameters used in these algorithms. 
We can also put these hyper-parameters into a table in an appendix of this paper, to ensure redundancy in their availability. \\n\\n=========================================================================\\n\\n\\n\\u201cFigures 10, 11, 12, 13, 14: Too small to see clearly. I would propose to re-arrange the figures in either [2, 2]-layout, or a single column layout, particularly for Figure 14.\\u201d\\n\\nWe agree. We have switched to a [2, 2]-layout. The figures are still somewhat small, though when viewed on a computer one can easily zoom in and read them more easily. Of course, we would be willing to move to a single column layout in the final version if the present figures are still too difficult to read. \\n\\n=========================================================================\\n\\n\\n\\u201cFigures 5, 6, 9: Wouldn't it be better to also use log-scale on the x-axis for consistent comparison with curves in Krazy World experiments ?\\u201d\\n\\nWe have updated the figures and made the axes consistent. \\n\\n=========================================================================\\n\\n\\n\\u201cIt could be very interesting to benchmark also in Mujoco environments, such as modified Ant Maze.\\u201d \\n\\nWe have been working on continuous control tasks and would hope to include them in the final version. The difficulties we have thus far encountered with these tasks are interesting, but perhaps outside the scope of this paper at the present time. \\n\\n=========================================================================\\n\\n\\n\\u201cOverall, the idea proposed in this paper is interesting. I agree with the authors that a good learner should be able to generalize to new tasks with very few trials compared with learning each task from scratch. This, however, is usually called transfer learning, not metalearning. As mentioned above, experiments in more complex, continuous control tasks with Mujoco simulators might be illuminating. 
\\u201c\\n\\nSee the above comment regarding continuous control. As for difficulties with terminology, some of this stems from following the leads set in the prior literature (the MAML and RL2 papers) which refer to the problem as meta learning. We have attempted to give a more thorough overview of lifelong learning/transfer learning in this revised draft. Please see our response to the first review for further details. \\n\\n=========================================================================\\n\\n\\n\\u201c(1) Koutnik, J., Cuccu, G., Schmidhuber, J., and Gomez, F. (July 2013). Evolving large-scale neural networks for vision-based reinforcement learning. GECCO 2013, pages 1061-1068, Amsterdam. ACM.\\u201d \\n\\n\\nWe have added this citation. Apologies for having missed it. This reference was actually in our bib file but for some reason did not make it into the paper proper.\"}",
"{\"title\": \"We have fixed issues with plots and exposition and addressed the prior literature Part 2\", \"comment\": \"=========================================================================\", \"p2\": \"Authors write: \\\"To the best of our knowledge, there does not exist any literature addressing the topic of exploration in meta RL.\\\"\\n\\n\\u201cBut there is such literature - see the following meta-RL work where exploration is the central issue:\\n\\n(3) J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002.\\u201d\\n\\n\\nWe have adjusted the discussion and added this reference. \\n\\n=========================================================================\\n\\n\\n\\u201cJ. Schmidhuber, J. Zhao, N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, pages 293-309, 1997.\\u201d \\n\\n\\nWe have added this reference. \\n\\n=========================================================================\"}",
"{\"title\": \"We have fixed the plots and made the notation more clear and rigorous.\", \"comment\": \"This is an interesting paper about correcting some of the myopic bias in meta RL. For two existing algorithms (MAML, RL2) it proposes a modification of the metaloss that encourages more exploration in the first (couple of) test episodes. The approach is a reasonable one, the proposed methods seem to work, the (toy) domains are appropriate, and the paper is well-rounded with background, motivation and a lot of auxiliary results.\\n\\nThank you for this excellent summary and compliment of the work! \\n\\n=========================================================================\", \"section_4_is_of_mixed_rigor\": \"some aspects are formally defined and clear, others are not defined at all, and in the current state many things are either incomplete or redundant. Please be more rigorous throughout, define all the terms you use (e.g. \\\\tau, R, \\\\bar{\\\\tau}, ...). Actually, the text never makes it clear how \\\\tau and \\\\ber{\\\\tau} relate to each other: make this connection in a formal way, please.\\n\\nWe have made the suggested improvements, clarifying notation and more explicitly defining tau and \\\\bar{tau}. R was defined in the MDP notation section and means the usual thing for MDPs. \\n\\n=========================================================================\", \"equation_3_is_over_complicated\": \"it is actually just a product of two integrals, because all the terms are separable.\\n\\nYes, this is true. It was not our intention to show off or otherwise make this equation seem more complex than it is. In fact, we were trying to simplify things by not skipping steps and separating the integrals prematurely. We asked our colleagues about this, and the response was mixed with half of them preferring the current notation and the other half preferring earlier separation. 
If you have strong feelings about this, we are willing to change it for the final version. \\n=========================================================================\", \"the_integral_notation_is_not_meaningful\": \"you can\\u2019t sample something in the subscript the way you would in an expectation. Please make this rigorous.\\n\\nThis is a fair comment. We were simply trying to make explicit the dependence on the sampling distribution, since it is one of the key insights of our method. However, we agree with you and have changed the notation. We have added an appendix B which seeks to examine some of these choices in a more rigorous context. \\n\\n=========================================================================\\n\\n\\nThe variability across seems extremely large, so it might be worth averaging over mores seeds for the learning curves, so that differences are more likely to be significant.\\n\\nWe did this and it helped substantially with obtaining smoother results with more significant differences. Thank you for the suggestion; it was very helpful! \\n\\n=========================================================================\\n\\n\\nFigure fontsizes are too small to read, and the figures in the appendix are impossible to read. Also, I\\u2019d recommend always plotting std instead of variance, so that the units or reward remain comparable.\\n\\nFixed. Thanks! \\n=========================================================================\\n\\n\\nI understand that you built a rich, flexible domain. But please describe the variant you actually use, cleanly, without all the other variants. Or, alternatively, run experiments on multiple variants.\\n\\nWe plan to release the source for the domain we used. But the variant we used is the one pictured in the paper, with all options turned on. We can add the environment hyperparameters to an appendix of the paper with a brief description if you think this would be useful. 
\\n\\n=========================================================================\", \"rating\": \"6: Marginally above acceptance threshold\\n\\nIn light of the fact we have addressed your major concerns with this work, we would appreciate it if you would consider revising your score.\"}",
"{\"title\": \"We have added discussion of prior literature and better highlighted the novelty of our contributions Part 2\", \"comment\": \"Figure 6 is difficult to read. \\n\\nThe figures have been dramatically improved. We apologize for the poor initial pass. \\n\\n=========================================================================\\n\\n\\nWhy not define the Gap and then plot the gap. \\n\\nWe feel it is illustrative to see the initial policy and the post-update policy in the same place. Actually seeing the gap between the two algorithms can be easier to interpret than the gap itself, which is a scalar. \\n\\n=========================================================================\\n\\n\\nThese are very unclear plots especially bottom right. It's your job to sub-select and highlight results to clearly support the contribution of the paper---that is not the case here. Same thing with figure 7. I am not sure what to conclude from this graph.\\n\\nWe took these comments to heart and exerted a lot of effort on improving the plots. We solicited feedback from our colleagues who suggest the new plots are much more clear, readable, and better convey our points. We also took better care to clarify this in our captions. \\n\\n=========================================================================\\n\\nThe paper, overall is very informal and unpolished. The text is littered with colloquial language, which though fun, is not as precise as required for technical documents. Meta-RL is never formally and precisely defined. There are many strong statements e.g., : \\\"which indicates that at the very least the meta learning is able to do system identification correctly.\\\">> none of the results support such a claim. Expectations and policies are defined with U which is never formally defined. The background states the problem of study is a finite horizon MDP, but I think they mean episodic tasks. The word heuristic is used, when really should be metric or measure. 
\\n\\nThank you for these comments. We have cleaned up the writing. \\n=========================================================================\"}",
"{\"title\": \"A new exploration algorithm for reinforcement learning\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Summary: this paper proposes algorithmic extensions to two existing RL algorithms to improve exploration in meta-reinforcement learning. The new approach is compared to the baselines on which they are built on a new domain, and a grid-world.\\n\\nThis paper needs substantial revision. The first and primary issue is that authors claim their exists not prior work on \\\"exploration in Meta-RL\\\". This appears to be the case because the authors did not use the usual names for this: life-long learning, learning-to-learn, continual learning, multi-task learning, etc. If you use these terms you see that much of the work in these settings is about how to utilize and adapt exploration. Either given a \\\"free learning phases\\\", exploration based in internal drives (curiosity, intrinsic motivation). These are subfields with too much literature to list here. The paper under-review must survey such literature and discuss why these new approaches are a unique contribution.\\n\\nThe empirical results do not currently support the claimed contributions of the paper. The first batch of results in on a new task introduced by this paper. Why was a new domain introduced? How are existing domains not suitable. This is problematic because domains can easily exhibit designer bias, which is difficult to detect. Designing domains are very difficult and why benchmark domains that have been well vetted by the community are such an important standard. In the experiment, the parameters were randomly sampled---is a very non-conventional choice. Usually one performance a search for the best setting and then compares the results. This would introduce substantial variance in the results, requiring many more runs to make statistically significant conclusions.\\n\\nThe results on the first task are not clear. 
In fig4 one could argue that e-maml is perhaps performing the best, but the variance of the individual lines makes it difficult to conclude much. In fig5 rl2 gets the best final performance---do you have a hypothesis as to why? Much more analysis of the results is needed.\\n\\nThere are well-known measures used in transfer learning to access performance, such as jump-start. Why did you define new ones here?\\n \\nFigure 6 is difficult to read. Why not define the Gap and then plot the gap. These are very unclear plots especially bottom right. It's your job to sub-select and highlight results to clearly support the contribution of the paper---that is not the case here. Same thing with figure 7. I am not sure what to conclude from this graph.\\n\\nThe paper, overall is very informal and unpolished. The text is littered with colloquial language, which though fun, is not as precise as required for technical documents. Meta-RL is never formally and precisely defined. There are many strong statements e.g., : \\\"which indicates that at the very least the meta learning is able to do system identification correctly.\\\">> none of the results support such a claim. Expectations and policies are defined with U which is never formally defined. The background states the problem of study is a finite horizon MDP, but I think they mean episodic tasks. The word heuristic is used, when really should be metric or measure.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting direction for exploration in meta-RL. Many relations to prior work missing though. Let's wait for rebuttal.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes a trick of extending objective functions to drive exploration in meta-RL on top of two recent so-called meta-RL algorithms, Model-Agnostic Meta-Learning (MAML) and RL^2.\", \"pros\": [\"Quite simple but promising idea to augment exploration in MAML and RL^2 by taking initial sampling distribution into account.\", \"Excellent analysis of learning curves with variances across two different environments. Charts across different random seeds and hyperparameters indicate reproducibility.\", \"Cons/Typos/Suggestions:\", \"The brief introduction to meta-RL is missing lots of related work - see below.\", \"Equation (3) and equations on the top of page 4: Mathematically, it looks better to swap \\\\mathrm{d}\\\\tau and \\\\mathrm{d}\\\\bar{\\\\tau}, to obtain a consistent ordering with the double integrals.\", \"In page 4, last paragraph before Section 5, \\u201cHowever, during backward pass, the future discounted returns for the policy gradient computation will zero out the contributions from exploratory episodes\\u201d: I did not fully understand this - please explain better.\", \"It is not very clear if the authors use REINFORCE or more advanced approaches like TRPO/PPO/DDPG to perform policy gradient updates?\", \"I'd like to see more detailed hyperparameter settings.\", \"Figures 10, 11, 12, 13, 14: Too small to see clearly. I would propose to re-arrange the figures in either [2, 2]-layout, or a single column layout, particularly for Figure 14.\", \"Figures 5, 6, 9: Wouldn't it be better to also use log-scale on the x-axis for consistent comparison with curves in Krazy World experiments ?\", \"3. 
It could be very interesting to benchmark also in Mujoco environments, such as modified Ant Maze.\", \"Overall, the idea proposed in this paper is interesting. I agree with the authors that a good learner should be able to generalize to new tasks with very few trials compared with learning each task from scratch. This, however, is usually called transfer learning, not metalearning. As mentioned above, experiments in more complex, continuous control tasks with Mujoco simulators might be illuminating.\"], \"relation_to_prior_work\": \"\", \"p_2\": \"Authors write: \\\"Recently, a flurry of new work in Deep Reinforcement Learning has provided the foundations for tackling RL problems that were previously thought intractable. This work includes: 1) Mnih et al. (2015; 2016), which allow for discrete control in complex environments directly from raw images. 2) Schulman et al. (2015); Mnih et al. (2016); Schulman et al. (2017); Lillicrap et al. (2015), which have allowed for high-dimensional continuous control in complex environments from raw state information.\\\"\", \"here_it_should_be_mentioned_that_the_first_rl_for_high_dimensional_continuous_control_in_complex_environments_from_raw_state_information_was_actually_published_in_mid_2013\": \"(1) Koutnik, J., Cuccu, G., Schmidhuber, J., and Gomez, F. (July 2013). Evolving large-scale neural networks for vision-based reinforcement learning. GECCO 2013, pages 1061-1068, Amsterdam. ACM.\", \"p2\": \"\\\"In hierarchical RL, a major focus is on learning primitives that can be reused and strung together. These primitives will frequently enable better exploration, since they\\u2019ll often relate to better coverage over state visitation frequencies. 
Recent work in this direction includes (Vezhnevets et al., 2017; Bacon & Precup, 2015; Tessler et al., 2016; Rusu et al., 2016).\\\"\", \"not_quite_true___rl_robots_with_high_dimensional_video_inputs_and_intrinsic_motivation_learned_to_explore_in_2015\": \"(2) Kompella, Stollenga, Luciw, Schmidhuber. Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artificial Intelligence, 2015.\", \"but_there_is_such_literature___see_the_following_meta_rl_work_where_exploration_is_the_central_issue\": \"(3) J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002.\", \"the_rl_method_of_this_paper_is_the_one_from_the_original_meta_rl_work\": \"(4) J. Schmidhuber. On learning how to learn learning strategies. Technical Report FKI-198-94, Fakult\\u00e4t f\\u00fcr Informatik, Technische Universit\\u00e4t M\\u00fcnchen, November 1994.\", \"which_then_led_to\": \"(5) J. Schmidhuber, J. Zhao, N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, pages 293-309, 1997.\", \"these_are_very_recent_refs___one_should_cite_original_work_on_hierarchical_rl_including\": \"J. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. M\\u00e4kisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.\\n\\nM. B. Ring. Incremental Development of Complex Behaviors through Automatic Construction of Sensory-Motor Hierarchies. Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins, 343-347, Morgan Kaufmann, 1991.\\n\\nM. Wiering and J. Schmidhuber. HQ-Learning. Adaptive Behavior 6(2):219-246, 1997\\n\\nReferences to original work on meta-RL are missing. How does the approach of the authors relate to the following approaches? \\n\\n(6) J. Schmidhuber. 
G\\u00f6del machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006. \\n\\n(7) J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, TUM, 1987. \\n \\nPapers (4,5) above describe a universal self-referential, self-modifying RL machine. It can implement and run all kinds of learning algorithms on itself, but cannot learn them by gradient descent (because it's RL). Instead it uses what was later called the success-story algorithm (5) to handle all the meta-learning and meta-meta-learning etc.\\n\\nRef (7) above also has a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms, and uses what's now called Genetic Programming (GP), but applied to itself, to recursively evolve better GP methods through meta-GP and meta-meta-GP etc. \\n\\nRef (6) is about an optimal way of learning or the initial code of a learning machine through self-modifications, again with a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms.\", \"general_recommendation\": \"Accept, provided the comments are taken into account, and the relation to previous work is established.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"We have made substantial changes based on reviewer comments\", \"comment\": \"The following concerns were listed across multiple reviewers:\\n\\n1) Our paper misses citations wherein the similar problems are considered under different names. This problem is quite a large one, and it is unfortunate that the literature is at times disjoint and difficult to search. You will notice that the first and second reviewer both told us that we missed many essential references, but the crucial missed references provided by both are entirely different. Further, the third reviewer did not indicate any issues with the literature we cited. We believe this indicates the difficulty in accurately capturing prior work in this area. \\n\\n2) The graphs suffered from a variety of deficiencies. These deficiencies were both major (not clearly and convincingly demonstrating the strengths of our proposed methods) and minor (the text or graphs themselves being at times too small). \\n\\n3) There were portions of the paper that appeared hastily written or wherein spelling and grammatical mistakes were present. Further, there were claims that the reviewers felt were not sufficiently substantiated and parts of the paper lacked rigor.\", \"we_have_addressed_these_concerns_in_the_following_ways\": \"1) We have made an effort to address relevant prior literature. In particular, we have better explained the work\\u2019s connection to prior work by Schmidhuber et al and better explained what distinguishes this work from prior work on lifelong learning. See responses to individual reviewers for a more thorough explanation of these changes. Further, we have included an additional appendix which highlights our algorithmic development as a novel process for investigating exploration in meta-RL. We feel this appendix should completely remove any doubts regarding the novelty of this work. \\n\\n2) As for the graphs, we have fixed the presentation and layout issues. 
We have averaged over more seeds, which decreased the overall reported standard deviation across all algorithms, thus making the graphs more legible. We have also separated the learning curves onto multiple plots so that we can directly plot the standard deviations onto the learning curves without the plots appearing too busy. \\n\\n3) We have carefully edited the paper and fixed any substandard writing. We have also taken care to properly define notation, and made several improvements to the notation. We improved the writing\\u2019s clarity, and better highlighted the strength of our contributions. We removed several claims that the reviewers felt were too strong, and replaced them with more agreeable claims that are better supported by the experimental results. We have added an interesting new appendix which considers some of our insights in a more formal and rigorous manner. Finally, we have completely rewritten the experiments section, better explaining the experimental procedure. \\n\\n\\nPlease see the responses to individual reviews below for further elaboration on specific changes we made to address reviewer comments.\"}",
"{\"title\": \"improved\", \"comment\": \"The revised paper is not perfect, but improved substantially, and addresses multiple issues. I raised my review score.\"}"
]
} |
r1RQdCg0W | MACH: Embarrassingly parallel $K$-class classification in $O(d\log{K})$ memory and $O(K\log{K} + d\log{K})$ time, instead of $O(Kd)$ | [
"Qixuan Huang",
"Anshumali Shrivastava",
"Yiqiu Wang"
] | We present Merged-Averaged Classifiers via Hashing (MACH) for $K$-classification with large $K$. Compared to traditional one-vs-all classifiers that require $O(Kd)$ memory and inference cost, MACH only need $O(d\log{K})$ memory while only requiring $O(K\log{K} + d\log{K})$ operation for inference. MACH is the first generic $K$-classification algorithm, with provably theoretical guarantees, which requires $O(\log{K})$ memory without any assumption on the relationship between classes. MACH uses universal hashing to reduce classification with a large number of classes to few independent classification task with very small (constant) number of classes. We provide theoretical quantification of accuracy-memory tradeoff by showing the first connection between extreme classification and heavy hitters. With MACH we can train ODP dataset with 100,000 classes and 400,000 features on a single Titan X GPU (12GB), with the classification accuracy of 19.28\%, which is the best-reported accuracy on this dataset. Before this work, the best performing baseline is a one-vs-all classifier that requires 40 billion parameters (320 GB model size) and achieves 9\% accuracy. In contrast, MACH can achieve 9\% accuracy with 480x reduction in the model size (of mere 0.6GB). With MACH, we also demonstrate complete training of fine-grained imagenet dataset (compressed size 104GB), with 21,000 classes, on a single GPU. | [
"Extreme Classification",
"Large-scale learning",
"hashing",
"GPU",
"High Performance Computing"
] | Reject | https://openreview.net/pdf?id=r1RQdCg0W | https://openreview.net/forum?id=r1RQdCg0W | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S1YDsMlXf",
"HkC5WzafM",
"SJB-0Mtlz",
"H1VwD15lG",
"Sk9lYDj7M",
"r1GeQQxmG",
"H1tJH9FxM",
"HkQCEy6rM",
"HJVtSZaMz",
"Skch5gafG",
"SJfRHbeQz"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1514314592603,
1514115477777,
1511759356697,
1511810907824,
1515055345759,
1514316522400,
1511789792729,
1517249739209,
1514112380547,
1514109618544,
1514309066267
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper455/Authors"
],
[
"ICLR.cc/2018/Conference/Paper455/Authors"
],
[
"ICLR.cc/2018/Conference/Paper455/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper455/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper455/Authors"
],
[
"ICLR.cc/2018/Conference/Paper455/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper455/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper455/Authors"
],
[
"ICLR.cc/2018/Conference/Paper455/Authors"
],
[
"ICLR.cc/2018/Conference/Paper455/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Memory usage, time? (We want to add this comparison to the paper)\", \"comment\": \"We are really grateful for your efforts and for taking the time to run DiSMEC.\\nCould you send us the details (or a link to your code)? We would like to report this in the paper (and also the comparison with imagenet). \\nWe want to know the memory usage, the running time (approx. a day?), and how many cores were used. In our runs, DiSMEC was run on a single 64GB machine, with 8 cores and one Titan X. \\n\\nFurthermore, on imagenet, sparsity won't help. MACH does not need this assumption. So we need to think beyond sparsity. \\n\\nMACH has all these properties. \\n\\nThe main argument is that we can run on a Titan X (< 12GB working memory) (sequentially running 25 logistic regressions of 32 classes each) in 7.2 hrs. If we run with 25 GPUs in parallel, then it can be done in 17 minutes! Compare this to about a day on a large machine. \\n\\nWe think the ability to train datasets on GPUs, or even a single GPU, is very impactful. GPU clusters are everywhere and cheap now. If we can train in a few hours on an easily available single GPU, or in a few minutes on 25 GPUs (also cheap to have), then why wait over a day on a high-memory, high-core machine (expensive)? Furthermore, with data growing faster than our machines, any work which enhances our capability to train on them is beneficial. \\n\\nWe hope you see the importance of the simplicity of our method and how fast we can train with increased parallelism: 17 min on 25 Titan Xs. The parallelism is trivial. \\n\\nWe are happy to run any specific benchmark (head-to-head) you have in mind if that could convince you.\"}",
"{\"title\": \"Thanks for nice comments. The methods you mentioned do not save memory\", \"comment\": \"First of all, we appreciate your detailed comments, spotting of typos, and encouragement.\\n\\n(1) Hierarchical softmax and LSH do not save memory; they make memory usage worse compared to the vanilla classifier. \\nHierarchical softmax and any tree-like structure will lead to more (around twice the) memory compared to the vanilla classifier. Every leaf (K leaves) requires memory (for a vector), and hence the total memory is of the order 2K (K + K/2 + ...). Of course, running time will be log(K). \\nIn theory, LSH requires K^{1 + \\\\rho} memory (way more than K or 2K). We still need all the weights. \\nMemory is the prime bottleneck for scalability. Note that prediction is parallelizable over K (then argmax) even for vanilla models. Thus prediction time is not a major barrier with parallelism.\\n\\nWe stress that (to the best of our knowledge) no known method can train the ODP dataset on a single Titan X with 12GB memory. All other methods will need more than 160GB of main memory. The comparison will be trivial; they all will go out of memory. \\n\\nAlso, see the new comparisons with the DiSMEC and PD-Sparse algorithms (similar) in the comment to AnonReviewer1.\\n\\nFor matrix factorization, see (3). \\n\\n(2) ODP is a similar domain to word2vec. We are not sure, but direct classification accuracy in word2vec does not make sense (does it?); it is usually for word embeddings (or other language models), which need all the parameters as those are the required outputs, not the class label (which is the argmax). \\n\\n(3) What you are mentioning (similar to matrix factorization) is a form of dimensionality reduction from D to M. As mentioned in the paper, this is orthogonal and complementary. We can treat the final layer as the candidate for MACH for more savings. As you said, dimensionality reduction alone won't be logarithmic in K by itself. \\n\\n\\nWe thank you again for the encouragement and hope that your opinion will be even more favorable after the discussions mentioned above.\"}",
"{\"title\": \"Good ideas, but insufficient results\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The manuscript proposes an efficient hashing method, namely MACH, for softmax approximation in the context of a large output space, which saves both memory and computation. In particular, the proposed MACH uses 2-universal hashing to randomly group classes, and trains a classifier to predict the group membership. It repeats this procedure multiple times to reduce collisions and trains a classifier for each run. The final prediction is the average of all classifiers up to some constant bias and multiplier as shown in Eq (2).\\n\\nThe manuscript is well written and easy to follow. The idea is novel as far as I know. And it saves both training time and prediction time. One unique advantage of the proposed method is that, during inference, the likelihood of a given class can be computed very efficiently without computing the expensive partition function as in traditional softmax and many other softmax variants. Another impressive advantage is that the training and prediction are embarrassingly parallel, and thus can be linearly sped up, which is very practical and rarely seen in other softmax approximations.\\n\\nThough the results on the ODP dataset are very strong, the experiments still leave something to be desired.\\n(1) More baselines should be compared. There are lots of softmax variants for dealing with large output spaces, such as NCE, hierarchical softmax, adaptive softmax (\\\"Efficient softmax approximation for GPUs\\\" by Grave et al.), LSH hashing (as cited in the manuscript) and matrix factorization (adding one more hidden layer). The results of MACH would be more significant if comparisons to these, or some of these, baselines were available.\\n(2) More datasets should be evaluated. In this manuscript, only ODP and imagenet are evaluated. However, there are also lots of other datasets available, especially in the area of language modeling, such as the one billion word dataset (\\\"One billion word benchmark for measuring progress in statistical language modeling\\\" by Chelba et al.) and many others.\\n(3) Why do the experiments only focus on simple logistic regression? With a neural network, it could actually save computation and memory. For example, if one more hidden layer with M hidden units is added, then the memory consumption would be M(d+K) rather than Kd. And M could be a much smaller number, such as 512. I guess the accuracy might possibly be improved, though the memory is still linear in K.\", \"minor_issues\": \"(1) In Eq (3), it should be P^j_b rather than P^b_j?\\n(2) The proof of theorem 1 seems unfinished\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"MACH: Embarrassingly parallel $K$-class classification\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"The paper presents a hashing based scheme (MACH) for reducing memory and computation time for K-way classification when K is large. The main idea is to use R hash functions to generate R different datasets/classifiers where the K classes are mapped into a small number of buckets (B). During inference the probabilities from the R classifiers are summed up to obtain the best scoring class. The authors provide theoretical guarantees showing that both memory and computation time become functions of log(K), thus providing significant speed-ups for large-scale classification problems. Results are provided on the Imagenet and ODP datasets with comparisons to regular one-vs-all classifiers and tree-based methods for speeding up classification.\", \"Positives\", \"The idea of using R hash functions to remap K-way classification into R B-way classification problems is fairly novel, and the authors provide sound theoretical arguments showing how the K probabilities can be approximated using the R different problems.\", \"The theoretical savings in memory and computation time are fairly significant, and the results suggest the proposed approach provides a good trade-off between accuracy and resource costs.\", \"Negatives\", \"Hierarchical softmax is one of the more standard techniques that has been very effective at large-scale classification. The paper does not provide comparisons with this baseline, which also reduces computation time to log(K).\", \"The provided baselines LOMTree and Recall Tree are missing descriptions/citations. Without these it is hard to judge if they are good baselines to compare with.\", \"Figure 1 only shows how accuracy varies as the model parameters are varied. A better graph to include would be a time vs. accuracy trade-off for all methods.\", \"On the Imagenet dataset the best result using the proposed approach is only 85% of the OAA baseline. Is there any setting where the proposed approach reaches 95% of the baseline accuracy?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"We will add the required discussion\", \"comment\": \"Alright, we will add a discussion about DiSMEC and PPDSparse, and also make a note about this conversation and the results on the 100-node machine. It seems a sparsity regularizer can help with accuracy (known in the literature). We note that MACH does not use any such regularization.\\n\\nUntil now, we were still trying to run DiSMEC on both datasets, Imagenet and ODP, using the machines we can access (56 cores and 512GB RAM). However, the results seem hopeless so far, and it seems the progress is significantly slower on both of them. It will take a couple of weeks more before we can see the final accuracy (if the machines don't crash). Imagenet seems even worse. It would be a lot more convenient to report some official accuracy numbers if we can get them. \\n\\nIt should be noted that we can run MACH on both datasets on a smaller and significantly cheaper machine (64GB and 1 Titan X with 8 cores) and in substantially less time. \\n\\nWe thank you for bringing up these newer comparisons. We think this makes our method even more exciting and bolsters our arguments further.\\n \\nWe hope that in light of these discussions you will be more supportive of the paper. \\n\\nWe will be happy to take into account any other suggestions.\\n\\nThanks again, we appreciate your efforts, and we find this discussion very useful.\"}",
"{\"title\": \"results\", \"comment\": \"Thanks for your feedback.\\n\\nThe results above were for DiSMEC and not PPDSparse. Since it trains one-versus-rest in a parallel way, the memory requirements on a single node are quite moderate, something around 8GB for training a batch of 1,000 labels on a single node. Each batch of labels is trained on a separate node.\\n\\nYou are absolutely right that sparsity does not make sense in the case of Imagenet, and the results for OAA in Figure 1 (right) will hold. In both cases OAA seems to be better than MACH.\\n\\nI completely agree that MACH has computational advantages. However, at the same time, performance is also lost in the speedup gain, i.e., 25% versus 19%. The impact of MACH would be substantial if it achieved similar levels of accuracy at a much lower computational cost.\\n\\nIt is important that the authors verify these findings and update the manuscript appropriately, mentioning the pros and cons of each scheme, which is missing from the current version.\"}",
"{\"title\": \"Extreme multi-class classification with Hashing\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Thanks to the authors for their feedback.\\n==============================\\nThe paper presents a classification scheme for problems involving a large number of classes in the multi-class setting. This is related to the theme of extreme classification, but the setting is restricted to that of multi-class classification instead of multi-label classification. The training process involves data transformation using R hash functions, and then learning R classifiers. During prediction the probability of a test instance belonging to a class is given by the sum of the probabilities assigned by the R meta-classifiers to the meta-class in which the given class label falls. The paper demonstrates better results on the ODP and Imagenet-21K datasets compared to LOMTree, RecallTree and OAA.\\n\\nThe following concerns regarding the paper don't seem to be adequately addressed:\\n \\n - The paper seems to propose a method in which two-step trees are constructed based on random binning of labels, such that the first level has B nodes. It is not intuitively clear why such a method could be better in terms of prediction accuracy than OAA. The authors mention algorithms for training and prediction, and go on to mention that the method performs better than OAA. Also, please refer to point 2 below.\\n\\n - The paper repeatedly mentions that OAA has O(Kd) storage and prediction complexity. This is, however, not entirely true due to the sparsity of the training data and the model. These statements seem quite misleading, especially in the context of text datasets such as ODP. The authors are requested to check the papers [1] and [2], in which it is shown that OAA can perform surprisingly well. Also, by exploiting the sparsity in the data/models, actual model sizes for WikiLSHTC-325K from [3] can be reduced from around 900GB to less than 10GB with weight pruning and sparsity-inducing regularizers. It is not clear if the 160GB model size reported for ODP took the above suggestions into consideration, and which kind of regularization was used. Was the solver from Vowpal Wabbit used, or were packages such as Liblinear used for reporting the OAA results?\\n\\n - Lack of empirical comparison - The paper lacks empirical comparisons, especially on the large-scale multi-class LSHTC-1/2/3 datasets [4], on which many approaches have been proposed. For a fair comparison, the proposed method must be compared on these datasets. It would be important to clarify if the method can be used on multi-label datasets or not; if so, it needs to be evaluated on the XML datasets [3].\\n\\n[1] PPDSparse - http://www.kdd.org/kdd2017/papers/view/a-parallel-and-primal-dual-sparse-method-for-extreme-classification\\n[2] DiSMEC - https://arxiv.org/abs/1609.02521\\n[3] http://manikvarma.org/downloads/XC/XMLRepository.html\\n[4] http://lshtc.iit.demokritos.gr/LSHTC2_CFP\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"There is a very nice discussion with one of the reviewers on the experiments, that I think would need to be battened down in an ideal setting. I'm also a bit surprised at the lack of discussion or comparison to two seemingly highly related papers:\\n\\n1. T. G. Dietterich and G. Bakiri (1995) Solving Multiclass via Error Correcting Output Codes.\\n2. Hsu, Kakade, Langford and Zhang (2009) Multi-Label Prediction via Compressed Sensing.\"}",
"{\"title\": \"Thanks for Positive Feedback\", \"comment\": [\"Thanks for taking the time to improve our work.\", \"We DID compare with log(K) running time methods (both LOMTree and RecallTree are log(K) in running time, not memory). Hierarchical softmax and any tree-like structure will lead to more (around twice the) memory compared to the vanilla classifier. Every leaf (K leaves) requires memory, and hence the total memory is of the order 2K (K + K/2 + ...). Of course, running time will be log(K).\", \"However, as mentioned, memory is the prime bottleneck in scalability. We still have to update and store that many parameters.\", \"Although we have provided citations, we appreciate you pointing it out. We will make it more explicit in various places.\", \"We avoided the time tradeoff because time depends on several factors like parallelism, implementation, etc. For example, we can trivially parallelize across R processors.\", \"It seems there is a price for approximations on fine-grained imagenet. Even RecallTree and LOMTree with twice the memory do worse than MACH.\", \"We thank you again for the encouragement and hope that your opinion will be even more positive after these discussions.\"]}",
"{\"title\": \"MACH seems superior (more experiments)\", \"comment\": \"Thanks for pointing out sparsity and the related references. We tried comparing with [1] and [2] (referenced in your comment) on the ODP dataset, and we are delighted to share the results. We hope these results (below) will convince you that\\n1) we are indeed using a challenging large-scale dataset. \\n2) sparsity is nice for reducing the model size, but training is prohibitively slow. We still have 40 billion parameters to think about, even if we are not storing all of them (see the DiSMEC results). \\n3) our proposal is blazing fast, accurate, and above all simple. After all, what will beat a small logistic regression (only 32 classes instead of 100k)? \\n4) still, we stress that (to the best of our knowledge) no known method can train the ODP dataset on a single Titan X.\\n\\nWe will add the new results in any future version of the paper. \\n\\nFirst of all, ODP is a large-scale dataset, evident from the fact that both methods [1] and [2] are either prohibitively slow or go out of memory.\\n\\nIt is perfectly fine to have sparse models which will make the final model small in memory. The major hurdle is to train them. We have no idea which weights are sparse. So the only hope to always keep the memory small is some variant of iterative hard thresholding to repeatedly get rid of small weights. That is what is done by DiSMEC, reference [2]. As expected, this should be very slow. \\n\\n****** DiSMEC details on the ODP dataset ***********\\n\\nWe tried running DiSMEC with the recommended settings.\", \"control_model_size\": \"Set an ambiguity control hyper-parameter delta (0.01). If a value in the weight matrix is between -delta and delta, prune the value, because the value carries very little discriminative information for distinguishing one label from another.\", \"running_time\": \"approx. 3 models / 24h; it requires 106 models for the ODP dataset, so approx. 35 days to finish training on Rush. We haven't finished it yet.\\nCompare this to our proposed MACH, which takes 7.3 hrs on a single GPU. After all, we are training small logistic regressions with only 32 classes each; it's blazing fast. No iterative thresholding, no slow training. \\n\\nFurthermore, DiSMEC does not come with probabilistic guarantees of log{K} memory. Sparsity is also a very specific assumption and not always the way to reduce model size. \\n\\nThe results are not surprising, as in [2] sophisticated computers with 300-1000 cores were used. We use a simple machine with a single Titan X. \\n\\n********** PD-Sparse **************\\n\\nWe also ran PD-Sparse, a non-parallel version of [1] (we couldn't find the code for [1]), but it should have the same memory consumption as [1]. The difference seems to be the parallelization. We again used the ODP dataset with the recommended settings. We couldn't run it. Below are the details. \\n\\nIt goes out of memory on our 64GB machine. So we tried using another 512GB RAM machine; it failed after consuming 70% of memory. \\n\\nTo do a cross sanity check, we ran PD-Sparse on LSHTC1 (one of the datasets used in the original paper [1]). It went out of memory on our machine (64 GB) but worked on the 512 GB RAM machine with accuracy as expected in [1]. Interestingly, the run consumed more than 343 GB of main memory. This is ten times more than the memory required for storing KD doubles for this dataset with K = 12294 and D = 347255. \\n***********************************\\n\\nLet us know if you are still not convinced. We are excited about MACH, a really simple, theoretically sound algorithm for extreme classification. No bells and whistles, no assumptions, not even sparsity.\"}",
"{\"title\": \"DiSMEC on ODP dataset\", \"comment\": \"Thanks for the update on various points.\\n\\nI would disagree with some of the responses, particularly on sparsity, on the merit of using a single Titan X, and hence on the projected training time mentioned for DiSMEC on the ODP dataset. These are mentioned in detail below. Before that, I would like to mention some of my empirical findings.\\n\\nTo verify my doubts on using DiSMEC on ODP as in the initial review, I was able to run it in a day or so, since I had access to a few hundred cores. It turns out it gives an accuracy of 24.8%, which is about 30% better than MACH, and much better than the OAA performance reported in earlier papers such as Daume et al. [1], which reported 9% on this dataset. \\n\\nFurthermore, after storing the model in sparse format, the model size was around 3.1GB, instead of 160 GB as mentioned in this and earlier papers. It would be great if the authors could verify these findings if they have access to a moderately sized cluster with a few hundred cores. If the authors then agree, it would be great to mention these in the new version of the paper for future reference.\\n\\n - Sparsity: For text datasets with a large number of labels such as ODP, it is quite common for the model to be sparse. This is because all the words/features are highly unlikely to be surely present or surely not present for each label/class. Therefore, there are bound to be lots of zeros in the model. From an information-theoretic view-point as well, it does not make much sense for the ODP model to be 160GB when the training data is 4GB. Therefore, sparsity is not merely an assumption or an approximation, but a reasonable way to control model complexity and hence the model size.\\n\\n- Computational resources - The argument of the paper mainly hinges on the usage of a single Titan X. However, it is not clear what the use-case/scenario is in which one wants to train strictly on a single GPU. This needs to be appropriately emphasized and explained. On the other hand, a few hundred/thousand cores is something which is typically available in organizations/institutions which might care about problems of large sizes such as the ODP and Imagenet datasets.\\n\\nAlso, the authors can download the PPDSparse code from the XMC repository or directly from the link http://www.cs.cmu.edu/~eyan/software/AsyncPDSparse.zip\\n\\n[1] Logarithmic Time One-Against-Some, ICML 2017\"}"
]
} |
rJ3fy0k0Z | Deterministic Policy Imitation Gradient Algorithm | [
"Fumihiro Sasaki",
"Atsuo Kawaguchi"
] | The goal of imitation learning (IL) is to enable a learner to imitate an expert’s behavior given the expert’s demonstrations. Recently, generative adversarial imitation learning (GAIL) has successfully achieved it even on complex continuous control tasks. However, GAIL requires a huge number of interactions with environment during training. We believe that IL algorithm could be more applicable to the real-world environments if the number of interactions could be reduced. To this end, we propose a model free, off-policy IL algorithm for continuous control. The keys of our algorithm are two folds: 1) adopting deterministic policy that allows us to derive a novel type of policy gradient which we call deterministic policy imitation gradient (DPIG), 2) introducing a function which we call state screening function (SSF) to avoid noisy policy updates with states that are not typical of those appeared on the expert’s demonstrations. Experimental results show that our algorithm can achieve the goal of IL with at least tens of times less interactions than GAIL on a variety of continuous control tasks. | [
"Imitation Learning"
] | Reject | https://openreview.net/pdf?id=rJ3fy0k0Z | https://openreview.net/forum?id=rJ3fy0k0Z | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HJRp8kaBG",
"S1WJnrpmz",
"SypN6BT7M",
"S1_na_OlG",
"B1nuCculG",
"S1tVQ5Kef",
"SknsnHTQG"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1517250245790,
1515178969191,
1515179316987,
1511718320481,
1511726708162,
1511789361151,
1515179172495
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper184/Authors"
],
[
"ICLR.cc/2018/Conference/Paper184/Authors"
],
[
"ICLR.cc/2018/Conference/Paper184/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper184/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper184/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper184/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"All of the reviewers found some aspects of the formulation and experiments interesting, but they found the paper hard to read and understand. Some of the components of the technique such as the state screening function (SSF) seem ad-hoc and heuristic without much justification. Please improve the exposition and remove the unnecessary component of the technique, or come up with better justifications.\"}",
"{\"title\": \"Responses\", \"comment\": \"Thank you for your constructive comments on our paper. We will fix the typos and Figure 1 in the camera-ready version.\\n\\n> The justification for filtering is pretty weak. \\n\\nSince Figure 1 shows the worse performance of Ours \\\\setminus SSF, which does not perform this filtering, we think that the justification is sufficient.\\n\\n> What is the statistical basis for doing so?\\n\\nIntroducing an SSF is a kind of heuristic method, but it works as mentioned above.\\n\\n> Is it a form of a standard variance reduction approach? Is it a novel variance reduction approach? If so, is it more generally applicable?\\n\\nIntroducing the SSF itself is not a variance reduction approach. We would say that the direct use of the Jacobian of the (single-step) reward function, rather than that of the Q-function, to derive the PG (8) might reduce the variance because the range of outputs is bounded.\\nSince we use the Jacobian of the reward function to derive the PG, as opposed to prior IL works, the Jacobian is supposed to have information about how the learner can get close to the expert's behavior. However, in the IRL objective (4), which is general in the (max-margin) IRL literature, the reward function can know how the expert acts only on the states appearing in the demonstration. In other words, the Jacobian can have information about how to get close to the expert's behavior only on the states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which do not appear in the demonstration is just garbage for the learner, since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep away this garbage as much as possible. The prior IL works have never mentioned this garbage.\"}",
"{\"title\": \"Thank you for positive evaluations.\", \"comment\": \"Thank you for your constructive comments and positive evaluations on our paper. We will clarify the role of the SSF in the camera-ready version.\\n\\n> My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy)\\n\\nThank you for clearly understanding the role of the SSF.\\n\\n> Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks.\\n\\nThe expert's policy used in the experiments is a stochastic one. Hence, the proposed method works not only on deterministic continuous control tasks but also on stochastic ones. We expect that it generalizes well to other tasks.\"}",
"{\"title\": \"Hard to read\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes to extend the deterministic policy gradient algorithm to learn from demonstrations. The method is combined with a type of density estimation of the expert to avoid noisy policy updates. It is tested on Mujoco tasks with expert demonstrations generated with a pre-trained network.\\n\\nI found the paper a bit hard to read. My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy) so that the policy is not updated by gradients coming from transitions that are unlikely to be generated by the expert policy. \\n\\nI do think that this part is interesting and I would have liked this trick to be used with other imitation methods. Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks. Also, the expert demonstrations are generated by the pre-trained network, so the distribution of the expert is indeed the distribution of the optimal policy. So I'm not sure the experiments tell a lot. But if the density estimation could be combined with other methods and tested on other tasks, I think this could be a good paper.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper proposes an extension of the generative adversarial imitation learning (GAIL) algorithm by replacing the stochastic policy of the learner with a deterministic one. Simulation results with MuJoCo physics simulator show that this simple trick reduces the amount of needed data by an order of magnitude.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper considers the problem of model-free imitation learning. The problem is formulated in the framework of generative adversarial imitation learning (GAIL), wherein we alternate between optimizing reward parameters and learner policy's parameters. The reward parameters are optimized so that the margin between the cost of the learner's policy and the expert's policy is maximized. The learner's policy is optimized (using any model-free RL method) so that the same cost margin is minimized. Previous formulation of GAIL uses a stochastic behavior policy and the RIENFORCE-like algorithms. The authors of this paper propose to use a deterministic policy instead, and apply the deterministic policy gradient DPG (Silver et al., 2014) for optimizing the behavior policy.\\nThe authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher. Although, a more detailed discussion and a clearer explanation is needed to clarify what SSF is actually doing, based on the provided formulation.\\nExcept from a few typos here and there, the paper is overall well-written. The proposed idea seems new. However, the reviewer finds the main contribution rather incremental in its nature. 
Replacing a stochastic policy with a deterministic one does not change the original GAIL algorithm much, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway). My guess is that if someone were to use the GAIL algorithm for real problems (e.g., a robotic task), they would significantly reduce the stochasticity of the behavior policy, which would make it virtually similar in terms of data efficiency to the proposed method.\", \"pros\": [\"A new GAIL formulation for saving on interaction data.\"], \"cons\": [\"Incremental improvement over GAIL\", \"Experiments only on simulated toy problems\", \"No theoretical guarantees for the state screening function (SSF) method\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Combines IRL, adversarial training, and ideas from deterministic policy gradients. Paper is hard to read. MuJoCo results are good.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper lists 5 previous very recent papers that combine IRL, adversarial learning, and stochastic policies. The goal of this paper is to do the same thing but with deterministic policies as a way of decreasing the sample complexity. The approach is related to that used in the deterministic policy gradient work. Imitation learning results on the standard control problems appear very encouraging.\", \"detailed_comments\": \"\\\"s with environment\\\" -> \\\"s with the environment\\\"?\\n\\n\\\"that IL algorithm\\\" -> \\\"that IL algorithms\\\".\\n\\n\\\"e to the real-world environments\\\" -> \\\"e to real-world environments\\\".\\n\\n\\\" two folds\\\" -> \\\" two fold\\\".\\n\\n\\\"adopting deterministic policy\\\" -> \\\"adopting a deterministic policy\\\".\\n\\n\\\"those appeared on the expert\\u2019s demonstrations\\\" -> \\\"those appearing in the expert\\u2019s demonstrations\\\".\\n\\n\\\"t tens of times less interactions\\\" -> \\\"t tens of times fewer interactions\\\".\\n\\nOk, I can't flag all of the examples of disfluency. The examples above come from just the abstract. The text of the paper seems even less well edited. I'd highly recommend getting some help proofreading the work.\\n\\n\\\"Thus, the noisy policy updates could frequently be performed in IL and make the learner\\u2019s policy poor. From this observation, we assume that preventing the noisy policy updates with states that are not typical of those appeared on the expert\\u2019s demonstrations benefits to the imitation.\\\": The justification for filtering is pretty weak. What is the statistical basis for doing so? Is it a form of a standard variance reduction approach? Is it a novel variance reduction approach? 
If so, is it more generally applicable?\\n\\nUnfortunately, the text in Figure 1 is too small. The smallest font size you should use is that of a footnote in the text. As such, it is very difficult to assess the results.\\n\\nAs best I can tell, the empirical results seem impressive and interesting.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Responses\", \"comment\": \"Thank you for your constructive comments on our paper. We will fix typos and clarify the role of SSF in the camera-ready version.\\n\\n> The authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher.\\n\\nThe main purpose of introducing an SSF is not what you mentioned. Since we use the Jacobian of the reward function to derive PG, as opposed to prior IL works, the Jacobian is supposed to have information about how to get close to the expert's behavior for the learner. However, in the IRL objective (4), which is general in the (max-margin) IRL literature, the reward function could know how the expert acts only on the states appearing in the demonstration. In other words, the Jacobian could have information about how to get close to the expert's behavior only on states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which do not appear in the demonstration is just garbage for the learner since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep the garbage as much as possible.\\n\\n> However, the reviewer finds the main contribution rather incremental in its nature. Replacing a stochastic policy with a deterministic one does not change much the original GAIL algorithm, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway)\\n\\nFigure 1 shows worse performance of Ours \\\\setminus SSF, which just replaces a stochastic policy with a deterministic one. 
If Ours \\\\setminus SSF worked well, we would agree with your opinion that the main contribution is just incremental. However, introducing the SSF besides replacing a stochastic policy with a deterministic one is required to imitate the expert's behavior. Hence, we don't agree that the proposed method is just incremental. \\n\\n> My guess is that if someone would use the GAIL algorithm for real problems (e.g, robotic task), they would reduce the stochasticity of the behavior policy, which would make it virtually similar in term of data efficiency to the proposed method.\\n\\nBecause the GAIL algorithm is an on-policy algorithm, it essentially requires many interactions for an update and never uses a behavior policy. Hence, it would not make it virtually similar in terms of data efficiency to the proposed method, which is an off-policy algorithm.\\n\\n> Cons:\\n> - Incremental improvement over GAIL\\n\\nAs mentioned above, we think that the proposed method is not just an incremental improvement over GAIL. \\n\\n> - Experiments only on simulated toy problems \\n\\nWe wonder why you thought the Mujoco tasks are just \\\"toy\\\" problems. Even though those tasks are not real-world problems, they had not been solved until GAIL was proposed. In addition, the variants of GAIL (Baram et al., 2017; Wang et al., 2017; Hausman et al.) also evaluated their performance using those tasks. Hence, we think that those tasks are difficult enough to solve and can be used as a well-suited benchmark to evaluate whether the proposed method is applicable to real-world problems in comparison with other IL algorithms.\"}"
]
} |
SkBYYyZRZ | Searching for Activation Functions | [
"Prajit Ramachandran",
"Barret Zoph",
"Quoc V. Le"
] | The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x * sigmoid(beta * x), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network. | [
"meta learning",
"activation functions"
] | Invite to Workshop Track | https://openreview.net/pdf?id=SkBYYyZRZ | https://openreview.net/forum?id=SkBYYyZRZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"rkQoM7wmM",
"S1UnJvoyz",
"SkVAW7PXM",
"HkYPN1pBM",
"B1sGYLokG",
"S1jZrPjyG",
"HkC-JdjkG",
"Skfsiap7G",
"Hy7GD19gM",
"Sy-QnQHef",
"BkIXIiLNG",
"rJMj2S57z",
"r1a4oTTmz",
"HylYITVZG",
"SkQHfvoA-",
"rk32mXwXz",
"HJ5pEygNM"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"decision",
"comment",
"comment",
"comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1514775194822,
1510858670123,
1514774988299,
1517249633188,
1510856979383,
1510860035065,
1510862598227,
1515211674177,
1511810827232,
1511500825553,
1515791902283,
1514982553772,
1515211572929,
1512523383570,
1509810747118,
1514775476480,
1515349185810
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper503/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper503/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper503/Authors"
],
[
"ICLR.cc/2018/Conference/Paper503/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper503/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper503/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper503/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper503/Authors"
],
[
"ICLR.cc/2018/Conference/Paper503/AnonReviewer4"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper503/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Re: Reviewer1\", \"comment\": \"1. Can the reviewer explain further why our work is not novel? Our activation function and the method to find it have not been explored before, and our work holds the promise of improving representation learning across many models. Furthermore, no previous work has come close to our level of thorough empirical evaluation. This type of contribution is as important as novelty -- it can be argued that the resurgence of CNNs is primarily due to conceptually simple empirical studies demonstrating their effectiveness on new datasets.\\n\\n2. We respectfully disagree with the reviewer that theoretical depth is necessary to be accepted. Following this argument, we can also argue that many extremely useful techniques in representation / deep learning, such as word2vec, ReLU, BatchNorm, etc, should not be accepted to ICLR because the original papers did not supply theoretical results about why they worked. Our community has typically followed that paradigm of discovering techniques experimentally, with further work analyzing the technique. We believe our thorough and fair empirical evaluation provides a solid foundation for further work analyzing the theoretical properties of Swish.\\n\\n3. We experimented with the leaky ReLU using alpha = 0.5 on Inception-ResNet-v2 using the same hyperparameter sweep, and did not find any improvement over the alpha used in our work (which was suggested by the original paper that proposed leaky ReLUs).\"}",
"{\"title\": \"Figure 7 would be more helpful if more typical beta values were shown\", \"comment\": \"Given the distribution of actual learned \\u03b2 values for Swish that were presented in Figure 7, it would be more instructive to show \\u03b2=0, \\u03b2=0.3, \\u03b2=0.5, \\u03b2=1.0 in Figures 4&5. While \\u03b2=10.0 is interesting to look at in the 1st derivative plot, it doesn\\u2019t seem to have been learned as a useful value for \\u03b2.\"}",
"{\"title\": \"Re: Reviewer4\", \"comment\": \"The reviewer suggested \\u201cSince the paper is fairly experimental, providing code for reproducibility would be appreciated\\u201d. We agree, and we will open source some of the experiments around the time of acceptance.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The authors propose to use Swish and show that it performs significantly better than ReLUs on state-of-the-art vision models. Reviewers and anonymous commenters counter that PReLUs should be doing quite well too. Unfortunately, the paper falls in the category where it is hard to prove the utility of the method through one paper alone, and broader consensus relies on reproduction by the community. As a result, I'm going to recommend publishing to a workshop for now.\", \"decision\": \"Invite to Workshop Track\"}",
"{\"title\": \"Related work\", \"comment\": \"You mention this in the body, but it would be helpful in the related work if you pointed out that (Hendrycks & Gimpel, 2016) considered this activation function but found a slightly different version to be better, and that Elfwing et al. already proposed Swish-1 under a different name.\\n\\nI see you went from sigmoid(x) -> sigmoid(beta * x) to avoid outright duplication, but empirically it looks like Swish-1 is equal to or better than Swish? \\n\\nTable 3 is a little misleading - the magnitude of the differences is what we really care about, and those magnitudes are quite small.\\n\\nFigure 8 is a little misleading - ReLU's are far and away the worst on that particular dataset+model, I imagine the plot for existing work like PReLU, which gives basically the same performance, would look very different. \\n\\nIn the original version, you bolded the non-ReLU activations which provide basically the same perf, but you don't in the new version - why not? PReLU is often the same as Swish, but without the bolding it's a lot harder to read.\\n\\nThe differences in perf are small enough to make me think this is just hyperparameter noise. For instance, you try 2 learning rates for the NMT results, why only 2? What 2 did you choose? Why did you choose them? If you had introduced PReLU, would its numbers be higher? Concrete questions aside, I have a very hard time trusting this paper.\"}",
"{\"title\": \"Figure 8 should show PReLU given data in Table 6\", \"comment\": \"The Figure 8 plot should show PReLU, not ReLU, since given the data in Table 6, PReLU is better than ReLU in every case.\\n\\nIn addition, in many of the other results in the paper LReLU is slightly better than PReLU. The two differences are that LReLU has \\u03b1=0.01 while PReLU starts at \\u03b1=.25, and that \\u03b1 in PReLU is learnable. Looking closely at the Swish and PReLU plots, a more comparable starting initialization for PReLU would be \\u03b1=.10, and it would be somewhat closer to the value that you use for LReLU.\\n\\nWe suggest rerunning PReLU with \\u03b1=.10 and putting this result in Figure 8 and Table 6.\"}",
"{\"title\": \"Insights from Learnable Swish parameter (\\u03b2)\", \"comment\": \"Figure 7 shows the interesting feature that \\u03b2=1 is the most prevalent single \\u03b2 value after training. Since Swish smoothly varies with \\u03b2, one can only assume that the reason for this inconsistency was that \\u03b2 was initialized to 1 and that during training this parameter was not adjusted in many cases. The text of the paper should clearly state the initialization value of \\u03b2.\\n\\nThe more interesting aspect of this distribution is that over 2x more \\u03b2 values were learned to be better in the range of (0.0 to 0.9) than at the (assumed) starting value of \\u03b2=1. \\u03b2\\u2019s in this range suggest that larger negative values must have some advantage. \\n\\nIt would be very interesting to understand whether the distribution of \\u03b2 values changes in the different layers of the neural network. Are the \\u03b2 in the range (0.0 to 0.9) more important at higher or lower levels? It would also be instructive to see the effects of starting with \\u03b2 at another initial value.\\n\\nSwish approaches x/2 as \\u03b2 approaches inf, why is this better than approaching x in the manner that PReLU does?\\n\\nWhile the paper asserts that the non-monotonic feature of Swish is an important aspect, there is nothing that explains why this could be an advantage. In fact, Figure 6 shows most negative preactivations are between -6 and 0, and given that Figure 7 shows most \\u03b2 between 0 and 1, most negative values will not be affected by non-monotonic behavior. Might the real lesson of the paper be that a smooth activation function with a smooth and continuous derivative function with a \\\"learnable\\\" small domain of negative values is more important for learning and generalization than non-monotonicity?\"}",
"{\"title\": \"Re: Reviewer3\", \"comment\": \"Thank you for the comment.\\n\\n[[Our activation only beats other nonlinearities by \\u201ca small fraction\\u201d]] First of all, we question the conventional wisdom that ReLU greatly outperforms tanh or sigmoid units in modern architectures. While AlexNet may benefit from the optimization properties of ReLU, modern architectures use BatchNorm, which eases optimization even for sigmoid and tanh units. The BatchNorm paper [1] reports around a 3% gap between sigmoid and ReLU (it\u2019s unclear if the sigmoid experiment was with tuning and this experiment is done on the older Inception-v1). The PReLU paper [2], cited 1800 times, proposes PReLU and reports a gain of 1.2% (Figure 3), again on a much weaker baseline. We cannot find any evidence in recent work that suggests that the gap between sigmoid / tanh units and ReLU is huge. The gain produced by Swish, around 1% on top of much harder baselines such as Inception-ResNet-v2, is already a third of the gain produced by ReLU and on par with the gains produced by PReLU. \\n\\n[[Small fraction gained due to hyperparameter tuning]] We want to emphasize how hard it is to get improvements on these state-of-the-art models. The models we tried (e.g., Inception-ResNet-v2) have been **heavily tuned** using ReLUs. The fact that Swish improves on these heavily tuned models with very minor additional tuning is impressive. This result suggests that models can simply replace the ReLUs with Swish units and enjoy performance gains. We believe the drop-in-replacement property of Swish is extremely powerful because one of the key impediments to the adoption of a new technique is the need to run many additional experiments (e.g., a lot of hyperparameter tuning). This achievement is impactful because it enables the replacement of ReLUs that are widely used across research and industry.\\n\\n[[Searching for betas]] The reviewer also misunderstands the betas in Swish. 
When we use Swish-beta, one does not need to search for the optimal value of beta because it can be learned by backpropagation.\\n\\n[[Gradient on the negative side]] We do not claim that Swish is the first activation function to utilize gradients in the negative preactivation regime. We simply suggested that Swish may benefit from the same properties utilized by LReLU and PReLU.\\n\\n[1] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In JMLR, 2015. (See Figure 3: https://arxiv.org/pdf/1502.03167.pdf )\\n[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In CVPR, 2015 (See Table 2: https://arxiv.org/pdf/1502.01852.pdf )\"}",
"{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper utilizes reinforcement learning to search for new activation functions. The search space is a combination of a set of unary and binary functions. The search result is a new activation function named the Swish function. The authors also run a number of ImageNet experiments, and one NTM experiment.\", \"comments\": \"1. The search function set and method is not novel. \\n2. There is no theoretical explanation of why the searched activation is better.\\n3. For leaky ReLU, using a larger alpha will lead to a better result, e.g., alpha = 0.3 or 0.5. I suggest adding an experiment with leaky ReLU with a larger alpha. This result has been shown in previous work.\\n\\nOverall, I think this paper does not meet the ICLR novelty standard. I recommend submitting this paper to the ICLR workshop track.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Another approach for arriving at proven concepts on activation functions\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Authors propose a reinforcement learning based approach for finding a non-linearity by searching through combinations from a set of unary and binary operators. The best one found is termed the Swish unit; x * sigmoid(b*x).\\n\\nThe properties of Swish, like allowing information flow on the negative side and linear nature on the positive side, have been proven to be important for better optimization in the past by other functions like LReLU, PLReLU etc. As pointed out by the authors themselves, for b=1 Swish is equivalent to SiL proposed in Elfwing et al. (2017).\\n\\nIn terms of experimental validation, in most cases the increase in performance when using Swish as compared to other models is a very small fraction. Again, the authors do state that \\\"our results may not be directly comparable to the results in the corresponding works due to differences in our training steps.\\\" \\n\\nBased on Figure 6, the authors claim that the non-monotonic bump of Swish on the negative side is a very important aspect. More explanation is required on why it is important and how it helps optimization. The distribution of learned b in Swish for different layers of a network can be interesting to observe.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply\", \"comment\": \"1. Novelty\\n\\nThe methodology of searching has been used in Genetic Programming for a long time. The RNN controller has been used in many papers from Google Brain. This paper's contribution is using RL to search in a GP flavor. Although it is new in the activation function search field, from a methodology view, it is not novel.\\n\\n2. Theoretical depth\\n\\nActually, BatchNorm and ReLU provided explanations of why they work in their original papers, and those explanations were accepted by the community for a long time. I understand the deep learning community's experimental flavor, but the activation function is a fundamental problem in understanding how neural networks work. Swish performs similarly or slightly better compared to the commonly used activation functions. Without any theoretical explanation, it is hard to acknowledge it as groundbreaking research. What's more, different activation functions may require different initializations and learning rates. I respect that the authors have enough computation power to sweep, but without any theoretical explanation, the paper is more like an experiment report rather than a good ICLR paper.\"}",
"{\"title\": \"Reply:\", \"comment\": \"Yes, I do agree that ReLU is one of the major reasons for the improvement of deep learning models. But, it is not just because ReLU was able to experimentally beat the performance of existing non-linearities by a small fraction.\\n\\nThe fractional increase in performance on benchmarks can be because of various reasons, not just switching the non-linearity. For example, in many cases simply a larger batch size can result in a small fractional change in performance. The hyper-parameter settings in which other non-linearities might perform better can be different than the ones more suitable for the proposed non-linearity. Also, I do not agree that the search factor helps researchers save time on trying out different non-linearities, since one still has to spend time searching for the best 'betas' (which will result in a small improvement over benchmarks) for every dataset. I would rather use a better understood non-linearity which gives reasonable results on benchmarks.\\n\\nThe properties of the non-linearities proposed in the article, like \\\"allowing information flow on the negative side and linear nature on the positive side\\\" (also mentioned in my review), have been proven to be important for better optimization in the past by other functions like LReLU, PLReLU etc.\\n\\nThe results from the article show that Swish-1 (or SiL from Elfwing et al. (2017)) performs the same as Swish.\"}",
"{\"title\": \"Clearing up concerns and misunderstandings\", \"comment\": \"We thank the reviewers for their comments and feedback. We are extremely surprised by the low scores for the paper that proposes a novel method that finds better activation functions, one of which has the potential to be better than ReLUs. During the discussion with the reviewers, we have found a few major concerns and misunderstandings amongst the reviewers, and we want to bring them up for general discussion:\\n\\nThe reviewers are concerned that our activation only beats other nonlinearities by \\u201ca small fraction\\u201d. First of all, we question the conventional wisdom that ReLU greatly outperforms tanh or sigmoid units in modern architectures. While AlexNet may benefit from the optimization properties of ReLU, modern architectures use BatchNorm, which eases optimization even for sigmoid and tanh units. The BatchNorm paper [1] reports around a 3% gap between sigmoid and ReLU (it\u2019s unclear if the sigmoid experiment was with tuning and this experiment is done on the older Inception-v1). The PReLU paper [2], cited 1800 times, proposes PReLU and reports a gain of 1.2%, again on a much weaker baseline. We cannot find any evidence in recent work that suggests that the gap between sigmoid / tanh units and ReLU is huge. The gain produced by Swish, around 1% on top of much harder baselines such as Inception-ResNet-v2, is already a third of the gain produced by ReLU and on par with the gains produced by PReLU. \\n\\nThe reviewers are concerned that the small gains are simply due to hyperparameter tuning. We stress here that unlike many prior works, the models we tried (e.g., Inception-ResNet-v2) have been **heavily tuned** using ReLUs. The fact that Swish improves on these heavily tuned models with very minor additional tuning is impressive. This result suggests that models can simply replace the ReLUs with Swish units and enjoy performance gains. 
We believe the drop-in-replacement property of Swish is extremely powerful because one of the key impediments to the adoption of a new technique is the need to run many additional experiments (e.g., a lot of hyperparameter tuning). This achievement is impactful because it enables the replacement of ReLUs that are widely used across research and industry.\\n\\nThe reviewers are also concerned that our activation function is too similar to the work by Elfwing et al. When we conducted our research, we were honestly not aware of the work by Elfwing et al. (their paper was first posted fairly recently on arxiv in Feb 2017 and, to the best of our knowledge, not accepted to any mainstream conference). That said, we have happily cited their work and credited their contributions. We are also happy to reuse the name \\u201cSiL\\u201d proposed by Elfwing et al. if the reviewers see fit. In that case, Elfwing et al. should be thrilled to know that their proposal is validated through a thorough search procedure. We also want to emphasize a number of key differences between our work and Elfwing et al. First, the focus of our paper is to search for activation functions. Any researcher can use our recipes to drop in new primitives to search for better activation functions. Furthermore, our work has much more comprehensive empirical validation. Elfwing et al. only conducted experiments on relatively shallow reinforcement learning tasks, whereas we evaluated on challenging supervised benchmarks such as ImageNet with extremely tough baselines and equal amounts of tuning for fairness. We believe that we have conducted the most thorough evaluation of activation functions among any published work.\\n\\nPlease reconsider your rejection decisions.\\n\\n[1] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015. 
(See Figure 3: https://arxiv.org/pdf/1502.03167.pdf )\\n[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In CVPR, 2015 (See Table 2: https://arxiv.org/pdf/1502.01852.pdf )\"}",
"{\"title\": \"Well written paper and well conducted experiments.\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors use reinforcement learning to find new potential activation functions from a rich set of possible candidates. The search is performed by maximizing the validation performance on CIFAR-10 for a given network architecture. One candidate stood out and is thoroughly analyzed in the rest of the paper. The analysis is conducted across image datasets and one translation dataset on different architectures and numerous baselines, including recent ones such as SELU. The improvement is marginal compared to some baselines but systematic. A signed test shows that the improvement is statistically significant.\\n\\nOverall the paper is well written and the lack of theoretical grounding is compensated by a reliable and thorough benchmark. While a new activation function is not exciting, improving basic building blocks is still important for the community. \\n\\nSince the paper is fairly experimental, providing code for reproducibility would be appreciated.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"non-monotonic vs. small negative values for negative pre-activations\", \"comment\": \"You state: \\\"In Figure 6, a large percentage of preactivations fall inside the domain of the bump (\\u22125 \\u2264 x \\u2264 0), which indicates that the non-monotonic bump is an important aspect of Swish.\\\"\\n\\nIt seems that non-monotonic behavior is an artifact of your function that could have negative consequences by making a \\\"bumpier\\\" loss surface for optimizers. What is the value of Swish approaching 0 as x heads to -inf? Why wouldn't small negative values be sufficient for all negative pre-activations (x \\u2264 -5)? \\n\\nWouldn't something like CELU with a small alpha in the long run be better? CELU paper: https://arxiv.org/pdf/1704.07483.pdf\"}",
"{\"title\": \"Re: Reviewer3\", \"comment\": \"We don\\u2019t completely understand the reviewer\\u2019s rationale for rejection. Is it because of the lack of novelty, the inconsistent gains, or the work being insignificant?\\n\\nFirst, in terms of the work being significant, we want to emphasize that ReLU is the cornerstone of deep learning models. Being able to replace ReLU is extremely impactful because it produces a gain across a large number of models. So in terms of impact, we believe that our work is significant.\\n\\nSecondly, in terms of inconsistent gains, the signed tests already confirm that the gains are statistically significant in our experiments. These results suggest that switching to Swish is an easy and consistent way of getting an improvement regardless of which baseline activation function is used. Unlike previous studies, the baselines in our work are extremely strong: they are state-of-the-art models where the models are built with ReLUs as the default activation. Furthermore, the same amount of tuning was used for every activation function, and in fact, many non-Swish activation functions actually got more tuning. Thus, it is unreasonable to expect a huge improvement. That said, in some cases, Swish on Imagenet makes a more than 1% top-1 improvement. For context, the gap between Inception-v3 and Inception-v4 (a year of work) is only 1.2%.\\n\\nFinally, in terms of novelty, our work differs from Elfwing et al. (2017) in a number of significant ways. They just propose a single activation function, whereas our work searches over a vast space of activation functions to find the best empirically performing activation function. The search component is important because we save researchers from the painful process of manually trying out a number of individual activation functions in order to find one that outperforms ReLU (i.e., graduate student descent). 
The activation function found by this search, Swish, is more general than the one proposed by Elfwing et al. (2017). Another key contribution is our thorough empirical study. Their activation function was tested only on relatively shallow reinforcement learning models. We performed a thorough experimental evaluation on many challenging, deep, large-scale supervised models with extremely strong baselines. We believe these differences are significant enough to differentiate us. \\n\\nThe non-monotonic bump, which is controlled by beta, has gradients for negative preactivations (unlike ReLU). We have plotted the beta distribution over each layer's Swish here: https://imgur.com/a/AIbS2 . Note this is on the Mobile NASNet-A model, which has many layers composed in parallel (similar to Inception and unlike ResNet). The plot suggests that the tuneable beta is flexibly used. Early layers use large values of beta, which corresponds to ReLU-like behavior, whereas later layers tend to stay around the [0, 1.5] range, corresponding to a more linear-like behavior.\"}",
"{\"title\": \"No response from authors\", \"comment\": \"The authors appear to have made a decision to ignore all comments which are not from reviewers. To be clear, if I were a reviewer, I would score this paper as a 4 with confidence of 4.\\n\\nIn addition to the above issues, I'd point out that ReLU isn't the only baseline here - to claim a worthwhile contribution, they also need to demonstrate improvement over functions such as PReLU, where the empirical evidence is even weaker or non-existent.\"}"
]
} |
rkfbLilAb | Improving Search Through A3C Reinforcement Learning Based Conversational Agent | [
"Milan Aggarwal",
"Aarushi Arora",
"Shagun Sodhani",
"Balaji Krishnamurthy"
] | We develop a reinforcement learning based search assistant which can assist users through a set of actions and a sequence of interactions to enable them to realize their intent. Our approach caters to subjective search where the user is seeking digital assets such as images, which is fundamentally different from the tasks which have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent, which accelerates the bootstrapping of the agent. We develop an A3C algorithm based context preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states. | [
"Subjective search",
"Reinforcement Learning",
"Conversational Agent",
"Virtual user model",
"A3C",
"Context aggregation"
] | Reject | https://openreview.net/pdf?id=rkfbLilAb | https://openreview.net/forum?id=rkfbLilAb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"Hy3jUZaXf",
"H1f_jh_ef",
"Hy4tIW5xf",
"HkAuVWpmz",
"HywNB1pBM",
"Byxcyzp7M",
"SkDTDVamf",
"SkPmHMpXz",
"BkL816Ygf",
"ryJv-MaXM",
"H1QVkGpmM"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1515161252369,
1511734121566,
1511818885213,
1515160693860,
1517249839452,
1515163528274,
1515173823497,
1515164959112,
1511800654276,
1515163990888,
1515163435116
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper370/Authors"
],
[
"ICLR.cc/2018/Conference/Paper370/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper370/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper370/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper370/Authors"
],
[
"ICLR.cc/2018/Conference/Paper370/Authors"
],
[
"ICLR.cc/2018/Conference/Paper370/Authors"
],
[
"ICLR.cc/2018/Conference/Paper370/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper370/Authors"
],
[
"ICLR.cc/2018/Conference/Paper370/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Experimental Details\", \"comment\": \"We evaluated our system with real humans and added the results to section 4.3. Please refer to the appendix (section 6.2) for some conversations between actual users and the trained agent. For performing experiments with humans, we developed a chat interface where an actual user can interact with the agent during their search. The implementation details of the chat interface have been discussed in the appendix (section 6.1.1). User action is obtained from the user utterance using a rule-based Natural language unit (NLU) which uses dependency tree based syntactic parsing, stop words and pre-defined rules (as described in appendix, section 6.1.2). You may refer to the supplementary material (footnote-2, page-9) which contains a video demonstrating search on our conversational search interface.\\n\\nIn order to evaluate our system with the virtual user, we simulate validation episodes between the agent and the virtual user after every training episode. This simulation comprises a sequence of alternating actions between the user and the agent. The user action is sampled using the user model while the agent action is sampled using the policy learned till that point. Corresponding to a single validation episode, we determine two performance metrics. The first is the total reward obtained at the end of the episode. The values of the states observed in the episode are obtained using the model; the average of the state values observed during the validation episode is used as the second performance metric. The average of these values over different validation episodes is taken and depicted in figures 3, 4, 5 and 6.\"}",
"{\"title\": \"Lack of context\", \"rating\": \"2: Strong rejection\", \"review\": \"This paper proposes to use RL (Q-learning and A3C) to optimize the interaction strategy of a search assistant. The method is trained against a simulated user to bootstrap the learning process. The algorithm is tested on some search base of assets such as images or videos.\\n\\nMy first concern is about the proposed reward function which is composed of different terms. These are very engineered and cannot easily transfer to other tasks. Then the different algorithms are assessed according to their performance w.r.t. these rewards. They of course improve with training since this is the purpose of RL to optimize these numbers. Assessment of a dialogue system should be done according to metrics obtained through actual interactions with users, not according to auxiliary tasks etc. \\n\\nBut above all, this paper incredibly lacks context in both RL and dialogue systems. The authors cite a 2014 paper when it comes to referring to Q-learning (Q-learning was first published in 1989 by Watkins). The first time dialogue was cast as an RL problem was in 1997 by E. Levin and R. Pieraccini (although it had been suggested before by M. Walker). User simulation was proposed at the same time and further developed in the early 2000s by Schatzmann, Young, Pietquin etc. Using LSTMs to build user models was proposed in 2016 (Interspeech) by El Asri et al. Building efficient reward functions for RL-based conversational systems has also been studied for more than 20 years, with early work by M. Walker on PARADISE (@ACL 1997) but also via inverse RL by Chandramohan et al (2011). A2C (which is a single-agent version of A3C) has been used by Strub et al (@ IJCAI 2017) to optimize visually grounded dialogue systems. RL-based recommender systems have also been studied before (e.g. Shani in JMLR 2005). 
\\n\\nI think the authors should first read the state of the art in the domain before they suggest new solutions.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An interesting problem but an unconvincing experimental protocol\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper \\\"IMPROVING SEARCH THROUGH A3C REINFORCEMENT LEARNING BASED CONVERSATIONAL AGENT\\\" proposes to define an agent to guide users in information retrieval tasks. By proposing refinements of the query, categorizations of the results or some other bookmarking actions, the agent is supposed to help the user in achieving his search. The proposed agent is learned via reinforcement learning.\\n\\nMy concern with this paper is about the experiments, which are only based on simulated agents, as is the case for learning. While it can be questionable for learning (but we understand why it is difficult to overcome), it is very problematic for the experiments not to have anything that demonstrates the usability of the approach in a real-world scenario. I have serious doubts about the performance of such an artificially learned approach for achieving real-world search tasks. Also, for me the experimental section is not sufficiently detailed, which leads to non-reproducible results. Moreover, the authors should have considered baselines (only the two proposed agents are compared, which is clearly not sufficient). \\n\\nAlso, both models have some issues from my point of view. First, the Q-learning method looks very complex: how could we expect to get an accurate model with 10^7 states? No generalization over situations is done here; examples of trajectories have to be collected for each individual considered state, which looks very huge (especially if we think about the number of possible trajectories in such an MDP). The second model is able to generalize from similar situations thanks to the neural architecture that is proposed. However, I have some concerns about it: why keep the history of actions in the inputs since it is captured by the LSTM cell? 
It is redundant information that might disturb the process. Secondly, the proposed loss looks very heuristic to me; it is difficult to understand what is really optimized here. In particular, the entropy loss function looks strange to me. Is it classical? Are there some references for such a method to maintain some exploration ability? I understand the need for exploration, but including it in the loss function reduces the interpretability of the objective (wouldn't it be preferable to use a more classical loss but with an epsilon greedy policy?).\", \"other_remarks\": [\"In the beginning of the \\\"varying memory capacity\\\" section, what is \\\"100, 150 and 250\\\"? Time steps? What is the unit? Seconds?\", \"I did not understand the \\\"Capturing search context at local and global level\\\" section at all\", \"In the entropy loss formula, the two negation signs could be removed\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"We evaluated our system by performing human evaluation and updated our paper with corresponding results, please refer to section 4.3 in the updated paper.\", \"comment\": \"We evaluated our system trained using the A3C algorithm through professional designers who regularly use an image search site for their design tasks and asked them to compare our system with the conventional search interface in terms of engagement, time required and ease of performing the search. In addition to this, we asked them to rate our system on the basis of information flow, appropriateness and repetitiveness. The evaluation shows that although we trained the bootstrapped agent through the user model, it performs decently well with actual users, driving their search forward with appropriate actions without being very repetitive. The comparison with conventional search shows that conversational search is more engaging. In terms of search time, it resulted in more search time for some designers while it reduced the time required to find the desired results in other cases; in the majority of cases it required about the same time. The designers are regular users of the conventional search interface and well versed with it; even then, the majority of them did not face any cognitive load while using our system, with one-third of them believing that it is easier than conventional search.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"meta score: 4\\n\\nThis paper is primarily an application paper applying known RL techniques to dialogue. Very little reference to the extensive literature in this area.\", \"pros\": [\"interesting application (digital search)\", \"revised version contains subjective evaluation of experiments\"], \"cons\": [\"limited technical novelty\", \"very weak links to the state-of-the-art, missing many key aspects of the research domain\"]}",
"{\"title\": \"A3C and rollouts are better than REINFORCE\", \"comment\": \"Thanks for your reviews.\\n\\nThe standard REINFORCE method for policy gradient has high variance in gradient estimates [1]. Moreover, while optimising and weighing the likelihood of performing an action in a given state, it does not measure the reward with respect to a baseline reward, due to which the agent is not able to compare different actions. This may result in the gradient pointing in the wrong direction, since it does not know how good an action is with respect to other good actions in a given state. This may weaken the probability with which the agent takes the best action (or better actions).\\n\\nIt has been shown that using a baseline value for a state to critique the rewards obtained for performing different actions in that state reduces the variance in gradient estimates and provides a correct appraisal for an action taken in a given state (good actions get a positive appraisal) without requiring other actions to be sampled [2]. Moreover, it has been shown that if the baseline value of the state is learned through function approximation, we get unbiased or only slightly biased gradient estimates with reduced variance, achieving a better bias-variance tradeoff. Due to these advantages we use the A3C algorithm, since it learns the state value function along with the policy and provides an unbiased gradient estimator with reduced variance.\\n\\nIn standard policy gradient methods, multiple episodes are sampled before updating the parameters using the gradients obtained over these episodes. It has been observed that sampling gradients over multiple episodes, which can span a large number of turns, results in higher variance in the gradient estimates, due to which the model takes more time to learn [3]. 
The higher variance is the result of the stochastic nature of the policy, since sampling random actions initially (when the agent has not learned much) over multiple episodes before updating the parameters compounds the variance. For this reason, we instead use truncated rollouts, where we update the parameters of the policy and value model after every n steps in an episode; these are proven to be much more effective and result in faster learning.\\n\\n[1] : Sehnke, Frank, et al. \\\"Parameter-exploring policy gradients.\\\" Neural Networks 23.4 (2010): 551-559.\\n[2] : Sutton, Richard S., et al. \\\"Policy gradient methods for reinforcement learning with function approximation.\\\" Advances in neural information processing systems. 2000\\n[3] : Tesauro, Gerald, and Gregory R. Galperin. \\\"On-line policy improvement using Monte-Carlo search.\\\" Advances in Neural Information Processing Systems. 1997. ; Gabillon, Victor, et al. \\\"Classification-based policy iteration with a critic.\\\" (2011).\"}",
"{\"title\": \"Reward Function and Evaluation\", \"comment\": \"Thanks for your reviews.\\n\\nWe have modeled rewards specifically for the domain of digital asset search in order to obtain a bootstrapped agent which performs reasonably well in assisting humans in their search, so that it can be fine-tuned further based on interaction with humans. As our problem caters to the subjective task of searching digital assets, which is different from more common objective tasks such as reservation, it is difficult to determine generic rewards based on whether the agent has been able to provide exact information to the user, unlike objective search tasks where rewards are measured based on whether the required information has been provided to the user. This makes reward transferability between subjective and objective search difficult. Our modeled rewards are, however, easily transferable to search tasks such as e-commerce sites, where the search task comprises a subjective component (in addition to objective preferences such as price).\\n\\nSince we aim to optimise dialogue strategy and do not generate dialogue utterances, we assign rewards corresponding to the appropriateness of the action performed by the agent considering the state and history of the search. We have used some rewards such as task success (based on implicit and explicit feedback from the user during the search), which is also used in the PARADISE framework [1]. At the same time, several metrics used by PARADISE cannot be used for modelling rewards. For instance, the time required (number of turns) for the user to find desired results cannot be penalised, since it is possible that the user finds the system engaging and helpful in refining the results, which may increase the number of turns in the search.\\n\\nWe evaluated our system through humans and added the results to the paper; please refer to section 4.3 in the updated paper. 
You may refer to appendix (section 6.2) for some conversations between actual users and the trained agent.\\n\\nThanks for suggesting related references, we have updated our paper based on the suggestions. Kindly suggest any other further improvements.\\n\\n[1] Walker, Marilyn A., et al. \\\"PARADISE: A framework for evaluating spoken dialogue agents.\\\" Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 1997.\"}",
"{\"title\": \"State and Reward Modeling\", \"comment\": \"Thanks for your reviews.\\n\\nOur state representation comprises the history of actions taken by the user and the agent (along with other variables as described in the state space section 3.3) and not only the most recent action taken by the user. User action is obtained from the user utterance using a rule-based Natural language unit (NLU) which uses dependency tree based syntactic parsing, stop words and pre-defined rules (as described in appendix, section 6.1.2). We capture the search context by including the history of actions taken by the user and the agent in the state representation. The state at a turn in the conversation comprises the agent and user actions in the last \\u2018k\\u2019 turns. Since a search episode can extend indefinitely and the suitability & dependence of the action taken by the agent can go beyond the last \\u2018k\\u2019 turns, we include an LSTM in our model which aggregates the local context represented in the state (\\u2018local\\u2019 in terms of the state including only the recent user and agent actions) into a global context to capture such long term dependencies. We analyse the trend in reward and state values obtained by comparing it with the case when we do not include the history of actions in the state and let the LSTM learn the context alone (section 4.1.3).\\n\\nOur system does not generate utterances; it instead selects an utterance based on the action taken by the agent from a corpus of possible utterances. This is because we train our agent to assist users in their search by optimising dialogue strategy and not the actual dialogue utterances made by the agent. 
Though we aim to pursue this as future work where we generate agent utterances and train NLU for obtaining user action in addition to optimising dialogue strategy (which we have done in our current work).\\n\\nSince we aim to optimise dialogue strategy and do not generate dialogue utterances, we assign the rewards corresponding to the appropriateness of the action performed by the agent considering the state and history of the search. We have used some rewards such as task success, extrinsic rewards based on feedback signals from the user and auxiliary rewards based on performance on auxiliary tasks. These rewards have been modelled numerically on a relative scale.\\n\\nWe have evaluated our model through humans and updated the paper, please refer to section 4.3 for human evaluation results and appendix (section 6.2) for conversations between actual users and trained agent.\"}",
"{\"title\": \"lack of details\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper describes reinforcement learning techniques for digital asset search. The RL techniques consist of A3C and DQN. This is an application paper since the techniques described already exist. Unfortunately, there is a lack of detail throughout the paper and therefore it is not possible for someone to reproduce the results if desired. Since there is no corpus of message response pairs to train the model, the paper trains a simulator from logs to emulate user behaviours. Unfortunately, there is no description of the algorithm used to obtain the simulator. The paper explains that the simulator is obtained from log data, but this is not sufficient. The RL problem is described at a very high level in the sense that abstract states and actions are listed, but there is no explanation about how those abstract states are recognized from the raw text and there is no explanation about how the actions are turned into text. There seems to be some confusion in the notion of state. After describing the abstract states, it is explained that actions are selected based on a history of states. This suggests that the abstract states are really abstract observations. In fact, this becomes obvious when the paper introduces the RNN where a hidden belief is computed by combining the observations. The rewards are also described at a high level, but it is not clear how exactly they are computed. The digital search application is interesting, however a detailed description with comprehensive experiments is needed for the publication of an application paper.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Details of User Model\", \"comment\": \"Due to legal issues, we cannot share the query session log data. We have tried to provide details of our algorithm which can be used for obtaining a user model from any given session log data. The mapping between interactions in the session log data and user actions which the agent can understand has been discussed in table 3. Using these mappings, we obtain a probabilistic user model (the algorithm has been described in section 3.5). Figure 1 in the paper demonstrates how interactions in a session can be mapped to user actions.\\n\\nKindly mention the sections which are lacking details and missing information in the algorithm for the user model, which will help us in improving our paper.\"}",
"{\"title\": \"Q-learning and A3C System Modeling\", \"comment\": \"Q-Learning Model:\\nWe experimented with the Q-learning approach in order to obtain baseline results for the task defined in the paper, since RL has not been applied before for providing assistance in searching digital assets. The large size of the state space requires a large amount of training data for the model to learn useful representations, since the number of parameters is directly proportional to the size of the state space, which is indicative of the complexity of the model. The number of training episodes is not a problem in our case since we leverage the user model to sample interactions between the learning agent and the user. This indeed is reflected in figure 6 (left), which shows that the model converges when trained on a sufficient number of episodes.\\n\\nSince our state space is discrete, we have used a table storage method for Q-learning. Kindly elaborate on what generalisation of states means in this context so that we may elaborate more and improve our paper.\", \"a3c_model\": \"We capture the search context by including the history of actions taken by the user and the agent in the last \\u2018k\\u2019 turns explicitly in the state representation. Since a search episode can extend indefinitely and the suitability & dependence of the action taken by the agent can go beyond the last \\u2018k\\u2019 turns, we include an LSTM in our model which aggregates the local context represented in the state (\\u2018local\\u2019 in terms of including only the recent user and agent actions) to capture such long term dependencies, and we analyse the trend in reward and state values obtained by comparing it with the case when we do not include the history of actions in the state and let the LSTM learn the context alone (section 4.1.3).\\n\\nIn varying memory capacity, by LSTM size (100, 150, 250), we mean the dimension of the hidden state h of the LSTM. 
With a larger number of units, the LSTM can capture much richer latent representations and long term dependencies. We have explored the impact of varying the hidden state size in the experiments (section 4.1.2).\\n\\n\\nThe entropy loss function has been studied to provide exploration ability to the agent while optimising its action strategy in the Actor-Critic model [1]. While an epsilon-greedy policy has been successfully used in many RL algorithms for achieving an exploration vs. exploitation balance, it is commonly used in off-policy algorithms like Q-learning where the policy is not represented explicitly. The model is trained on observations which are sampled following the epsilon-greedy policy, which is different from the actual policy learned in terms of the state-action value function. \\n\\nThis is in contrast to A3C, where we apply an on-policy algorithm such that the agent takes actions according to the learned policy and is trained on observations which are obtained using the same policy. This policy is optimized both to maximise the expected reward in an episode and to incorporate the exploration behavior (which is enabled by using the exploration loss). Using an epsilon-greedy policy would disturb the on-policy behavior of the learned agent, since it would then learn on observations and actions sampled according to the epsilon-greedy policy, which would be different from the actual policy learnt, which we represent as the explicit output of our A3C model.\\n\\nThe loss described in the paper optimises the policy to maximise the expected reward obtained in an episode, where the expectation is taken with respect to the different possible trajectories that can be sampled in an episode. In the A3C algorithm, the standard policy gradient method is modified by replacing the reward term with an advantage term, which is the difference between the reward obtained by taking an action and the value of the state, which is used as a baseline (complete derivation in [2]). 
The learned baseline enforces that parameters are updated in a way that the likelihood of actions that result in rewards better than the value of the state is increased, while it is decreased for those which provide rewards lower than the average action in that state.\\n\\n\\n\\n[1] : Mnih, Volodymyr, et al. \\\"Asynchronous methods for deep reinforcement learning.\\\" International Conference on Machine Learning. 2016.\\n[2] : Sutton, R., et al. \\\"Policy gradient methods for reinforcement learning with function approximation.\\\" NIPS, 1999.\"}"
]
} |
BkN_r2lR- | Identifying Analogies Across Domains | [
"Yedid Hoshen",
"Lior Wolf"
] | Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision. | [
"unsupervised mapping",
"cross domain mapping"
] | Accept (Poster) | https://openreview.net/pdf?id=BkN_r2lR- | https://openreview.net/forum?id=BkN_r2lR- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"ByECSWv4z",
"HyJww3cEz",
"BkyDnj5VG",
"SJVYXJTHf",
"Hyj4tk1GM",
"Byqw2JyGf",
"SkHatuolz",
"ryhcYB-bG",
"Sk0k9JkfG",
"rklEiy1Mz",
"rJ6aA85QG",
"HJ08-bCef"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1515816395809,
1516058455190,
1516055638897,
1517249404264,
1513187636001,
1513188450517,
1511913916642,
1512294804088,
1513187814392,
1513188135968,
1514987205323,
1512079705086
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper390/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper390/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper390/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper390/Authors"
],
[
"ICLR.cc/2018/Conference/Paper390/Authors"
],
[
"ICLR.cc/2018/Conference/Paper390/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper390/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper390/Authors"
],
[
"ICLR.cc/2018/Conference/Paper390/Authors"
],
[
"ICLR.cc/2018/Conference/Paper390/Authors"
],
[
"ICLR.cc/2018/Conference/Paper390/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for your detailed reply. I still think the paper could be much improved with more extensive experiments and better applications. However, I agree that the problem setting is interesting and novel, the method is compelling, and the experiments provide sufficient evidence that the method actually works. Therefore I would not mind seeing this paper accepted into ICLR, and, upon reflection, I think this paper is hovering around the acceptance threshold.\\n\\nI really like the real-world examples listed! I would be excited to see the proposed method applied to some of these problems. I think that would greatly improve the paper. (Although, I would still argue that several of the listed examples are cases where the data would naturally come in a paired format, and direct supervision could be applied.)\\n\\nIt's a good point that previous unsupervised, cross domain GANs were also evaluated on contrived datasets with exact matches available at training time. However, I'd argue that these papers were convincing mainly because of extensive qualitative results on datasets without exact matches. Those qualitative results were enough to demonstrate that unpaired translation is possible. The current paper aims to go further, and show that the proposed method does _better_ at unpaired translation than previous methods. Making a comparison like this is harder than simply showing that the method can work at all, and I think it calls for quantitative metrics on real unpaired problems (like the examples listed in the rebuttal).\\n\\nThere are a number of quantitative ways to evaluate performance on datasets without exact matches. First, user studies could be run on Mechanical Turk. 
Second, unconditional metrics could be evaluated, such as Inception score or moment matching (do the statistics of the output distribution match the statistics of the target domain?).\\n\\nHowever, I actually think it is fine to evaluate on ground truth matches as long as the training data is less contrived. For example, I would find it compelling if the system were tested on 3D point cloud matching, even if the training data contains exact matches, as long as there is no trivial way of finding these matches.\"}",
"{\"title\": \"New experiments are compelling, recommend accept\", \"comment\": \"I thank the authors for thoroughly responding to my concerns. The 3D alignment experiment looks great, and indeed I did miss the comment about the cell bio experiment. That experiment is also very compelling.\\n\\nI think with these two experiments added to the revision, along with all the other improvements, the paper is now much stronger and should be accepted!\"}",
"{\"title\": \"Additional experiment requested by Reviewer\", \"comment\": \"We are deeply thankful to AnonReviewer2 for holding an open discussion and for acknowledging the significance of the proposed problem setting, the work\\u2019s novelty, and the quality of the experiments.\\nWe are also happy that AnonReviewer2 found the list of possible applications, provided in reply to the challenge posted in the review, to be exciting. We therefore gladly accept the new challenge that was set, to demonstrate the success of our method on one of the proposed applications in the list.\\nSince the reviewer explicitly requested 3D point cloud matching, we have evaluated our method on this task. It should be noted that our method was never tested before in low-D settings, so this experiment is of particular interest.\\nSpecifically, we ran the experiment using the Bunny benchmark, exactly as is shown in \\u201cDiscriminative optimization: theory and applications to point cloud registration\\u201d, CVPR\\u201917 available as an extended version at https://arxiv.org/pdf/1707.04318.pdf, Sec. 6.2.3 . In this benchmark, the object is rotated by a random degree, and we tested the success rate of our model in achieving alignment for various ranges of rotation angles. \\nFor both CycleGAN and our method, the following architecture was used. D is a fully connected network with 2 hidden layers, each of 2048 hidden units, followed by BatchNorm and with Leaky ReLU activations. The mapping function is a linear affine matrix of size 3 * 3 with a bias term. Since in this problem, the transformation is restricted to be a rotation matrix, in both methods we added a loss term that encourages orthonormality of the weights of the mapper. 
Namely, ||WW^T-I||, where W are the weights of our mapping function.\\nThe table below depicts the success rate for the two methods, for each rotation angle bin, where success is defined in this benchmark as achieving an RMSE alignment accuracy of 0.05.\\nRotation angle | CycleGAN | Ours\\n============================\\n0-30 0.12000 1.00000 \\n30-60 0.12500 1.00000 \\n60-90 0.11538 0.88462 \\n90-120 0.07895 0.78947 \\n120-150 0.05882 0.64706 \\n150-180 0.10000 0.76667\\n \\nCompared to the results reported in Fig. 3 of https://arxiv.org/pdf/1707.04318.pdf, middle column, our results seem to significantly outperform the methods presented there at large angles. Therefore, the proposed method outperforms all baselines and, once again, proves to be effective as well as broadly applicable.\\nP.S. It seems that the comment we posted above, which was titled \\u201cA real-world application of our method in cell biology\\u201d (https://openreview.net/forum?id=BkN_r2lR-&noteId=rJ6aA85QG), went unnoticed. In a way, it already addressed the new challenge by presenting quantitative results on a real-world dataset for which there are no underlying ground truth matches.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper builds on top of Cycle GAN ideas where the main idea is to jointly optimize the domain-level translation function with an instance-level matching objective. Initially the paper received two negative reviews (4,5) and a positive (7). After the rebuttal and several back and forth between the first reviewer and the authors, the reviewer was finally swayed by the new experiments. While not officially changing the score, the reviewer recommended acceptance. The AC agrees that the paper is interesting and of value to the ICLR audience.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response\", \"comment\": \"We thank you for highlighting the novelty and successful motivation of the exemplar-based matching loss.\\n\\nWe think that the exact-analogy problem is very important. Please refer to our comment to AnonReviewer2 for an extensive discussion. \\n\\nFollowing your request, we have added AN-GAN supervised experiments for the edges2shoes and edges2handbags datasets. The results as for the Facades case are very good.\\n\\nThank you for highlighting the inconsistency in L_1 notation and the confusing reference. This has been fixed in the revised version.\"}",
"{\"title\": \"Response to the motivation and experimental comments\", \"comment\": \"Thank you for the detailed and constructive review. It highlighted motivation and experimental protocols that were further clarified in the revised version.\\n\\nThis paper is focused on exact analogy identification. A core question in the reviews was the motivation for the scenario of exact matching, and we were challenged by the reviewer to find real world applications for it. \\n\\nWe believe that finding exact matches is an important problem and occurs in multiple real-world problems. Exact or near-exact matching occurs in: \\n* 3D point cloud matching.\\n* Matching between different cameras panning the same scene in different trajectories (hard if they are in different modalities such as RGB and IR).\\n* Matching between the audio samples of two speakers uttering the same set of sentences.\\n* Two repeats of the same scripted activity (recipe, physics experiment, theatrical show)\\n* Two descriptions of the same news event in different styles (at the sentence level or at the story level).\\n* Matching parallel dictionary definitions and visual collections.\\n* Learning to play one racket sport after knowing to play another, building on the existing set of acquired movements and skills.\\n\\nIn all these cases, there are exact or near exact analogies that could play a major rule in forming unsupervised links between the domains.\\n \\nWe note that on a technical level, most numerical benchmarks in cross domain translation are already built using exact matches, and many of the unsupervised techniques could be already employing this information, even if implicitly. We show that our method is more effective at it than other methods.\\n\\nOn a more theoretical level, cognitive theories of analogy-based reasoning mostly discuss exact analogies from memory (see, e.g., G. Fauconnier, and M. Turner, \\u201cThe way we think\\u201d, 2002 ). 
For example, a new situation is dealt with by retrieving and adopting a motor action that was performed before. Here, the chances of finding such analogies are high since the source domain is heavily populated due to life experiences. \\n\\nRegarding the experiments: we believe that in some cases the requests are conflicting, since we cannot provide numerical results in places for which there are no analogies and no metrics for success. We provide a large body of experiments for exact matches and show that our method far surpasses everything else. We have compared with multiple baselines covering all the reasonable successful approaches for matching between domains. \\n\\nThe experiments regarding cases without exact matches are, admittedly, less extensive, added for completeness, and not the focus of this paper.\\n\\nThe reviewer wondered whether matching would work better with simpler methods. Our baselines test precisely this possibility and show that the simpler methods do not perform well. Specifically, edge-based matching is well covered by the more general VGG feature baseline (which also uses low-level maps, not just fc7). AN-GAN has easily outperformed this method. Even if it is possible to hand-craft a successful method for each task individually, these hand-crafted features are unlikely to generalize as well as the multi-scale VGG features or AN-GAN.\\n\\nWe added further clarification in the paper on the motivation for the second \\u201csupervised\\u201d step. In unsupervised semantic matching, larger neural architectures have been theoretically and practically shown to be less successful (due to overfitting and greater difficulty in recovering the correct transformation). The distribution matching loss function (e.g. CycleGAN) is adversarial and is therefore less stable and might not optimize the quantity we care about (e.g. L1/L2 loss). 
Once the datasets are aligned and analogies are identified, however, the cross domain translation becomes a standard supervised deep learning problem where large architectures do well and standard loss functions can be used. This is the reason for the two steps. It might be possible to include the increase in architecture into the alpha-iterations but it\\u2019s non-trivial and we didn\\u2019t find it necessary.\"}",
"{\"title\": \"AN-GAN: match-aware translation of images across domains, new ideas for combining image matching and GANs\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper presents an image-to-image cross domain translation framework based on generative adversarial networks. The contribution is the addition of an explicit exemplar constraint into the formulation which allows best matches from the other domain to be retrieved. The results show that the proposed method is superior for the task of exact correspondence identification and that AN-GAN rivals the performance of pix2pix with strong supervision.\", \"negatives\": \"1.) The task of exact correspondence identification seems contrived. It is not clear which real-world problems have this property of having both all inputs and all outputs in the dataset, with just the correspondence information between inputs and outputs missing.\\n2.) The supervised vs unsupervised experiment on Facades->Labels (Table 3) is only one scenario where applying a supervised method on top of AN-GAN\\u2019s matches is better than an unsupervised method. More transfer experiments of this kind would greatly benefit the paper and support the conclusion that \\u201cour self-supervised method performs similarly to the fully supervised method.\\u201d\", \"positives\": \"1.) The paper does a good job motivating the need for an explicit image matching term inside a GAN framework\\n2.) The paper shows promising results on applying a supervised method on top of AN-GAN\\u2019s matches.\", \"minor_comments\": \"1. The paper sometimes uses L1 and sometimes L_1, it should be L_1 in all cases.\\n2. DiscoGAN should have the Kim et al citation, right after the first time it is used. I had to look up DiscoGAN to realize it is just Kim et al.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting direction but unconvincing experiments and uncompelling applications\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper adds an interesting twist on top of recent unpaired image translation work. A domain-level translation function is jointly optimized with an instance-level matching objective. This yields the ability to extract corresponding image pairs out of two unpaired datasets, and also to potentially refine unpaired translation by subsequently training a paired translation function on the discovered matches. I think this is a promising direction, but the current paper has unconvincing results, and it\\u2019s not clear if the method is really solving an important problem yet.\\n\\nMy main criticism is with the experiments and results. The experiments focus almost entirely on the setting where there actually exist exact matches between the two image sets. Even the partial matching experiments in Section 4.1.2 only quantify performance on the images that have exact matches. This is a major limitation since the compelling use cases of the method are in scenarios where we do not have exact matches. It feels rather contrived to focus so much on the datasets with exact matches since, 1) these datasets actually come as paired data and, in actual practice, supervised translation can be run directly, 2) it\\u2019s hard to imagine datasets that have exact but unknown matches (I welcome the authors to put forward some such scenarios), 3) when exact matches exist, simpler methods may be sufficient, such as matching edges. There is no comparison to any such simple baselines.\\n\\nI think finding analogies that are not exact matches is much more compelling. Quantifying performance in this case may be hard, and the current paper only offers a few qualitative results. I\\u2019d like to see far more results, and some attempt at a metric. 
One option would be to run user studies where humans judge the quality of the matches. The results shown in Figure 2 don\\u2019t convince me, not just because they are qualitative and few, but also because I\\u2019m not sure I even agree that the proposed method is producing better results: for example, the DiscoGAN results have some artifacts but capture the texture better in row 3.\\n\\nI was also not convinced by the supervised second step in Section 4.3. Given that the first step achieves 97% alignment accuracy, it\\u2019s no surprise that running an off-the-shelf supervised method on top of this will match the performance of running on 100% correct data. In other words, this section does not really add much new information beyond what we could already infer given that the first stage alignment was so successful.\\n\\nWhat I think would be really interesting is whether the method can improve performance on datasets that actually do not have ground truth exact matches. For example, the shoes and handbags dataset or, even better, domain adaptation datasets like sim to real.\\n\\nI\\u2019d like to see more discussion of why the second stage supervised problem is beneficial. Would it not be sufficient to iterate alpha and T iterations enough times until alpha is one-hot and T is simply training against a supervised objective (Equation 7)?\", \"minor_comments\": \"1. In the intro, it would be useful to have a clear definition of \\u201canalogy\\u201d for the present context.\\n2. Page 2: a link should be provided for the Putin example, as it is not actually in Zhu et al. 2017.\\n3. Page 3: \\u201cWeakly Supervised Mapping\\u201d \\u2014 I wouldn\\u2019t call this weakly supervised. Rather, I\\u2019d say it\\u2019s just another constraint / prior, similar to cycle-consistency, which was referred to under the \\u201cUnsupervised\\u201d section.\\n4. Page 4 and throughout: It\\u2019s hard to follow which variables are being optimized over when. For example, in Eqn. 
7, it would be clearer to write out the min over optimization variables.\\n5. Page 6: The Maps dataset was introduced in Isola et al. 2017, not Zhu et al. 2017.\\n6. Page 7: The following sentence is confusing and should be clarified: \\u201cThis shows that the distribution matching is able to map source images that are semantically similar in the target domain.\\u201d\\n7. Page 7: \\u201cThis shows that a good initialization is important for this task.\\u201d \\u2014 Isn\\u2019t this more than initialization? Rather, removing the distributional and cycle constraints changes the overall objective being optimized.\\n8. In Figure 2, are the outputs the matched training images, or are they outputs of the translation function?\\n9. Throughout the paper, some citations are missing enclosing parentheses.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your positive feedback on the theoretical and experimental merits of this paper.\\n\\nFollowing your feedback on the clarity of presentation of the method. we included a diagram (including example images) illustrating the algorithm. To help keep the length under control, we shortened the introduction and related work section as you suggested.\\n\\nWe further clarified the text of the experiments. Specifically the numbers in Tab 2 are the top-1 accuracy for both directions (A to B and B to A) when 0%, 10% and 25% of examples do not have matches in the other domain. If some details remain unclear, we would be glad to clarify them.\\n\\nWe hope that your positive opinion of the content of the paper with the improvement in clarity of presentation will merit an acceptance.\"}",
"{\"title\": \"Response to the rest of the comments\", \"comment\": \"We thank the reviewer for the extensive style and reference comments. They have been fixed in the revised version:\\n1. A definition of \\u201canalogy\\u201d for the present context added to intro.\\n2. Putin example removed for need of space.\\n3. \\u201cWeakly Supervised Mapping\\u201d previous work section removed and references merged for need of space.\\n4. Optimization variables have been explicitly added to equations.\\n5. Maps dataset citation was changed to Isola et al. 2017\\n6. Removed confusing comment: \\u201cThis shows that the distribution matching is able to map source images that are semantically similar in the target domain.\\u201d\\n7. \\u201cThis shows that a good initialization is important for this task.\\u201d: one way of looking at it, is that the exemplar loss optimizes the matching problem that we care about but is a hard optimization task. The two other losses are auxiliary losses that help optimization converge. Clarification added in text.\\n8. The results shown for inexact matching are as follows: For alpha iterations and ANGAN we show the matches recovered by our methods, The DiscoGAN results are the outputs of the translation function.\\n9. Parentheses added to all citations.\\n\\nWe hope that this has convinced the reviewer of the importance of this work and are keen to answer any further questions.\"}",
"{\"title\": \"A real-world application of our method in cell biology\", \"comment\": \"Two reviewers were concerned that the problem of unsupervised simultaneous cross-domain alignment and mapping, while well suited to the existing ML benchmarks, may not have real-world applications. In our rebuttal, we responded to the challenge posed by AnonReviewer2 to present examples of applications with many important use cases.\\n\\nIn order to further demonstrate that the task has general scientific significance, we present results obtained using our method in the domain of single cell expression analysis. This field has emerged recently, due to new technologies that enable the measurement of gene expression at the level of individual cells. This capability already led to the discovery of quite a few previously unknown cell types and holds the potential to revolutionize cell biology. However, there are many computational challenges since the data is given as sets of unordered measurements. Here, we show how to use our method to map between gene expression of cell samples from two individuals and find interpersonal matching cells.\\n\\nFrom the data of [1], we took the expressions of blood cells (PMBC) extracted for donors A and B (available online at https://support.10xgenomics.com/single-cell-gene-expression/datasets; we used the matrices of what is called \\u201cfiltered results\\u201d). These expressions are sparse matrices, denoting 3k and 7k cells in the two samples and expressions of around 32k genes. We randomly subsampled the 7k cells from donor B to 3k and reduced the dimensions of each sample from 32k to 100 via PCA. Then, we applied our method in order to align the expression of the two donors (find a transformation) and match between the cell samples in each. Needless to say, there is no supervision in the form of matching between the cells of the two donors and the order of the samples is arbitrary. 
However, we can expect such matches to exist.\", \"we_compare_three_methods\": \"The mean distance between a sample in set A and a sample in set B (identity transformation). \\nThe mean distance after applying a CycleGAN to compute the transformation from A to B (CG for CycleGAN).\\nThe mean distance after applying our complete method.\\n\\nThe mean distance with the identity mapping is 3.09, CG obtains 2.67, and our method 1.18. The histograms of the distances are shown in the anonymous url:\", \"https\": \"//imgur.com/xP3MVmq\\n\\nWe see a great potential in further applying our method in biology with applications ranging from interspecies biological network alignment [2] to drug discovery [3], i.e. aligning expression signatures of molecules to that of diseases.\\n \\n[1] Zheng et al, \\u201cMassively parallel digital transcriptional profiling of single cells\\u201d. Nature Communications, 2017.\\n\\n[2] Singh, Rohit, Jinbo Xu, and Bonnie Berger. \\\"Global alignment of multiple protein interaction networks with application to functional orthology detection.\\\" Proceedings of the National Academy of Sciences 105.35 (2008): 12763-12768.\\n\\n[3] Gottlieb, et al. \\\"PREDICT: a method for inferring novel drug indications with application to personalized medicine.\\\" Molecular systems biology 7.1 (2011): 496.\"}",
"{\"title\": \"The approach is interesting but the paper lacks clarity of presentation\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper presents a method for finding related images (analogies) from different domains based on matching-by-synthesis. The general idea is interesting and the results show improvements over previous approaches, such as CycleGAN (with different initializations, pre-learned or not). The algorithm is tested on three datasets.\\n\\nWhile the approach has some strong positive points, such as good experiments and theoretical insights (the idea to match by synthesis and the proposed loss which is novel, and combines the proposed concepts), the paper lacks clarity and sufficient details.\\n\\nInstead of the longer intro and related work discussion, I would prefer to see a Figure with the architecture and more illustrative examples to show that the insights are reflected in the experiments. Also, the matching part, which is discussed at the theoretical level, could be better explained and presented at a more visual level. It is hard to understand sufficiently well what the formalism means without more insight.\\n\\nAlso, the experiments need more details. For example, it is not clear what the numbers in Table 2 mean.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
H1cWzoxA- | Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling | [
"Tao Shen",
"Tianyi Zhou",
"Guodong Long",
"Jing Jiang",
"Chengqi Zhang"
] | Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows better efficiency-memory trade-off than existing RNN/CNN/SAN. | [
"deep learning",
"attention mechanism",
"sequence modeling",
"natural language processing",
"sentence embedding"
] | Accept (Poster) | https://openreview.net/pdf?id=H1cWzoxA- | https://openreview.net/forum?id=H1cWzoxA- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S1x_52JZx4",
"BJedSXH-y4",
"SJl6jn4-yN",
"S1Y3_lTZM",
"BJjyKg-mM",
"rkcETx9lf",
"ryi_ebTWM",
"SJz6VRFlG",
"B1GcmkpBf",
"SyboUpHzz",
"rk9hKgTZz",
"r1Fr-WT-f",
"rkzLzZpbM",
"ryOYfeaef"
],
"note_type": [
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1544776847798,
1543750463766,
1543748772937,
1513060529067,
1514371298611,
1511816497593,
1513062514924,
1511806138108,
1517249417668,
1513637529523,
1513060786007,
1513062720964,
1513062985913,
1512010368111
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"~shen_si_zhe1"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"ICLR.cc/2018/Conference/Paper366/AnonReviewer4"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"ICLR.cc/2018/Conference/Paper366/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"ICLR.cc/2018/Conference/Paper366/Authors"
],
[
"ICLR.cc/2018/Conference/Paper366/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"\\u4e00\\u4e9b\\u95ee\\u9898\", \"comment\": \"\\u60a8\\u597d\\uff0c\\u6211\\u60f3\\u8bf7\\u6559\\u4e00\\u4e0b\\u3002\\u60a8\\u7684\\u6587\\u7ae0\\u4e2d\\u7ecf\\u5e38\\u63d0\\u5230GPU Memory \\u8fd9\\u4e00\\u9879\\u6307\\u6807\\uff0c\\u4f46\\u662f\\u6211\\u5f88\\u7591\\u60d1\\uff0c\\u5982\\u4f55\\u8ba1\\u7b97\\u6216\\u662f\\u901a\\u8fc7\\u7f16\\u5199\\u4ee3\\u7801\\u83b7\\u5f97\\u8fd9\\u4e00\\u6570\\u503c\\u3002\\u8bf7\\u95ee\\uff0c\\u60a8\\u662f\\u5982\\u4f55\\u505a\\u5230\\u7684\\uff1f\\n\\u8c22\\u8c22\\u60a8\\u3002\"}",
"{\"title\": \"Reply\", \"comment\": \"1. Try to tune the dropout probability of the neural network because CR is a small-scale dataset, and you can also tune the block length for better performance when implemented on a small dataset.\\n2. As the similar issue occurring in my github repo https://github.com/taoshen58/BiBloSA/issues/2 you can refer to that for the solutions.\"}",
"{\"title\": \"I couldn't get the same result as the paper...\", \"comment\": \"This paper introduces bi-directional block self-attention model (Bi-BioSAN) as a general-purpose encoder for sequence modeling tasks in NLP.\\n\\nFor example ,when I use the cr dataset,\\n\\n\\\"python sc_main.py --network_type exp_context_fusion --context_fusion_method wblock --model_dir_suffix training --dataset_type cr --gpu 0 \\\"\\n\\nthe result is not the 84.48 as the paper,I could only get 84.30 after several times.\\n I need your help!\\nThank you !\"}",
"{\"title\": \"Thanks for your strong support! Extending our two-level self-attention to multi-level is worth studying for long documents.\", \"comment\": \"Thank you for your strong support to our work! We will carefully fix the typos you pointed out.\\n\\n- Q1. I am curious how the story would look if one tried to push beyond two levels...? For example, how effective might a further inter-sentence attention level be for obtaining representations for long documents? \\n\\nWe have different answers to this question for sequences with different lengths.\\n\\nFor context fusion or embedding of single sentences (which is the main focus of this paper), a two-level self-attention is usually sufficient to reduce the memory consumption and meanwhile to inherit most power of original SAN in modeling contextual dependencies. Compared to multi-level attention, it preserves the local dependencies in longer subsequence and directly controls the memory utility rate, by using less parameters and computations than multi-level one. \\n\\nFor the context fusion of a document or a passage, which already has a multi-level structure (document-passages-sentences-phrases), it is worth considering to use multi-level self-attention to model the contextual relationship when the memory consumption needs to be small. Recently, self-attention has been applied to long text as a popular context fusion strategy in machine comprehension task [1,2]. In this task, the original self-attention requires lots of memory, and cannot be solely applied due to the difficulty of context fusion for a long passage/document. It is more practical to use LSTM or GRU as context fusion layers and use self-attention as a complementary module capturing the distance-irrelevant dependency. But the recurrent structure of LSTM/GRU leads to inefficiency in computation. Therefore, multi-level self-attention could provide a both memory and time efficient solution. 
For example, we can design a three-level self-attention structure, which consists of intra-block intra-sentence, inter-block intra-sentence and inter-sentence self-attentions, to produce context-aware representations of tokens from a passage. Such a model can overcome the weaknesses of both RNN/CNN-based SANs (only used as a complementary module to context fusion layers) and the RNN/CNN-free SANs (with explosion of memory consumption when text length grows).\\n\\n\\n\\nReferences\\n[1] Hu, Minghao, Yuxing Peng, and Xipeng Qiu. \\\"Reinforced mnemonic reader for machine comprehension.\\\" CoRR, abs/1705.02798 (2017).\\n[2] Huang, Hsin-Yuan, et al. \\\"FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension.\\\" arXiv preprint arXiv:1711.07341 (2017).\"}",
"{\"title\": \"Summary of Revision-V2\", \"comment\": \"Dear all reviewers, we upload a revision of this paper that differs from the previous one in that\\n1) As suggested by AnonReviewer3, we implemented the Hierarchical CNN (called Hrchy-CNN in the paper) as a baseline, and we then applied this model to SNLI and SICK datasets, which showed that the proposed model, Bi-BloSAN, still outperforms the Hierarchical CNN by a large margin; \\n2) We fixed some typos.\"}",
"{\"title\": \"Strong support for more efficient attention\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This high-quality paper tackles the quadratic dependency of memory on sequence length in attention-based models, and presents strong empirical results across multiple evaluation tasks. The approach is basically to apply self-attention at two levels, such that each level only has a small, fixed number of items, thereby limiting the memory requirement while having negligible impact on speed. It captures local information into so-called blocks using self-attention, and then applies a second level of self-attention over the blocks themselves.\\n\\nThe paper is well organized and clearly written, modulo minor language mistakes that should be easy to fix with further proof-reading. The contextualization of the method relative to CNNs/RNNs/Transformers is good, and the beneficial trade-offs between memory, runtime and accuracy are thoroughly investigated, and they're compelling.\\n\\nI am curious how the story would look if one tried to push beyond two levels...? For example, how effective might a further inter-sentence attention level be for obtaining representations for long documents?\", \"minor_points\": [\"Text between Eq 4 & 5: W^{(1)} appears twice; one instance should probably be W^{(2)}.\", \"Multiple locations, e.g. S4.1: for NLI, the word is *premise*, not *promise*.\", \"Missing word in first sentence of S4.1: ... reason __ the ...\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Memory may not be reduced effectively if using linguistic segments; removing intra-block self-attention decreases the performance.\", \"comment\": \"==Following above==\\n- Q4. The block splitting (as detailed in appendix) is rather arbitrary in that it potentially divides coherent language segments apart. This is unnatural, e.g., compared with alternatives such as using linguistic segments as blocks.\\n\\nHere are two reasons for not using linguistic segments as blocks in our model. Firstly, the property of significantly reducing memory cannot be guaranteed if using linguistic segments, because either too long or too short segments will lead to expensive memory consumption, and we cannot easily control the length of linguistic segments provided by other tools. For example, in Eq.(19), either a large or a small block length r is likely to result in large memory. Secondly, the process of obtaining linguistic segments potentially increases computation/memory cost, introduces overhead and requires more complex implementation. In addition, although we do not use linguistic segments for block splitting, our model can still capture the dependencies between tokens from different blocks by using the block-level context fusion and feature fusion gate developed in this paper. \\n\\n\\n- Q5. The main originality of the paper is the block style. However, the paper doesn\u2019t analyze how and why the block brings improvement. \\n\\nThe block or two-layer self-attention substantially reduces the memory and computational costs required by previous self-attention mechanisms, which are proportional to the square of the sequence length. Meanwhile, it achieves competitive or better accuracy than RNNs/CNNs. We give a formal explanation of how this block idea reduces memory in Appendix A.\\n\\n\\n- Q6. 
If we remove intra-block self-attention (but only keep token-level self-attention), whether the performance will be significantly worse?\\n\\nCompared to test accuracy 85.7% of Bi-BloSAN on SNLI, the accuracy will be decreased to 85.2% if we remove the intra-block attention (keep block-level attention), whereas the accuracy will be decreased to 85.3% if we remove inter-block self-attention (keep token-level self-attention in blocks). Moreover, if we only use token-level self-attention, the model will be identical to the directional self-attention [2]. You can refer to the ablation study at the end of Section 4.1 for more details.\\n\\n\\n\\nReferences\\n[1] Vaswani, Ashish, et al. \\\"Attention is all you need. CoRR abs/1706.03762.\\\" (2017).\\n[2] Shen, Tao, et al. \\\"Disan: Directional self-attention network for rnn/cnn-free language understanding.\\\" arXiv preprint arXiv:1709.04696 (2017).\\n[3] Srivastava, Rupesh Kumar, Klaus Greff, and J\\u00fcrgen Schmidhuber. \\\"Highway networks.\\\" arXiv preprint arXiv:1505.00387 (2015).\\n[4] Nie, Yixin, and Mohit Bansal. \\\"Shortcut-stacked sentence encoders for multi-domain inference.\\\" arXiv preprint arXiv:1708.02312 (2017).\\n[5] Jihun Choi, Kang Min Yoo and Sang-goo Lee. \\\"Learning to compose task-specific tree structures.\\\" arXiv preprint arXiv:1707.02786 (2017). \\n[6]Kim, Yoon. \\\"Convolutional neural networks for sentence classification.\\\" arXiv preprint arXiv:1408.5882 (2014).\\n[7] Kaiser, \\u0141ukasz, and Samy Bengio. \\\"Can Active Memory Replace Attention?.\\\" Advances in Neural Information Processing Systems. 2016.\\n[8] Kalchbrenner, Nal, et al. \\\"Neural machine translation in linear time.\\\" arXiv preprint arXiv:1610.10099 (2016).\\n[9] Gehring, Jonas, et al. \\\"Convolutional Sequence to Sequence Learning.\\\" arXiv preprint arXiv:1705.03122 (2017).\"}",
"{\"title\": \"The methodology of the paper is incremental; the evaluation is comprehensive and in general supports the claims.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Pros:\\nThe paper proposes a \\u201cbi-directional block self-attention network (Bi-BloSAN)\\u201d for sequence encoding, which inherits the advantages of the multi-head (Vaswani et al., 2017) and DiSAN (Shen et al., 2017) networks but is claimed to be more memory-efficient. The paper is written clearly and is easy to follow. The source code is released for reproducibility. The main originality is using block (or hierarchical) structures; i.e., the proposed models split the entire sequence into blocks, apply an intra-block SAN to each block for modeling local context, and then apply an inter-block SAN to the outputs of all blocks to capture long-range dependency. The proposed model was tested on nine benchmarks and achieves a good efficiency-memory trade-off.\", \"cons\": [\"Methodology of the paper is very incremental compared with previous models.\", \"Many of the baselines listed in the paper are not competitive; e.g., for SNLI, state-of-the-art results are not included in the paper.\", \"The paper argues advantages of the proposed models over CNN by assuming the latter only captures local dependency, which, however, is not supported by discussion on or comparison with hierarchical CNN.\", \"The block splitting (as detailed in appendix) is rather arbitrary in that it potentially divides coherent language segments apart. This is unnatural, e.g., compared with alternatives such as using linguistic segments as blocks.\", \"The main originality of the paper is the block style. 
However, the paper doesn\u2019t analyze how and why the block brings improvement.\", \"-If we remove intra-block self-attention (but only keep token-level self-attention), will the performance be significantly worse?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The proposed Bi-BloSAN is a two-level block SAN, which has both parallelization efficiency and memory efficiency. The study is thoroughly conducted and well presented.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"More experiments show hierarchical CNN does not perform well on SNLI\", \"comment\": \"To test the performance of hierarchical CNN for context fusion, we implemented it on the SNLI dataset. In particular, we used 3-layer 300D CNNs with kernel length 5 (i.e., using n-grams with n=5). By following [1], we also applied \\\"Gated Linear Units (GLU)\\\" [2] and residual connection [3] to the hierarchical CNN. We tuned the keep probability of dropout between 0.65 and 0.85 with step-size 0.05. The code of this hierarchical CNN can be found at https://github.com/code4review/BiBloSA/blob/master/context_fusion/hierarchical_cnn.py\\n\\nThis model has 3.4M parameters. It spends 343s per training epoch and 2.9s for inference on the dev set. Its test accuracy is 83.92% (with dev accuracy 84.15% and train accuracy 91.28%), which slightly outperforms the CNNs with multi-window [4] shown in our paper, but is still worse than other baselines and Bi-BloSAN. We will add these results to the revision.\\n\\n[1] Gehring, Jonas, et al. \\\"Convolutional Sequence to Sequence Learning.\\\" arXiv preprint arXiv:1705.03122 (2017).\\n[2] Dauphin, Yann N., et al. \\\"Language modeling with gated convolutional networks.\\\" arXiv preprint arXiv:1612.08083 (2016).\\n[3] He, Kaiming, et al. \\\"Deep residual learning for image recognition.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\\n[4] Kim, Yoon. \\\"Convolutional neural networks for sentence classification.\\\" arXiv preprint arXiv:1408.5882 (2014).\"}",
"{\"title\": \"Novel context fusion is developed for the two-level self-attention; Bi-BloSAN still outperforms Bi-LSTM when having a similar number of parameters.\", \"comment\": \"Thanks for your comments!\\n\\n- Q1. First, there is not much innovation in the model architecture. The idea of the Bi-BloSAN model is simply to split the sentence into blocks and compute self-attention for each of them, and then use the same mechanisms as a pooling operation followed by a fusion level. I think this counts more as careful engineering of the SAN model rather than a main innovation.\\n\\nYes, the idea of using block or two-level attention is simple. In fact, it is similar to the idea behind almost all the hierarchical models. However, it has never been studied on self-attention based models, especially on attention-only models (as far as we know, Transformer [1] and DiSAN [2] are the only two published attention-only models), for context fusion. Moreover, it solves a critical problem of previous self-attention mechanisms, i.e., expensive memory consumption, which was a burden when applying attention to long sequences and an inevitable weakness compared to popular RNN models. Hence, it is a simple idea, which leads to a simple model, but effectively solves an important problem.\\n\\nIn addition, given this idea, it is non-trivial to design a neural net architecture for context fusion; we still need to figure out: 1) How to split the sequence so the memory can be effectively reduced? 2) How to capture the dependency between two elements from different blocks? 3) How to produce a contextual-aware representation for each element on each level? 4) How to combine the output of different levels so the information from the lower level does not fade out? 
For example, on top of Figure 3, we duplicate the block features e_i to each element as its high-level representation, use skip (highway [3]) connections to obtain its lower-level representations x_i and h_i, and then design a fusion gate to combine the three representations. This design assigns each element both high-level and low-level representations and combines them on top of the model to produce a contextual-aware representation per input element. Without it, the two-level attention can only give us e_i, which cannot explicitly model the dependency between elements from different blocks, and cannot be used for context fusion. This method has not been used in the construction of attention-based models because multi-level self-attention had not been studied before.\\n\\n\\n- Q2. Second, the model introduces many more parameters. In the experiments, it can easily use 2 times as many parameters as the commonly used encoders. What if we use the same amount of parameters for Bi-LSTM encoders? Will the gap between the new model and the commonly used ones be smaller?\\n\\nAs suggested by you, we studied two cases in which Bi-LSTM and Bi-BloSAN have similar numbers of parameters. The gap does not change in either case. We will add these new results to our revision. \\n\\n1) We increase the number of hidden units in Bi-LSTM encoders from 600 to 800. This increases the number of parameters from 2.9M to 4.8M, which is more than the 4.1M of Bi-BloSAN. We implement this 800D Bi-LSTM encoder on the SNLI dataset, which is the largest benchmark dataset used in this paper. After tuning the hyperparameters (e.g., the dropout keep probability is increased from 0.65 to 0.80 with step 0.05 in case of overfitting), the best test accuracy is 84.95% (with dev accuracy of 85.67%).\\n\\n2) We decrease the number of hidden units in Bi-BloSAN from 600 to 480. This reduces the number of parameters from 4.1M to 2.8M, which is similar to that of the commonly used encoders. 
Interestingly, without tuning the keep probability of dropout, the test accuracy of this 480D Bi-BloSAN is 85.66% (with dev accuracy 86.08% and train accuracy 91.68%). \\n\\nAdditionally, a recent NLP paper [4] shows that increasing the dimension of an RNN encoder from 128D to 2048D does not result in substantial improvement of the performance (from 21.50 to 21.86 BLEU score on newstest2013 for machine translation). This is consistent with the results above. \\n\\n\\n\\nReferences\\n[1] Vaswani, Ashish, et al. \\\"Attention is all you need. CoRR abs/1706.03762.\\\" (2017).\\n[2] Shen, Tao, et al. \\\"Disan: Directional self-attention network for rnn/cnn-free language understanding.\\\" arXiv preprint arXiv:1709.04696 (2017).\\n[3] Srivastava, Rupesh Kumar, Klaus Greff, and J\\u00fcrgen Schmidhuber. \\\"Highway networks.\\\" arXiv preprint arXiv:1505.00387 (2015).\\n[4] Britz, Denny, et al. \\\"Massive exploration of neural machine translation architectures.\\\" arXiv preprint arXiv:1703.03906 (2017).\"}",
"{\"title\": \"Novel context fusion is developed for the two-level self-attention; Bi-BloSAN achieves the best accuracy among all sentence-encoding models on SNLI; Hierarchical CNN is costly for long-range dependency.\", \"comment\": \"Thank you for your detailed comments! We discuss the Cons you pointed out one by one as follows.\\n\\n- Q1. Methodology of the paper is very incremental compared with previous models.\\n\\nYes, the idea of using block or two-level attention is simple. In fact, it is similar to the idea behind almost all the hierarchical models. However, it has never been studied on self-attention based models, especially on attention-only models (as far as we know, Transformer [1] and DiSAN [2] are the only two published attention-only models), for context fusion. Moreover, it solves a critical problem of previous self-attention mechanisms, i.e., expensive memory consumption, which was a burden when applying attention to long sequences and an inevitable weakness compared to popular RNN models. Hence, it is a simple idea, which leads to a simple model, but effectively solves an important problem.\\n\\nIn addition, given this idea, it is non-trivial to design a neural net architecture for context fusion; we still need to figure out: 1) How to split the sequence so the memory can be effectively reduced? 2) How to capture the dependency between two elements from different blocks? 3) How to produce a contextual-aware representation for each element on each level? 4) How to combine the output of different levels so the information from the lower level does not fade out? For example, on top of Figure 3, we duplicate the block features e_i to each element as its high-level representation, use skip (highway [3]) connections to obtain its lower-level representations x_i and h_i, and then design a fusion gate to combine the three representations. 
This design assigns each element both high-level and low-level representations and combines them on top of the model to produce a contextual-aware representation per input element. Without it, the two-level attention can only give us e_i, which cannot explicitly model the dependency between elements from different blocks, and cannot be used for context fusion. This method has not been used in the construction of attention-based models because multi-level self-attention had not been studied before.\\n\\n\\n- Q2. Many of the baselines listed in the paper are not competitive; e.g., for SNLI, state-of-the-art results are not included in the paper. \\n\\nIn the experiment on SNLI, Bi-BloSAN is only used to produce sentence encoding. For a fair comparison, we only compare it with the sentence-encoding based models listed separately on the leaderboard of SNLI. Up to the ICLR submission deadline, Bi-BloSAN achieves the best test accuracy among all of them. \\n\\nAfter the ICLR submission deadline, the leaderboard has been updated with several new methods. We copy the results of the new methods in the following.\\nThe Proposed Model) 480D Bi-BloSAN\\t2.8M\\t85.7%\\n1) 300D Residual stacked encoders[4]\\t9.7M\\t85.7%\\n2) 600D Gumbel TreeLSTM encoders[5]\\t10.0M\\t86.0%\\n3) 600D Residual stacked encoders[4]\\t29.0M\\t86.0%\\nThese results show that compared to the newly updated methods, Bi-BloSAN uses significantly fewer parameters but achieves competitive test accuracy.\\n\\n\\n- Q3. The paper argues advantages of the proposed models over CNN by assuming the latter only captures local dependency, which, however, is not supported by discussion on or comparison with hierarchical CNN.\\n\\nThe discussion about CNN in the current version mainly focuses on the single-layer CNN with multi-window [6], which is widely used in the NLP community, and does not mention too much about recent studies on hierarchical CNNs. 
The hierarchical CNNs in NLP, such as Extended Neural GPU [7], ByteNet [8], and ConvS2S [9], are able to model relatively long-range dependency by stacking CNNs, which can increase the number of input elements represented in a state. Nonetheless, as mentioned in [1], the number of operations (i.e. CNNs) required to relate signals from two arbitrary input positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions. However, a self-attention based method only requires a constant number of operations, no matter how far apart two elements are. We will add the discussion on hierarchical CNNs in the revision.\"}",
"{\"title\": \"Summary of Revision-V1\", \"comment\": \"Dear all reviewers, we upload a revision of this paper that differs from the previous one in that\\n1) We found the multi-head attention is very sensitive to the keep probability of dropout due to \\\"Attention Dropout\\\", so we tuned it in the interval [0.70:0.05:0.90], resulting in test accuracy on SNLI increasing from 83.3% to 84.2%.\\n2) As suggested by AnonReviewer2, we decreased the number of hidden units of Bi-BloSAN from 600 to 480 on SNLI, which leads to the number of parameters dropping from 4.1M to 2.8M. The test accuracy of this 480D Bi-BloSAN is 85.66% with dev accuracy 86.08% and train accuracy 91.68%.\\n3) As suggested by AnonReviewer3, we added the discussion on hierarchical CNNs to the introduction.\\n4) We corrected typos and mistakes partly pointed out by AnonReviewer4.\"}",
"{\"title\": \"solid experiments, but the model is not very exciting\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper introduces the bi-directional block self-attention model (Bi-BloSAN) as a general-purpose encoder for sequence modeling tasks in NLP. The experiments include tasks like natural language inference, reading comprehension (SQuAD), semantic relatedness and sentence classification. The new model shows decent performance when compared with Bi-LSTM, CNN and other baselines while running at a reasonably fast speed.\\n\\nThe advantage of this model is that we can use little memory (as in RNNs) and enjoy parallelizable computation (as in SANs), and achieve similar (or better) performance.\\n\\nWhile I do appreciate the solid experiment section, I don't think the model itself is a sufficient contribution for a publication at ICLR. First, there is not much innovation in the model architecture. The idea of the Bi-BloSAN model is simply to split the sentence into blocks and compute self-attention for each of them, and then use the same mechanisms as a pooling operation followed by a fusion level. I think this counts more as careful engineering of the SAN model rather than a main innovation. Second, the model introduces many more parameters. In the experiments, it can easily use 2 times as many parameters as the commonly used encoders. What if we use the same amount of parameters for Bi-LSTM encoders? Will the gap between the new model and the commonly used ones be smaller?\\n\\n====\\n\\nI appreciate the answers the authors added and I change the score to 6.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
S1cZsf-RW | WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling | [
"Hao Zhang",
"Bo Chen",
"Dandan Guo",
"Mingyuan Zhou"
] | To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet allocation, which infers posterior samples via a hybrid of stochastic-gradient MCMC and autoencoding variational Bayes. The generative network of WHAI has a hierarchy of gamma distributions, while the inference network of WHAI is a Weibull upward-downward variational autoencoder, which integrates a deterministic-upward deep neural network, and a stochastic-downward deep generative model based on a hierarchy of Weibull distributions. The Weibull distribution can be used to well approximate a gamma distribution with an analytic Kullback-Leibler divergence, and has a simple reparameterization via the uniform noise, which help efficiently compute the gradients of the evidence lower bound with respect to the parameters of the inference network. The effectiveness and efficiency of WHAI are illustrated with experiments on big corpora.
| [
"whai",
"weibull hybrid",
"inference",
"inference network",
"deep topic",
"big corpora",
"hierarchy",
"scalable",
"fast"
] | Accept (Poster) | https://openreview.net/pdf?id=S1cZsf-RW | https://openreview.net/forum?id=S1cZsf-RW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S1HoJBilG",
"B1gG5N5ez",
"S1-EJRiuG",
"SyESlFoef",
"B1kjsaEMG",
"HJEgNJaHf",
"HJR836NGf",
"BJoIAaEGz"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment"
],
"note_created": [
1511899037154,
1511832071841,
1520324393097,
1511915580338,
1513573270993,
1517249516008,
1513573461760,
1513573971426
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper916/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper916/AnonReviewer2"
],
[
"~Christian_A_Naesseth1"
],
[
"ICLR.cc/2018/Conference/Paper916/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper916/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper916/Authors"
],
[
"ICLR.cc/2018/Conference/Paper916/Authors"
]
],
"structured_content_str": [
"{\"title\": \"a deep Poisson model\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper presents a deep Poisson model where the last layer is the vector of word counts generated by a vector Poisson. This is parameterized by a matrix-vector product, and the vector in this parameterization is itself generated by a vector Gamma with a matrix-vector parameterization. From there the vectors are all Gammas with matrix-vector parameterizations in a typical deep setup.\\n\\nWhile the model is reasonable, the purpose was not clear to me. If only the last layer generates a document, then what use is the deep structure? For example, learning hierarchical topics as in Figure 4 doesn't seem so useful here since only the last layer matters. Also, since no input is being mapped to an output, what does going deeper mean? It doesn't look like any linear mapping is being learned from the input to output spaces, so ultimately the document itself is coming from a simple linear Poisson model just like LDA and other non-deep methods.\\n\\nThe experiments are otherwise thorough and convincing that quantitative performance is improved over previous attempts at the problem.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"official review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors develop a hybrid amortized variational inference MCMC inference\\nframework for deep latent Dirichlet allocation. Their model consists of a stack of\\n gamma factorization layers with a Poisson layer at the bottom. They amortize \\ninference at the observation level using a Weibull approximation. The structure \\nof the inference network mimics the MCMC sampler for this model. Finally they \\nuse MCMC to infer the parameters shared across data. A couple of questions:\\n\\n1) How effective are the MCMC steps at mixing? It looks like this approach helps a \\nbit with local optima?\\n\\n2) The gamma distribution can be reparameterized via its rejection sampler \\n\\n@InProceedings{pmlr-v54-naesseth17a,\\n title = \\t {{Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms}},\\n author = \\t {Christian Naesseth and Francisco Ruiz and Scott Linderman and David Blei},\\n booktitle = \\t {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},\\n pages = \\t {489--498},\\n year = \\t {2017}\\n}\\n\\nI think some of the motivation for the Weibull is weakened by this work. Maybe a \\ncomparison is in order?\\n\\n3) Analytic KL divergence can be good or bad. It depends on the correlation between \\nthe gradients of the stochastic KL divergence and the stochastic log-likelihood\\n\\n4) One of the original motivations for DLDA was that the augmentation scheme \\nremoved the need for most non-conjugate inference. However, this approach doesn't \\nuse that directly. Thus, it seems more similar to inference procedure in deep exponential \\nfamilies. Was the structure of the inference network proposed here crucial?\\n\\n5) How much like a Weibull do you expect the posterior to be? 
This seems unclear.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"regarding rsvi\", \"comment\": \"Very interesting comparison results between Weibull and Gamma for this model! In general I would expect this to be model and data specific: in some cases the posterior is better approximated by a Gamma, and in others Weibull.\\n\\nJust a small comment regarding RSVI, with B=1 the probability of accepting is always higher than 0.95. If you set B=4 it will be higher than 0.99, making the difference between proposal and target very small. For this B you might even achieve better performance by just omitting the extra score function term, which is most likely negligible when compared to the reparameterization term.\"}",
"{\"title\": \"WHAI: WEIBULL HYBRID AUTOENCODING INFERENCE FOR DEEP TOPIC MODELING\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": [\"The authors propose a hybrid Bayesian inference approach for deep topic models that integrates stochastic gradient MCMC for global parameters and Weibull-based multilayer variational autoencoders (VAEs) for local parameters. The decoding arm of the VAE consists of deep latent Dirichlet allocation, and an upward-downward structure for the encoder. Gamma distributions are approximated as Weibull distributions since the Kullback-Leibler divergence is known and samples can be efficiently drawn from a transformation of samples from a uniform distribution.\", \"The results in Table 1 are concerning for several reasons: i) the proposed approach underperforms DLDA-Gibbs and DLDA-TLASGR. ii) The authors point to the scalability of the mini-batch-based algorithms; however, although more expensive, DLDA-Gibbs is not prohibitive given that results for Wikipedia are provided. iii) The proposed approach is certainly faster at test time; however, it is not clear to me in which settings such speed (compared to Gibbs) would be needed, given the unsupervised nature of the task at hand. iv) It is not clear to me why there is no test-time difference between WAI and WHAI, considering that in the latter, global parameters are sampled via stochastic-gradient MCMC. 
One possible explanation being that during test time, the approach does not use samples from W but rather a summary of them, say posterior means, in which case, it defeats the purpose of sampling from global parameters, which may explain why WAI and WHAI perform about the same in the 3 datasets considered.\", \"\\\\Phi is in a subset of R_+, in fact, columns of \\\\Phi are in the P_0-dimensional simplex.\", \"\\\\Phi should have K_1 columns not K.\", \"The first paragraph in Page 5 is very confusing because h is introduced before explicitly connecting it to k and \\\\lambda. Also, if k = \\\\lambda, why introduce different notations?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A detailed response to Reviewer 1's concerns on the results in Table 1.\", \"comment\": \"We thank Reviewer 1 for his/her comments and suggestions. We have revised the paper accordingly, with the revised/added texts highlighted in blue. Below please find our response to Reviewer 1\\u2019s concerns on the results in Table 1.\", \"q1\": \"The proposed approach underperforms DLDA-Gibbs and DLDA-TLASGR.\", \"a\": \"As shown in Fig. 3, WHAI converges faster than WAI, although the final perplexity obtained by averaging over collected samples are similar. While they share the same inference for the neural-network parameters of the auto-encoder, WHAI uses TLASGR-MCMC while WAI uses SGD to update \\\\Phi^{(l)}. We have added Mandt, Hoffman & Blei (2017) to support the practice of using SGD to obtain the approximate posterior samples of W. At the test time, both WHAI and WAI use the same number of samples of the global parameters, and use the auto-encoder of the same structure to generate the latent representation of a test document under each global-parameter sample, which is why WHAI and WAI have the same test time.\", \"q2\": \"The authors point to the scalability of the mini-batch-based algorithms, however, although more expensive, DLDA-Gibbs, is not prohibitive given results for Wikipedia are provided.\", \"q3\": \"The proposed approach is certainly faster at test time, however, it is not clear to me in which settings such speed (compared to Gibbs) would be needed, provided the unsupervised nature of the task at hand.\", \"q4\": \"It is not clear to me why there is no test-time difference between WAI and WHAI, considering that in the latter, global parameters are sampled via stochastic-gradient MCMC. 
One possible explanation being that during test time, the approach does not use samples from W but rather a summary of them, say posterior means, in which case, it defeats the purpose of sampling from global parameters, which may explain why WAI and WHAI perform about the same in the 3 datasets considered.\", \"newly_added_reference\": \"S. Mandt, M. D. Hoffman, and D. M. Blei. Stochastic gradient descent as approximate Bayesian inference. arXiv:1704.04289, to appear in Journal of Machine Learning Research, 2017.\", \"our_answer_to_the_other_comments\": \"we have now clearly specified the simplex constraint on the columns of \\\\Phi and clearly defined the neural networks for k, \\\\lambda, and h.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper proposes a new approach for scalable training of deep topic models based on amortized inference for the local parameters and stochastic-gradient MCMC for the global ones. The key aspect of the method involves using Weibull distributions (instead of Gammas) to model the variational posteriors over the local parameters, enabling the use of the reparameterization trick. The resulting methods perform slightly worse than the Gibbs-sampling-based approaches but are much faster at test time. Amortized inference has already been applied to topic models, but the use of Weibull posteriors proposed here appears novel. However, there seems to be no clear advantage to using stochastic-gradient MCMC instead of vanilla SGD to infer the global parameters, so the value of this aspect of WHAI is unclear.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Clarifications for why it is desired to have a deep structure for topic modeling.\", \"comment\": \"We thank Reviewer 2 for his/her comments and questions. We have made revisions accordingly and highlighted our main changes in blue.\\n\\nIf we only use a single hidden layer, then \\{\\theta_{nk}\\}_{k}, the weights of the topics in document n, follow independent gamma distributions in the prior. By going deep, we are able to construct a much more expressive hierarchical prior distribution, whose marginal is designed to capture the correlations between different topics at multiple hidden layers. From the viewpoint of deep learning, our multilayer deep generative model provides a distributed representation of the data, with a higher layer capturing an increasingly more general concept. Empirically, our experiments consistently show that making a model deeper leads to improved performance. \\nIn Figure 4, without the deep structure, the inferred first-layer topics will have worse quality, and their relationships will become difficult to understand. \\n\\nOur deep model is a deep generative model that has multiple stochastic layers. It is trained in an unsupervised manner to learn how to transform the gamma random noises injected at multiple different hidden layers to generate the correlated topic weights at the first layer, which are further multiplied with the learned topics as the Poisson rates to generate high-dimensional count vectors under the Poisson distribution. Thus, even though the Poisson layer is the same between a shallow model and a deep one, the latter has a much more sophisticated mechanism to generate (correlated) topic weights at the first layer, and infers a network to understand the complex relationships between different topics at multiple different levels.\"}",
"{\"title\": \"Detailed response to Reviewer 3's questions, with newly added comparison to gamma + RSVI for hybrid autoencoding inference\", \"comment\": \"We thank Reviewer 3 for his/her feedback. We have made revisions accordingly, with the main changes highlighted in blue. Below please find our detailed response.\", \"q1\": \"How effective are the MCMC steps at mixing? It looks like this approach helps a bit with local optima?\", \"a\": \"We choose the Weibull distribution to approximate the gamma distributed conditional posterior shown in Equation 5 in the paper. With DLDA-Gibbs or DLDA-TLASGR, in general, the shape parameters in Equation 5 are found to be neither too close to zero nor too large, thus, as suggested by Fig. 1, we expect the Weibull to well approximate the gamma distributed conditional posteriors.\", \"q2\": \"The gamma distribution can be reparameterized via its rejection sampler called rejection sampling variational inference (RSVI) proposed in Naesseth et al. (2017). I think some of the motivation for the Weibull is weakened by this work. Maybe a comparison is in order?\", \"q3\": \"Analytic KL divergence can be good or bad. It depends on the correlation between the gradients of the stochastic KL divergence and the stochastic log-likelihood.\", \"q4\": \"One of the original motivations for DLDA was that the augmentation scheme removed the need for most non-conjugate inference. However, this approach doesn\\u2019t use that directly. Thus, it seems more similar to inference procedure in deep exponential families. Was the structure of the inference network proposed here crucial?\", \"q5\": \"How much like a Weibull do you expect the posterior to be? This seems unclear.\"}"
]
} |
BJjquybCW | The loss surface and expressivity of deep convolutional neural networks | [
"Quynh Nguyen",
"Matthias Hein"
] | We analyze the expressiveness and loss surface of practical deep convolutional
neural networks (CNNs) with shared weights and max pooling layers. We show
that such CNNs produce linearly independent features at a “wide” layer which
has more neurons than the number of training samples. This condition holds e.g.
for the VGG network. Furthermore, we provide for such wide CNNs necessary
and sufficient conditions for global minima with zero training error. For the case
where the wide layer is followed by a fully connected layer we show that almost
every critical point of the empirical loss is a global minimum with zero training
error. Our analysis suggests that both depth and width are very important in deep
learning. While depth brings more representational power and allows the network
to learn high level features, width smoothes the optimization landscape of the
loss function in the sense that a sufficiently wide network has a well-behaved loss
surface with almost no bad local minima. | [
"convolutional neural networks",
"loss surface",
"expressivity",
"critical point",
"global minima",
"linear separability"
] | Invite to Workshop Track | https://openreview.net/pdf?id=BJjquybCW | https://openreview.net/forum?id=BJjquybCW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HJ0nIZkfM",
"SJsVLlv-f",
"ByYsQbYGz",
"rkvS6-9gG",
"SyA2UxD-z",
"S136E0hZf",
"BkIW6fYxz",
"BkQsEkaBG",
"rJoC5PTQM"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"decision",
"official_comment"
],
"note_created": [
1513195189900,
1512666675853,
1513849761020,
1511820607119,
1512666806332,
1513051331965,
1511759101582,
1517249690905,
1515186899255
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper493/AnonReviewer4"
],
[
"ICLR.cc/2018/Conference/Paper493/Authors"
],
[
"ICLR.cc/2018/Conference/Paper493/Authors"
],
[
"ICLR.cc/2018/Conference/Paper493/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper493/Authors"
],
[
"ICLR.cc/2018/Conference/Paper493/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper493/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper493/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Interesting direction.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper presents an analysis of convolutional neural networks from the perspective of how the rank of the features is affected by the kinds of layers found in the most popular networks. Their analysis leads to the formulation of a certain theorem about the global minima with respect to parameters in the latter portion of the network.\\n\\nThe authors ask important questions, but I am not sure that they obtain important answers. On the plus side, I'm glad that people are trying to further our understanding our neural networks, and I think that their investigation is worthy of being published.\\n\\nThey present a collection of assumptions, lemmas, and theorems. They have no choice but to have assumptions, because they want to abstract away the \\\"data\\\" part of the analysis while still being able to use certain properties about the rank of the features at certain layers.\\n\\nMost of my doubts about this paper come from the feeling that equivalent results could be obtained with a more elegant argument about perturbation theory, instead of something like the proof of Lemma A1. That being said, it's easy to voice such concerns, and I'm willing to believe that there might not exist a simple way to derive the same results with an approach more along the line of \\\"whatever your data, pick whatever small epsilon, and you can always have the desired properties by perturbing your data by that small epsilon in a random direction\\\". Have the authors tried this ?\\n\\nI'm not sure if the authors were the first to present this approach of analyzing the effects of convolutions from a \\\"patch perspective\\\", but I think this is a clever approach. It simplifies the statement of some of their results. I also like the idea of factoring the argument along the concept of some critical \\\"wide layer\\\".\\n\\nGood review of the literature.\\n\\nI wished the paper was easier to read. Some of the concepts could have been illustrated to give the reader some way to visualize the intuitive notions. For example, maybe it would have been interesting to plot the rank of features a every layer for LeNet+MNIST ?\\n\\nAt the end of the day, if a friend asked me to summarize the paper, I would tell them :\\n\\n\\\"Features are basically full rank. Then they use a square loss and end up with an over-parametrized system, so they can achieve loss zero (i.e. global minimum) with a multitude of parameters values.\\\"\", \"nitpicking\": \"\\\"This paper is one of the first ones, which studies CNNs.\\\"\\nThis sentence is strange to read, but I can understand what the authors mean.\\n\\n\\\"This is true even if the bottom layers (from input to the wide layer) and chosen randomly with probability one.\\\"\\nThere's a certain meaning to \\\"with probability one\\\" when it comes to measure theory. The authors are using it correctly in the rest of the paper, but in this sentence I think they simply mean that something holds if \\\"all\\\" the bottom layers have random features.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Author's replies\", \"comment\": \"We do not agree with the assessment of novelty and contribution of reviewer 3. Up to our knowledge only the paper of Cohen and Shashua (ICML 2016) analyzes general CNN architectures. As CNN architectures are obviously very important in practice, we think that a better theoretical understanding is urgently needed. Our paper contains two main results. First we show that CNNs used in practice produce linearly independent features (for ImageNet with VGG or Inception architecture) with probability 1 (Theorem 3.5) at the wide layer (first layer in VGG and Inception). We think that this is a very helpful result to understand how and why current CNNs work also with respect to the recent debate around generalization properties of state of the art networks (Zhang et al, 2017). Second, we give necessary and sufficient conditions for global optima under squared loss (Theorem 4.4) and show that all critical points in S_k are globally optimal under the conditions of Theorem 4.5. We think that this is a significant contribution to the theoretical understanding of CNN architectures. In particular, we would like to emphasize that all our results are applicable to the real problem of interest without any simplifying assumptions.\\n\\nWe agree in general with the reviewer that it might be nice to have even stronger results e.g. convergence of gradient descent/SGD to the global optimum. But given that the current state of the art in this regard is limited to one hidden layer together with additional distributional assumptions and does not cover deep CNNs used in practice (multiple filters, overlapping patches, deep architecture), we think that the reviewer demands too much. Even papers which consider just deep linear models have been appreciated in the community and get very good reviews at ICLR 2018.\", \"specific_answers\": \"\\\"Intuitively, (1) is an easy result. Under the assumptions of Theorem 3.5, it is clear that any tiny random perturbation on the weights will make the output linearly independent.\\\"\\n\\nThere are a lot of mathematical results which are intuitive but that does not mean that they are easy to prove.\\n\\n\\\"The result will be more interesting if the authors can show that the smallest eigenvalue of the output matrix is relatively large, or at least not exponentially small.\\\"\\n\\nWe agree that this result would be interesting, but one has to start somewhere (see general comment above).\\n\\n\\\"Result (3) has severe limitations, because: (a) there can be infinitely many critical point not in S_k that are spurious local minima; (b) Even though these spurious local minima have zero Lebesgue measure, the union of their basins of attraction can have substantial Lebesgue measure; (c) inside S_k, Theorem 4.4 doesn't exclude the solutions with exponentially small gradients, but whose loss function values are bounded away above zero. If an optimization algorithm falls onto these solutions, it will be hard to escape.\\\"\\n\\n(a) Yes, but then these critical points not in S_k (the complement of S_k has measure zero) must have either low rank weight matrices in the layers above the wide layer or the features are not linearly independent at the wide layer. We don't see any reason in the properties of the loss which would enforce low rank in the weight matrices of a CNN. Moreover, it seems unlikely that a critical point with a low rank matrix is a suboptimal local minimum as this would imply that all possible full rank perturbations have larger/equal objective (we don't care if the complement of S_k potentially contains additional global minima). Even for simpler models like two layer linear networks, it has been shown by (Baldi and Hornik, 1989) that all the critical points with low rank weight matrices have to be saddle points and thus cannot be suboptimal local minima. See also other parallel submissions at ICLR 2018 for similar results and indications for deep linear models (e.g. Theorem 2.1, 2.2 in https://openreview.net/pdf?id=BJk7Gf-CZ, and Theorem 5 in https://openreview.net/pdf?id=ByxLBMZCb).\\nMoreover, a similar argument applies to the case where one has critical point such that the features are not linearly independent at the wide layer. As any neighborhood of such a critical point contains points which have linearly independent features at the wide layer (and thus it is easy to achieve zero loss), it is again unlikely that this critical point is a suboptimal local minimum.\\nIn summary, if there are any critical points in the complement of S_k, then it is very unlikely that these are suboptimal local minima but they are rather also global minima, saddle points or local maxima.\\n\\n(b/c) We agree that these are certainly interesting questions but the same comment applies as above. Moreover, we see no reason why critical points with low rank weight matrices should be attractors.\"}",
"{\"title\": \"Author's replies\", \"comment\": \"\\\"I like the presentation and writing of this paper. However, I find it uneasy to fully evaluate the merit of this paper, mainly because the \\\"wide\\\"-layer assumption seems somewhat artificial and makes the corresponding results somewhat expected.\\\"\\n\\nPlease note Table 1, where we have listed several state-of-the-art CNN networks, which have such a wide layer (more hidden units than the number of training points) in the case of ImageNet. These are VGG, Inception V3 and Inception V4. Thus we don't see why this wide layer assumption is \\\"artificial\\\" if CNNs which had large practical success fulfill this condition.\\n\\n\\\"The mathematical intuition is that the severe overfitting induced by the wide layer essentially lifts the loss surface to be extremely flat so training to zero/small error becomes easy. This is not surprising.\\\"\\n\\nWe think that our finding that practical CNNs such as VGG/Inception produce linearly independent features at the wide layer for ImageNet for almost any weight configuration up to the wide layer is an interesting finding which fosters the understanding of these CNNs. While the fact that whether the result is surprising or not is rather a matter of personal taste, what we find more relevant and important is if this result can help to advance the theoretical understanding of practical networks using rigorous math, which it does.\\n\\n\\\"It would be interesting to make the results more quantitive, e.g., to quantify the tradeoff between having local minimums and having nonzero training error.\\\"\\n\\nSuch results are currently only available for coarse approximations of neural networks where it is not clear how and if they apply to neural networks used in practice. Meanwhile, our results hold exactly for the architectures used in practice.\"}",
"{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper analyzes the expressiveness and loss surface of deep CNN. I think the paper is clearly written, and has some interesting insights.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Author's replies\", \"comment\": \"Thanks a lot for your reviews. We are happy to answer any additional questions you might have regarding our work.\"}",
"{\"title\": \"The loss surface and expressivity of deep convolutional neural networks\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper analyzes the loss function and properties of CNNs with one \\\"wide\\\" layer, i.e., a layer with number of neurons greater than the train sample size. Under this and some additional technique conditions, the paper shows that this layer can extract linearly independent features and all critical points are local minimums. I like the presentation and writing of this paper. However, I find it uneasy to fully evaluate the merit of this paper, mainly because the \\\"wide\\\"-layer assumption seems somewhat artificial and makes the corresponding results somewhat expected. The mathematical intuition is that the severe overfitting induced by the wide layer essentially lifts the loss surface to be extremely flat so training to zero/small error becomes easy. This is not surprising. It would be interesting to make the results more quantitive, e.g., to quantify the tradeoff between having local minimums and having nonzero training error.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review of \\\"The loss surface and expressivity of deep convolutional neural networks\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper presents several theoretical results on the loss functions of CNNs and fully-connected neural networks. I summarize the results as follows:\\n\\n(1) Under certain assumptions, if the network contains a \\\"wide\\\" hidden layer, such that the layer width is larger than the number of training examples, then (with random weights) this layer almost surely extracts linearly independent features for the training examples.\\n\\n(2) If the wide layer is at the top of all hidden layers, then the neural network can perfectly fit the training data.\\n\\n(3) Under similar assumptions and within a restricted parameter set S_k, all critical points are the global minimum. These solutions achieve zero squared-loss.\\n\\nI would consider result (1) as the main result of this paper, because (2) is a direct consequence of (1). Intuitively, (1) is an easy result. Under the assumptions of Theorem 3.5, it is clear that any tiny random perturbation on the weights will make the output linearly independent. The result will be more interesting if the authors can show that the smallest eigenvalue of the output matrix is relatively large, or at least not exponentially small.\\n\\nResult (3) has severe limitations, because: (a) there can be infinitely many critical point not in S_k that are spurious local minima; (b) Even though these spurious local minima have zero Lebesgue measure, the union of their basins of attraction can have substantial Lebesgue measure; (c) inside S_k, Theorem 4.4 doesn't exclude the solutions with exponentially small gradients, but whose loss function values are bounded away above zero. If an optimization algorithm falls onto these solutions, it will be hard to escape.\\n\\nOverall, the paper presents several incremental improvement over existing theories. However, the novelty and the technical contribution are not sufficient for securing an acceptance.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Dear authors,\\n\\nWhile I appreciate the result that a convolutional layer can have full rank output, thus allowing a dataset to be classified perfectly under mild conditions, the fact that all reviewers expressed concern about the statement is an indication that the presentation still needs quite a bit of work.\\n\\nThus, I recommend it as an ICLR workshop paper.\", \"decision\": \"Invite to Workshop Track\"}",
"{\"title\": \"Author's replies\", \"comment\": \"We thank reviewer 4 for the detailed comments.\\n\\n\\\"They present a collection of assumptions, lemmas, and theorems. They have no choice but to have assumptions, because they want to abstract away the \\\"data\\\" part of the analysis while still being able to use certain properties about the rank of the features at certain layers.\\\"\\n\\nYes, the reviewer is right, we did not want to make assumptions on the distribution of the training data\\nas these assumptions are very difficult to check. Instead our assumptions can all be easily checked for a given training set and CNN architecture.\\n\\n\\\"Most of my doubts about this paper come from the feeling that equivalent results could be obtained with a more elegant argument about perturbation theory, instead of something like the proof of Lemma A1. That being said, it's easy to voice such concerns, and I'm willing to believe that there might not exist a simple way to derive the same results with an approach more along the line of \\\"whatever your data, pick whatever small epsilon, and you can always have the desired properties by perturbing your data by that small epsilon in a random direction\\\". Have the authors tried this ?\\\"\\n\\nWe don't know but we can prove Lemma A1 for any given dataset (fulfilling the stated assumptions). However, we use a perturbation argument to show that our assumptions on the training data are always fulfilled for an arbitrarily small perturbation of the data (similar to what the reviewer suggests).\\n\\n\\\"I'm not sure if the authors were the first to present this approach of analyzing the effects of convolutions from a \\\"patch perspective\\\", but I think this is a clever approach. It simplifies the statement of some of their results. I also like the idea of factoring the argument along the concept of some critical \\\"wide layer\\\".\\n\\nGood review of the literature.\\\"\\n\\nUp to the best of our knowledge we have not seen that this patch argument has been used before. It is a very convenient tool to analyze even much more general CNN architectures than the ones currently used.\\n\\n\\\"I wished the paper was easier to read. Some of the concepts could have been illustrated to give the reader some way to visualize the intuitive notions. For example, maybe it would have been interesting to plot the rank of features a every layer for LeNet+MNIST ?\\\"\\n\\nWe would be very grateful for pointers where we could improve the readability of the paper. We have added a plot for the architecture of Figure 1, where we vary the number of filters T_1 and plot the rank of the feature at the first convolutional layer. As shown by Theorem 3.5 we get full rank for T_1>=89 which implies n_1>=N for the first convolutional layer. In this case the rank of F_1 is 60000 and training error is zero and the loss is minimized almost up to single precision. We think that this illustrates nicely the result of Theorem 3.5\\n\\n\\\" \\\"This paper is one of the first ones, which studies CNNs.\\\"\\nThis sentence is strange to read, but I can understand what the authors mean.\\\"\", \"we_agree\": \"please check the new uploaded version, where we have changed it to:\\nThis paper is one of the first ones, which theoretically analyzes deep CNNs\\n\\n\\\"\\\"This is true even if the bottom layers (from input to the wide layer) and chosen randomly with probability one.\\\"\\nThere's a certain meaning to \\\"with probability one\\\" when it comes to measure theory. The authors are using it correctly in the rest of the paper, but in this sentence I think they simply mean that something holds if \\\"all\\\" the bottom layers have random features.\\\"\\n\\nWe agree that this can be misunderstood. What we prove is that it holds for almost any weight configuration for the layers from input to the wide layer with respect to the Lebesgue measure (up to a set of measure zero). As in practice the weights are often initialized using e.g. a Gaussian distribution, we wanted to highlight that our result holds with probability 1. In order to clarify this we have added a footnote \\\"are choosen randomly (\\\"with respect to any probability measure which has a density with respect to the Lebesgue measure\\\"). Thus it holds for any probability measure on the weight space which has a density function. We have changed the uploaded manuscript in that way.\"}"
]
} |
Syx6bz-Ab | Seq2SQL: Generating Structured Queries From Natural Language Using Reinforcement Learning | [
"Victor Zhong",
"Caiming Xiong",
"Richard Socher"
] | Relational databases store a significant amount of the world's data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in the loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policy based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%. | [
"deep learning",
"reinforcement learning",
"dataset",
"natural language processing",
"natural language interface",
"sql"
] | Reject | https://openreview.net/pdf?id=Syx6bz-Ab | https://openreview.net/forum?id=Syx6bz-Ab | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S184E1gyz",
"ryerSypBM",
"H1PC1EX1M",
"r1E6tYEmz",
"SkGbiIKxz",
"ByL2SX9ez",
"S1cptoZXG",
"HJ75M8ogM",
"HJg8aSa0-",
"ByyiYYEXz",
"ByeStYNQM",
"By6SuETCZ",
"SJmanaxJz",
"r1qqBPk1z",
"Hk2vAMlJM"
],
"note_type": [
"comment",
"decision",
"comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment"
],
"note_created": [
1510106158441,
1517249848335,
1510322127467,
1514604987555,
1511774970446,
1511826861716,
1514416578239,
1511903882680,
1509936455917,
1514604950658,
1514604855859,
1509931076699,
1510165690697,
1510073745926,
1510121059777
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"~Florin_Brad1"
],
[
"ICLR.cc/2018/Conference/Paper782/Authors"
],
[
"ICLR.cc/2018/Conference/Paper782/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper782/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper782/Area_Chair"
],
[
"ICLR.cc/2018/Conference/Paper782/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper782/Authors"
],
[
"ICLR.cc/2018/Conference/Paper782/Authors"
],
[
"ICLR.cc/2018/Conference/Paper782/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper782/Authors"
],
[
"ICLR.cc/2018/Conference/Paper782/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Novelty?\", \"comment\": \"The authors cited previous semantic parsing papers using seq2seq models, but ignored all previous reinforcement learning-based Seq2SQL papers. This has already been reminded of by previous conference reviewers, but is completely neglected again in the revision. It is hard to feel the authors' will for making that kind of revision, which (although will significantly diminish the novelty claimed by this paper) however is unavoidable if the authors want to make this paper scientifically sound.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper introduces a new dataset and method for a \\\"semantic parsing\\\" problem of generating logical sql queries from text. Reviews generally seemed to be very impressed by the dataset portion of the work saying \\\"the creation of a large scale semantic parsing dataset is fantastic,\\\" but were less compelled by the modeling aspects that were introduced and by the empirical justification for the work. In particular:\\n\\n- Several reviewers pointed out that the use of RL in particularly this style felt like it was \\\"unjustified\\\", and that the authors should have used simpler baselines as a way of assessing the performance of the system, e.g. \\\"There are far simpler solutions that would achieve the same result, such as optimizing the marginal likelihood or even simply including all orderings as training examples\\\"\\n\\n- The reviewers were not completely convinced that the authors backed up their claims about the role of this dataset as a novel contribution. In particular there were questions about its structure, e.g. \\\"dataset only covers simple queries in form of aggregate-where-select structure\\\" and about comparisons with other smaller but similar datasets, e.g. \\\"how well does the proposed model work when evaluated on an existing dataset containing full SQL queries, such as ATIS\\\"\\n\\nThere was an additional anonymous discussion about the work not citing previous semantic parsing datasets. The authors noted that this discussion inappropriately brought in previous private reviews. However it seems like the main reviewers' issues were orthogonal to this point, and so it was not a major aspect of this decision.\"}"
"{\"title\": \"Related work\", \"comment\": \"Neat work!\\nWe have also released a paper detailing a corpus for language to SQL generation, it might be of interest to you https://arxiv.org/abs/1707.03172\"}",
"{\"title\": \"RE: This is a decent work but contains certain obvious drawbacks\", \"comment\": \"Thank you for your comments.\\n1. We recognize that the queries in WikiSQL are simple. It is not our intention to supplant existing models for SQL generation from natural languages. Our intention is to tackle the problem of generalizing across tables, which we believe is a key barrier to using such systems in practice. Existing tasks in semantic parsing and natural language interfaces focused on generating queries from natural language with respect to a single table. Our task requires performing this on tables not seen during training. We argue that while WikiSQL is not as complex as existing datasets in its query complexity, it is more complex in its generalization task.\\n\\n2. We could not find existing tasks that focus on generalization to unseen tables, but recognize that we may have missed existing work that the reviewer is aware of. We would be happy to apply our methods to such a task.\\n\\n3. We agree that the existing semantic parsing approach we compare against is more general. Our intention is to introduce baselines for the WikiSQL task that generalizes to unseen tables. The baselines are tailored to the particular task of generating SQL queries, but range from general and unstructured (e.g. augmented pointer) to templated and structured (e.g. WikiSQL). In addition, like Guu et al, Mou et al, Yin et al, we use reinforcement learning as a means to address equivalent queries.\"}",
"{\"title\": \"This is a decent work but contains certain obvious drawbacks\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents a new approach to support the conversion from natural language to database queries.\\n\\nOne of the major contributions of the work is the introduction of a new real-world benchmark dataset based on questions over Wikipedia. The scale of the data set is significantly larger than any existing ones. However, from the technical perspective, the reviewer feels this work has limited novelty and does not advance the research frontier by much. The detailed comments are listed below.\\n\\n1) Limitation of the dataset: While the authors claim this is a general approach to support seq2sql, their dataset only covers simple queries in form of aggregate-where-select structure. Therefore, their proposed approach is actually an advanced version of template filling, which considers the expression/predicate for one of the three operators at a time, e.g., (Giordani and Moschitti, 2012).\\n\\n2) Limitation of generalization: Since the design of the algorithms is purely based on their own WikiSQL dataset, the reviewer doubts if their approach could be generalized to handle more complicated SQL queries, e.g., (Li and Jagadish, 2014). The high complexity of real-world SQL stems from the challenges on the appropriate connections between tables with primary/foreign keys and recursive/nested queries. \\n\\n3) Comparisons to existing approaches: Since it is a template-based approach in nature, the author should shrink the problem scope in their abstract/introduction and compare against existing template approaches. While there are tons of semantic parsing works, which grow exponentially fast in last two years, these works are actually handling more general problems than this submission does. It thus makes sense when the performance of semantic parsing approaches on a constrained domain, such as WikiSQL, is not comparable to the proposal in this submission. However, that only proves their method is fully optimized for their own template.\\n\\nAs a conclusion, the reviewer believes the problem scope they solve is much smaller than their claim, which makes the submission slightly below the bar of ICLR. The authors must carefully consider how their proposed approach could be generalized to handle wider workload beyond their own WikiSQL dataset. \\n\\nPS, After reading the comments on OpenReview, the reviewer feels recent studies, e.g., (Guu et al., ACL 2017), (Mou et al, ICML 2017) and (Yin et al., IJCAI 2016), deserve more discussions in the submission because they are strongly relevant and published on peer-reviewed conferences.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good dataset but problematic claims.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This work introduces a new semantic parsing dataset, which focuses on generating SQL from natural language. It also proposes a reinforcement-learning based model for this task.\\n\\nFirst of all, I'd like to emphasize that the creation of a large scale semantic parsing dataset is fantastic, and it is a much appreciated contribution. However, I find its presentation problematic. It claims to supplant existing semantic parsing and language-to-SQL datasets, painting WikiSQL as a more challenging dataset overall. Given the massive simplifications to what is considered SQL in this dataset (no joins, no subqueries, minimal lexical grounding problem), I am reluctant to accept this claim without empirical evidence. For example, how well does the proposed model work when evaluated on an existing dataset containing full SQL queries, such as ATIS? That being said, I am sympathetic to making simplifications to a dataset for the sake of scalability, but it shouldn't be presented as representative of SQL.\\n\\nOn the modeling side, the role of reinforcement learning seems oddly central in the paper, even though the added complexity is not well motivated. RL is typically needed when there are latent decisions that can affect the outcome in ways that are not known a priori. In this case, we know the reward is invariant to the ordering of the tokens in the WHERE clause. There are far simpler solutions that would achieve the same result, such as optimizing the marginal likelihood or even simply including all orderings as training examples. These should be included as baselines.\\n\\nWhile the data contribution is great, the claims of the paper need to be revised.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Anonymous review\", \"comment\": \"Yes, this is inappropriate to bring out. I will ask reviewers to ignore the fact of private NIPS comments in their reviews.\\n\\nHowever, I do think the resulting discussion on past work is relevant and should be considered. (And also note that some conferences (NIPS->AIStats) do share past negative reviews.)\"}",
"{\"title\": \"Interesting paper, but with limited experiments\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The authors have addressed the problem of translating natural language queries to SQL queries. They proposed a deep-neural-network-based solution which combines an attention-based neural semantic parser and pointer networks. They also released a new dataset, WikiSQL, for the problem. The proposed method outperforms the existing semantic parsing baselines on the WikiSQL dataset.\", \"pros\": \"1. The idea of using pointer networks for reducing the search space of generated queries is interesting. Also, using extrinsic evaluation of generated queries handles the possibility of paraphrasing SQL queries.\\n2. A new dataset for the problem.\\n3. The experiments report a significant boost in performance compared to the baseline. The ablation study is helpful for understanding the contribution of the different components of the proposed method.\", \"cons\": \"1. It would have been better to see the performance of the proposed method on other datasets (wherever possible). This is my main concern about the paper.\\n2. Extrinsic evaluation can slow down the overall training. A comparison of running times would have been helpful.\\n3. More details about the training procedure (specifically for the RL part) would have been better.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"RE: Novelty\", \"comment\": \"Hi Anonymous,\\n\\nCan you please clarify your comment \\\"essentially have done the Seq2SQL thing\\\"? We reference several semantic parsing papers that convert natural language questions to logical forms. Moreover, we reference works that apply semantic parsing or other neural models to tables. We can and will certainly add the recent work you listed to our paper, but perhaps the phrase \\\"neglect all previous studies\\\" is a bit harsh?\\n\\nI am not certain as to whether it is appropriate to mention anonymous NIPS reviews, but the main concern from our NIPS reviews (e.g. the only negative review) was that we do not compare to semantic parsing results. We have since rewritten the paper to clarify this point. Namely, our baseline is a state-of-the-art neural semantic parser by Dong et al., who demonstrated its effectiveness on four semantic parsing datasets. The particular review you mention was actually the most positive, with its conclusion being \\\"the experiment part is solid and convincing\\\" and \\\"I believe release of the datasets will benefit research in this direction.\\\"\", \"regarding_your_concern_about_novelty\": \"Prior work, including the works you cite (and which we will certainly add to our references), mainly focuses on semantic parsing over knowledge graphs or synthetic datasets. For example, the first work you reference by Liang et al. works on WebQuestionsSP, which has under 5k examples. The second work you reference by Mou et al. uses a dataset by Yin et al. that contains 25k synthetic examples from a single schema (the Olympic games table). Finally, your third reference by Guu et al. uses SCONE, which is a synthetic semantic parsing dataset over 14k examples and 3 domains. \\n\\nIn contrast, WikiSQL (which this paper introduces) spans 80k examples and 24k schemas over real-world web tables - orders of magnitude larger than previous efforts. 
The number of schemas, in particular, poses a difficult generalization challenge. Moreover, WikiSQL contains natural language utterances annotated and verified by humans instead of generated templates. One of the novelties of our approach (Seq2SQL) is that while we operate on SQL tables, we do not observe the content of the table. That is, the rewards our model observes come from database execution (as opposed to self-execution). This also makes policy learning more challenging, because an important part of the environment (e.g. table content) is not observed. This is distinct from prior work, including the works you reference, which learn using table content. Our approach forces the model to learn purely from the question and the table schema. This enables our model to act as a thin and scalable natural language interface instead of a database engine because it does not need to see the database content.\\n\\nFinally, as an impartial means to gauge the impact of our work, despite not having been published, WikiSQL is already seeing adoption and Seq2SQL is already being used as a reference baseline by the community (including submissions by other groups to this conference).\"}",
"{\"title\": \"RE: Good dataset but problematic claims.\", \"comment\": \"Thank you for your comments.\\n\\n1. It is not at all our intention to claim that WikiSQL supplants existing datasets. Our intended emphasis is that WikiSQL requires that models generalize to tables not seen during training. We are not aware of a semantic parsing dataset that 1. Provides logical forms 2. Requires generalization to unseen tables/schemas 3. Is based on realistic SQL tables in relational databases. We do recognize that WikiSQL, in its current state, contains only simple SELECT-AGGREGATE-WHERE queries. More complex queries contain, as you said, joins and subqueries. We will take this into account and elaborate on the generation of WikiSQL (which we placed into the appendix due to length considerations). In particular, we will explicitly emphasize the fact that WikiSQL does not contain subqueries nor joins.\\n\\n2. We agree that reinforcement learning seems like a general and complex solution to a specific problem that can be solved in other ways. In fact, another submission to ICLR leverages this insight to incorporate structures into the model to do, say, set prediction of WHERE conditions (https://openreview.net/forum?id=SkYibHlRb&noteId=S12EyE1bz). We chose to use the RL approach as the baseline for WikiSQL because it is easy to generalize this approach to other forms of equivalent queries should we expand WikiSQL in the future. We also found that it is simple to implement in practice. \\nWe agree though that given the current state of WikiSQL, there are simpler approaches to tackle the WHERE clause ordering problem. We incorporated your suggestion of augmenting the training set with all permutations of the WHERE clause ordering. By doing this, we obtained 58.97% execution accuracy and 45.32% logical form accuracy on the test set with the Seq2SQL model without RL. 
The higher execution accuracy and lower logical form accuracy suggest that annotators were biased and tended to agree with the WHERE clause ordering presented to them in the paraphrasing task. Because we permute the ordering of the WHERE clause in training, the model does not see this bias during training and obtains worse logical form accuracy. With RL and the augmented training set, we obtained 59.6% execution accuracy and 45.7% logical form accuracy.\"}",
"{\"title\": \"RE: Interesting paper, but with limited experiments\", \"comment\": \"Thank you for your comments.\\n\\n1. We computed the run time of the model with RL and without RL. There is a subtlety regarding the runtime computation in that we run the evaluation during each batch, which inherently does database lookup (e.g. to calculate the execution accuracy). The result of evaluation is used as the reward in the case of reinforcement learning. Because of this, using RL does not really add to the compute cost, apart from propagating the actual policy gradients, because reward computation is always done as a part of evaluation. Taking this into account, the per-batch runtime over an epoch for the no-RL model took 0.2316 seconds on average with a standard deviation of 0.1037 seconds, whereas the RL model took 0.2627 seconds on average with a standard deviation of 0.1414 seconds.\\n\\n2. Regarding your main concern (that we compare to other datasets), we are not aware of other datasets for natural language to SQL generation that require generalization to new table schemas. For example, WikiSQL contains ~20k table schemas while other SQL generation tasks focus on a single table. As a result, we decided to compare our model against an existing state-of-the-art semantic parser on our task instead. We would be happy to study the effect of our proposed method on other datasets.\\n\\n3. Our RL model is initialized with the parameters from the best non-RL model. This RL model is trained with Adam with a learning rate of 1e-4 and a batch size of 100. We use an embedding size of 400 (300 for word embeddings, 100 for character ngrams) and a hidden state size of 200. Each BiLSTM has 2 layers. Please let us know if there are any particular points you would like us to elaborate on.\"}",
"{\"title\": \"Novelty?\", \"comment\": \"It's amazing how the authors neglect all previous studies that essentially have done the Seq2SQL thing at least 3 times (arXiv:1612.01197; arXiv:1612.02741; arXiv:1704.07926). It's also amazing how the authors neglect the NIPS reviews which have already pointed this out.\"}",
"{\"title\": \"NIPS\", \"comment\": \"Regarding the different conclusions drawn by this NIPS review and the other anonymous reviewer, perhaps the authors should consider the possibility that the other anonymous reviewer did not write the NIPS review in question? In any event, I find it disturbing, albeit slightly amusing, that one would bring out recent and anonymous (and private, nonetheless) NIPS reviews in public like this. The area chair should make note of this and consider whether it is appropriate.\"}",
"{\"title\": \"Errata\", \"comment\": [\"There is a typo we will fix in the analysis of the WHERE clause in section 4.2. The example question should be \\\"which males\\\" instead of \\\"which men\\\". It is impossible for the model to generate the word \\\"men\\\" because it is not in the question nor the schema.\"]}",
"{\"title\": \"RE: Novelty\", \"comment\": \"We thank the anonymous reviewer for the feedback and respectfully disagree regarding the novelty of our work. We refer readers back to our earlier comment regarding how our contribution is distinct from prior art. Once again, we regret not citing the anonymous reviewer's prior work (which we believe, while important, is distinct from ours). To the anonymous reviewer, I emphasize that we are not maliciously ignoring your work. We focused our efforts on addressing the main concern of the only negative review, which was that it was unclear how our model compares to existing semantic parsing models. We have since addressed this in the fashion described by my previous comment.\"}"
]
} |
rkONG0xAW | Recursive Binary Neural Network Learning Model with 2-bit/weight Storage Requirement | [
"Tianchan Guan",
"Xiaoyang Zeng",
"Mingoo Seok"
] | This paper presents a storage-efficient learning model titled Recursive Binary Neural Networks for embedded and mobile devices having a limited amount of on-chip data storage such as hundreds of kilo-Bytes. The main idea of the proposed model is to recursively recycle data storage of weights (parameters) during training. This enables a device with a given storage constraint to train and instantiate a neural network classifier with a larger number of weights on a chip, achieving better classification accuracy. Such efficient use of on-chip storage reduces off-chip storage accesses, improving energy-efficiency and speed of training. We verified the proposed training model with deep and convolutional neural network classifiers on the MNIST and voice activity detection benchmarks. For the deep neural network, our model achieves a data storage requirement of as low as 2 bits/weight, whereas the conventional binary neural network learning models require data storage of 8 to 32 bits/weight. With the same amount of data storage, our model can train a bigger network having more weights, achieving 1% less test error than the conventional binary neural network learning model. To achieve a similar classification error, the conventional binary neural network model requires 4× more data storage for weights than our proposed model. For the convolutional neural network classifier, the proposed model achieves 2.4% less test error for the same on-chip storage or 6× storage savings to achieve similar accuracy.
| [
"model",
"data storage",
"weights",
"training",
"less test error",
"storage requirement",
"learning model"
] | Reject | https://openreview.net/pdf?id=rkONG0xAW | https://openreview.net/forum?id=rkONG0xAW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"H1suSbq7z",
"ry9evkaSz",
"H11OyNqgM",
"HyA3vW57z",
"BkYwge9ef",
"HyJRxZ9mf",
"SkMJBHOez"
],
"note_type": [
"official_comment",
"decision",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1514964339298,
1517250289900,
1511829351317,
1514964917734,
1511813218180,
1514963143296,
1511703770211
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper443/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper443/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper443/Authors"
],
[
"ICLR.cc/2018/Conference/Paper443/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper443/Authors"
],
[
"ICLR.cc/2018/Conference/Paper443/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Hardware implementation and application on CNNs and other benchmarks are added\", \"comment\": \"Thank you very much for your insightful comments. We really appreciate your comments, which help us to improve the draft. We fixed typos, revised the paper so as to reduce confusion, and also added relevant references including those suggested by the reviewer. Below are our answers to your other questions.\\n\\nAbout the topology of the generated neural network, firstly, we corrected our presentation of \\\"fully-connected\\\" based on the fact that our RBNN trained the fully-connected structure for the 1-hidden-layer case and the tiled structure for the 2-hidden-layer case. However, we'd like to point out that in all the experiments in the paper, the results of conventional BNNs that are compared to those of the proposed model are all fully-connected. We tested the RBNN to train the fully-connected structure for the 2-hidden-layer case, but we do not see much difference in terms of the accuracy and storage-requirement trade-off. Still, we added these results to Fig. 8. \\n\\nFor hardware implementation, we added Appendix A to illustrate the hardware implementation of memory reallocation. It describes multi-weight operations where each weight takes one to k bits during the training process based on the RBNN model. The main idea is to fetch multiple weights packed in one 8-bit word from data storage (SRAM) and to use a mask to separate already-trained weights (bits) and plastic weights (bits). After finishing training, we use an XOR operation to pack the once-plastic bits and the fixed bits into a word and store it in the data storage. This mapping requires bit-wise AND and XOR operations, which are supported in CPUs, GPUs, custom circuits and also FPGAs, at the very beginning and the end of each training epoch. Therefore, the extra energy consumption is minimal. It also allows us to use the existing SRAM macros without modification. We added this discussion to Sec 5. 
3 in the revised paper. \\n\\nThe CNN with the proposed RBNN model is tested and the results are shown in the revised paper (Appendix B and C). We added an experimental result on the application of our RBNN on the LeNet CNN performing the MNIST benchmark. We also added an experimental result on the application of our RBNN on the MLP-like DNN performing the voice activity detection benchmark (AURORA 4). These new results confirm that our RBNN can improve the trade-off between weight storage requirement and accuracy by a similar amount as the original results from the MLP and the MNIST test case.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This is an interesting paper and addresses an important problem of neural networks with memory constraints. New experiments have been added that add to the paper, but the full impact of the paper is not yet realised, needing further exploration of models of current practice, a wider set of experiments and analysis, and additional clarifying discussion.\"}",
"{\"title\": \"Nice trick on reusing non-sign bits to recursively add more weights during training, but high computation cost and ideally need more experiments\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Summary: The paper addresses the issue of training feed-forward neural networks with memory constraints. The idea is to start by training a very small network, binarise this network, then reuse the non-signed bits of the binarised weights to add/store new weights, and recursively repeat these steps. The cost of reducing the memory storage is the extra computation. An experiment on MNIST shows the efficacy of the proposed recursive scheme.\", \"quality_and_significance\": \"The proposed method is a combination of the binarised neural network (BNN) architecture of Courbariaux et al. (2015; 2016) with a network growing scheme to reduce the number of bits per weight. However, the computation complexity is significantly larger. The pitch of the paper is to reduce the \\\"high overhead of data access\\\" when training NNs on small devices and indeed this seems to be the case as shown in the experiment. However, if the computation is that large compared to the standard BNNs, I wonder if training is viable on small devices after all. Perhaps all aspects (training cost [computation + time], accuracy and storage) should be plotted together to see what methods form the frontier. This is probably out of scope for ICLR but to really test these methods, they should be trained/stored on a real small device and trained/fine-tuned using user data to see what would work best.\\n\\nThe experiment is also limited to MNIST and fully connected neural networks.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Discussion of computational cost and hardware implementation, and more experiments are added\", \"comment\": \"Thank you for your insightful reviews helping us improve our paper. More analysis and discussion about the computational cost and hardware implementation are added in the revised paper. We answer your questions as follows.\\n\\nThe extra computational cost of the proposed RBNN is very small compared to the energy saving it brings. First of all, from the results on arithmetic complexity in Table 2, the extra computations brought by the proposed RBNN are shifts and adds, while the number of multiplications is the same for the RBNN and the conventional BNN. Since multiplication has much more overhead than addition and shifting, the overall increase in computation is not significant. Secondly, it has been shown that for fully connected NN systems, data access accounts for the majority of the energy overhead. The proposed RBNN model reduces the data storage requirement so the system only needs to fetch data from on-chip SRAM during training. According to the quantitative analysis added in Table 2 and Section 5.3, this saves around 100x energy compared to a conventional BNN, which has to fetch weights from off-chip DRAM. \\n\\nThe single-bit manipulation can be implemented with very simple hardware logic. We added Appendix A to the revised paper to illustrate the implementation of bit-wise operations on weights. The main idea is to fetch complete weights from weight storage and use a mask to separate fixed bits and plastic bits. After the plastic bits are updated, they are concatenated to the fixed bits through an XOR operation and written back to data storage. This implementation only requires simple AND and XOR operations at the very beginning and end of each training epoch, so the extra energy consumption is very small. \\n\\nThe results of applying the proposed RBNN model to CNNs on the MNIST benchmark and to MLP-like DNNs on the AURORA 4 benchmark are added in Appendices B and C, respectively. 
We really appreciate your suggestions to further validate the proposed RBNN model.\"}",
"{\"title\": \"Not ready yet; needs more work\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"There could be an interesting idea here, but the limitations and applicability of the proposed approach are not clear yet. More analysis should be done to clarify its potential. Besides, the paper seriously needs to be reworked. The text in general, but also the notation, should be improved.\\n\\nIn my opinion, the authors should explain how to apply their algorithm to more general network architectures, and test it, in particular on convnets. An experiment on a modern dataset beyond MNIST would also be a welcome addition.\", \"some_comments\": [\"The method is presented as a fully-connected network training procedure. But the resulting network is not really fully-connected, but modular. This is clear in Fig. 1 and in the explanation in Sect. 3.1. The newly added hidden neurons at every iteration do not project to the previous pool of hidden neurons. It should be stressed that the networks end up with this non-conventional \\u201ctiled\\u201d architecture. Are there studies where the capacity of such networks is investigated, when all the weights are trained concurrently?\", \"It wasn\\u2019t clear to me whether the memory reallocation could be easily implemented in hardware. A few references or remarks on this issue would be welcome.\", \"The work \\u201cEfficient supervised learning in networks with binary synapses\\u201d by Baldassi et al. (PNAS 2007) should be cited. Although usually ignored by the deep learning community, it actually was a pioneering study on the use of low-resolution weights during inference while allowing for auxiliary variables during learning.\", \"Coming back to my main point above, I didn\\u2019t really get the discussion in Sect. 5.3. Why didn\\u2019t the authors test their algorithm on a convnet? Are there any obstacles in doing so? 
It seems quite important to understand this point, as the paper appeals to technical applications and convolution seems hard to sidestep currently.\", \"Fig. 3: xx-axis: define storage efficiency and storage requirement.\", \"Fig. 4: What\\u2019s an RSBL? Acronyms should be defined.\", \"Overall, language and notation should really be refined. I had a hard time reading Algorithm 1, as the notation is not even defined anywhere. And this problem extends throughout the paper.\", \"For example, just looking at Sect. 4.1, \\u201ctraining and testing data x is normalized\\u2026\\u201d, if x is not properly defined, it\\u2019s best to omit it; \\u201c\\u2026 2-dimentonal\\u2026\\u201d, at least major typos should be scanned and corrected.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The extra computational complexity is easily offset by the notably smaller amount of off-chip data storage access, and the proposed model can achieve ~100X energy savings for training. New experiments on CNNs are also added.\", \"comment\": \"Thank you for your insightful comments. Your concern about the extra computation overhead of our proposed model is valid. However, we'd like to point out that it is not significant compared to our benefit in data access. In terms of energy, compared to a conventional BNN, the proposed model needs a notably smaller amount of off-chip data storage access, which easily offsets the extra computation cost by a large margin. To elaborate on this issue, we added a quantitative analysis in Sec. 5.3, Table 2 and Table 3 in the revised paper.\\n\\nFirst, in a BNN, the main bottleneck is data access overhead rather than computation. This is because the use of binary information of weights reduces computational complexity. The proposed model reduces the data storage size so that it can store all the weights in the on-chip SRAM. This reduces energy consumption significantly because accessing data from off-chip DRAM and FLASH consumes at least 2 orders of magnitude more energy than SRAM. Conventional BNN systems have to store and fetch data from off-chip DRAM and FLASH. Our quantitative energy analysis, added in Sec. 5.3, shows the proposed RBNN can save at least 100X training energy compared to a conventional BNN.\\n\\nSecond, the proposed model only increases the number of add and shift operations by roughly two times for neural networks having the same number of hidden units (Table 2), whereas it does not increase the number of multi-bit multiplications as compared to conventional BNNs. Note that this multi-bit multiplication is used to calculate gradients. In both the RBNN and the BNN, the multiplications between inputs/activations and weights are replaced with sign change operations. 
Multiplication is much more costly than add and shift operations. Thus, it is important not to increase the number of multiplications. \\n\\nThe evaluation of the proposed model on a CNN classifying the MNIST benchmark and a DNN classifying the AURORA 4 VAD benchmark has been added in Appendices B and C of the revised paper, respectively.\"}",
"{\"title\": \"This work suggests how to train a NN in an incremental way so that for the same performance less memory is needed, or for the same memory higher performance can be achieved.\", \"rating\": \"7: Good paper, accept\", \"review\": \"The idea of this work is fairly simple. Two main problems exist in end devices for deep learning: power and memory. There have been a series of works showing how to discretize neural networks. This work discretizes a NN incrementally. It does so in the following way: First, we train the network with the memory we have. Once we train and achieve a network with the best performance under this constraint, we take the sign of each weight (and leave the signs intact), and use the remaining n-1 bits of each weight in order to add some new connections to the network. Now, we do not change the sign weights, only the new n-1 bits. We continue with this process (recursively) until we don't get any improvement in performance.\\n\\nBased on experiments done by the authors on MNIST, this procedure gives the same performance with 3-4 times less memory, or a 1% increase in performance for the same memory as a regular network. \\n\\nI like the idea, and I think it is indeed a good idea for IoT and end devices. The main problem with this method is that there is an undiscussed cost with current hardware architectures. I think there is a problem with optimizing the memory after each stage is trained. Also, current architectures do not support single-bit manipulations, but are much more efficient on large multi-bit registers. So, in theory this might be a good idea, but I think it is not an out-of-the-box method for implementation.\\n\\nAlso, as the authors say, more experiments are needed in order to understand the regime in which this method is efficient. 
To summarize, I like this idea, but more experiments are needed in order to understand this method's merits.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
B1CQGfZ0b | Learning to select examples for program synthesis | [
"Yewen Pu",
"Zachery Miranda",
"Armando Solar-Lezama",
"Leslie Pack Kaelbling"
] | Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, that maps the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, it is commonly formulated as a constraint satisfaction problem, where input-output examples are expressed as constraints, and solved with a constraint solver. A key challenge of this formulation is that of scalability: While constraint solvers work well with few well-chosen examples, constraining the entire set of examples constitutes a significant overhead in both time and memory. In this paper we address this challenge by constructing a representative subset of examples that is both small and able to constrain the solver sufficiently. We build the subset one example at a time, using a trained discriminator to predict the probability of unchosen input-output examples conditioned on the chosen input-output examples, adding the least probable example to the subset. Experiments on a diagram drawing domain show that our approach produces subsets of examples that are small and representative for the constraint solver. | [
"program synthesis",
"program induction",
"example selection"
] | Reject | https://openreview.net/pdf?id=B1CQGfZ0b | https://openreview.net/forum?id=B1CQGfZ0b | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BycOZvTXf",
"Bycm6ytgf",
"Syz7_waXf",
"By8NGl0xG",
"SJC_Polgz",
"SkzWAUTQG",
"B1RSUyTHG"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision"
],
"note_created": [
1515184498103,
1511746849878,
1515186202385,
1512075822185,
1511204725828,
1515183610419,
1517250118343
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper786/Authors"
],
[
"ICLR.cc/2018/Conference/Paper786/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper786/Authors"
],
[
"ICLR.cc/2018/Conference/Paper786/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper786/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper786/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"response 2\", \"comment\": \"I'd also appreciate a discussion of relationships between this approach and what is done in the active learning literature.\\n\\n=> Which work would you see as most similar to our work? I see CEGIS as most closely related to the line of work that asks for labels for the input that lies closest to the decision boundary when learning an SVM. However, I am in a setting where all labels are already given but there are too many to process. If you can give a few pointers/papers on what would be good related work in this space, it would be much appreciated. \\n\\n\\nIt's not generally scalable to build a neural network whose size scales with the number\\nof possible inputs. I can't see how this approach would be tractable in more standard program\\nsynthesis domains where inputs might be lists of arrays or strings, for example. It seems that this\\napproach only works due to the peculiarities of the formulation of the only task that is considered,\\nin which the program maps a pixel location in 32x32 images to a binary value.\\n\\n=> You are right. In the particular experiments we use a conv-net with a 7x7 window size, so it would scale to arbitrarily large images (to the point that the constraint synthesizer is the bottleneck). However, in general it is definitely true that such an encoding will not scale. We are working on an RNN architecture that does not take in the entire input space at once.\\n\\n\\n- This paper is poor in the reproducibility category. The architecture is never described,\\nit is light on details of the training objective, it's not entirely clear what the DSL used in the\\nexperiments is (is Figure 1 the DSL used in experiments), and it's not totally clear how the random\\nimages were generated (I assume values for the holes in Figure 1 were sampled from some\\ndistribution, and then the program was executed to generate the data?).\\n\\n=> We'll do a better job next time explaining the architecture and the DSL. 
The random images are generated by uniformly sampling integer values (between some range bounds) for the holes in Figure 1, and the draw program is executed to generate a 32x32 image.\\n\\n\\n- Experiments are only presented in one domain, and it has some peculiarities relative to \\nmore standard program synthesis tasks (e.g., it's tractable to enumerate all possible inputs). It'd\\nbe stronger if the approach could also be demonstrated in another domain.\\n\\n=> We do intend to take our work to a different domain and have some in mind. However, if you have any domain where you would like to see us try this approach, please let us know; it would be very instructive.\\n\\n\\n- Technical point: it's not clear to me that the training procedure as described is consistent\\nwith the desired objective in sec 3.3. Question for the authors: in the limit of infinite training\\ndata and model capacity, will the neural network training lead to a model that will reproduce the\\nprobabilities in 3.3?\\n\\n=> Yes it will. The neural network in that case would act like a \\\"soft\\\" dictionary of counts, keeping track of all the instances a new input x is mapped to y conditioned on all the past observed input/outputs. Thus, for the same reason the explicit count formulation approaches the desired probability, the neural network would as well.\", \"overall\": \"Need a better explanation of the neural network architecture; a new domain is needed (with a better architecture that can scale)\"}",
"{\"title\": \"Interesting formulation, but execution lets the paper down\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents a method for choosing a subset of examples on which to run a constraint solver\\nin order to solve program synthesis problems. This problem is basically active learning for\\nprogramming by example, but the considerations are slightly different than in standard active\\nlearning. The assumption here is that labels (aka outputs) are easily available for all possible\\ninputs, but we don't want to give a constraint solver all the input-output examples, because it will\\nslow down the solver's execution.\\n\\nThe main baseline technique CEGIS (counterexample-guided inductive synthesis) addresses this problem\\nby starting with a small set of examples, solving a constraint problem to get a hypothesis program,\\nthen looking for \\\"counterexamples\\\" where the hypothesis program is incorrect.\\n\\nThis paper instead proposes to learn a surrogate function for choosing which examples to select. The\\npaper isn't presented in exactly these terms, but the idea is to consider a uniform distribution\\nover programs and a zero-one likelihood for input-output examples (so observations of I/O examples\\njust eliminate inconsistent programs). We can then compute a posterior distribution over programs\\nand form a predictive distribution over the output for all the remaining possible inputs. The paper\\nsuggests always adding the I/O example that is least likely under this predictive distribution\\n(i.e., the one that is most \\\"surprising\\\").\\n\\nForming the predictive distribution explicitly is intractable, so the paper suggests training a\\nneural net to map from a subset of inputs to the predictive distribution over outputs. Results show\\nthat the approach is a bit faster than CEGIS in a synthetic drawing domain.\\n\\nThe paper starts off strong. 
There is a start at an interesting idea here, and I appreciate the\\nthorough treatment of the background, including CEGIS and submodularity as a motivation for doing\\ngreedy active learning, although I'd also appreciate a discussion of relationships between this approach \\nand what is done in the active learning literature. Once getting into the details of the proposed approach, \\nthe quality takes a downturn, unfortunately.\", \"main_issues\": \"- It's not generally scalable to build a neural network whose size scales with the number\\nof possible inputs. I can't see how this approach would be tractable in more standard program\\nsynthesis domains where inputs might be lists of arrays or strings, for example. It seems that this\\napproach only works due to the peculiarities of the formulation of the only task that is considered,\\nin which the program maps a pixel location in 32x32 images to a binary value.\\n\\n- It's odd to write \\\"we do not suggest a specific neural network architecture for the\\nmiddle layers, one should seelect whichever architecture that is appropriate for the domain at\\nhand.\\\" Not only is it impossible to reproduce a paper without any architectural details, but the\\nresult is then that Fig 3 essentially says inputs -> \\\"magic\\\" -> outputs. Given that I don't even\\nthink the representation of inputs and outputs is practical in general, I don't see what the \\ncontribution is here.\\n\\n- This paper is poor in the reproducibility category.
The architecture is never described,\\nit is light on details of the training objective, it's not entirely clear what the DSL used in the\\nexperiments is (is Figure 1 the DSL used in experiments), and it's not totally clear how the random\\nimages were generated (I assume values for the holes in Figure 1 were sampled from some\\ndistribution, and then the program was executed to generate the data?).\\n\\n- Experiments are only presented in one domain, and it has some peculiarities relative to \\nmore standard program synthesis tasks (e.g., it's tractable to enumerate all possible inputs). It'd\\nbe stronger if the approach could also be demonstrated in another domain.\\n\\n- Technical point: it's not clear to me that the training procedure as described is consistent\\nwith the desired objective in sec 3.3. Question for the authors: in the limit of infinite training\\ndata and model capacity, will the neural network training lead to a model that will reproduce the\\nprobabilities in 3.3?\", \"typos\": \"- The paper needs a cleanup pass for grammar, typos, and remnants like \\\"Figure blah shows our \\nneural network architecture\\\" on page 5.\", \"overall\": \"There's the start of an interesting idea here, but I don't think the quality is high enough\\nto warrant publication at this time.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"response 3\", \"comment\": \"Only 1024 examples are considered, which is by no means large.\\n\\n=> Indeed this is not large compared to a standard vision task, but if all are taken together, it can be quite significant for the constraint solver to reason with. We believe what you meant by \\u201cnot large\\u201d is in the sense that the entire _input space_ is quite small, and we do intend to address this problem so that the input-outputs are not total in the dataset, but rather a sample of input-outputs that lives in a much bigger space.\\n\\n\\nEven then, the authors' approach selects the highest number of\\nexamples (figure 4). CEGIS both selects fewer examples and has a shorter median\\ntime for complete synthesis. Intuitively, the authors' method should scale\\nbetter, but they fail to show this -- a missed opportunity to make the paper\\nmuch more compelling. This is especially true as a more challenging benchmark\\ncould be created very easily by simply scaling up the image.\\n\\n=> We tuned some weights and have better results (better median time):\\nhttps://imgur.com/a/JyZor\\n\\nOur approach selected examples in a way that the synthesizer returned a correct program with 0 or 1 additional CEGIS examples on top, meaning the original set of examples chosen by the NN forces a set of constraints strong enough that the correctly synthesized program cannot be ambiguous. However, a better metric would be to explicitly measure this ambiguity.\\n\\n\\nThe above heuristic is obviously specific to the domain, but similar\\nheuristics could be easily constructed for other domains. I feel that this is\\nsomething the authors should at least compare to in the empirical evaluation.\\n\\n=> We will incorporate the border heuristic as another baseline to compare against (one issue with this heuristic is that all border pixels are clearly a lot of input-output examples -- do you suggest keeping all of them? Or do you stop collecting at some point, and if so, what is a good stopping criterion if you intend to do so without any learning but rely on a heuristic?) We will include experiments from other domain(s) such that it will convince the reader that there will be cases where heuristics are hard to construct.\", \"overall\": \"Quantify the \\\"representativeness\\\" of the set of examples better, perhaps explicitly. Incorporate new domains and show that learning to select examples is more reasonable than hacking a heuristic for each domain.\"}",
"{\"title\": \"Good idea, but some misgivings about accepting in current state.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"General-purpose program synthesizers are powerful but often slow, so work that investigates means to speed them up is very much welcome\\u2014this paper included. The idea proposed (learning a selection strategy for choosing a subset of synthesis examples) is good. For the most part, the paper is clearly written, with each design decision justified and rigorously specified. The experiments show that the proposed algorithm allows a synthesizer to do a better job of reliably finding a solution in a short amount of time (though the effect is somewhat small).\\n\\nI do have some serious questions/concerns about this method:\\n\\nPart of the motivation for this paper is the goal of scaling to very large sets of examples. The proposed neural net setup is an autoencoder whose input/output size is proportional to the size of the program input domain. How large can this be expected to scale (a few thousand)? \\n\\nThe paper did not specify how often the neural net must be trained. Must it be trained for each new synthesis problem? If so, the training time becomes extremely important (and should be included in the \\u201cNN Phase\\u201d time measurements in Figure 4). If this takes longer than synthesis, it defeats the purpose of using this method in the first place.\\nAlternatively, can the network be trained once for a domain, and then used for every synthesis problem in that domain (i.e. in your experiments, training one net for all possible binary-image-drawing problems)? If so, the training time amortizes to some extent\\u2014can you quantify this?\\nThese are all points that require discussion which is currently missing from the paper.\\n\\nI also think that this method really ought to be evaluated on some other domain(s) in addition to binary image drawing.
The paper is not an application paper about inferring drawing programs from images; rather, it proposes a general-purpose method for program synthesis example selection. As such, it ought to be evaluated on other types of problems to demonstrate this generality. Nothing about the proposed method (e.g. the neural net setup) is specific to images, so this seems quite readily doable.\", \"overall\": \"I like the idea this paper proposes, but I have some misgivings about accepting it in its current state.\\n\\nWhat follows are comments on specific parts of the paper:\\n\\nIn a couple of places early in the paper, you mention that the neural net computes \\u201cthe probability\\u201d of examples. The probability of what? This was totally unclear until fairly deep into Section 3.\\n - Page 2: \\u201cthe neural network computes the probability for other examples not in the subset\\u201d\\n - Page 3: \\u201cthe probability of all the examples conditioned on\\u2026\\u201d\\n\\nOn a related note, I don\\u2019t like the term \\u201cSelection Probability\\u201d for the quantity it describes. This quantity is \\u2018the probability of an input being assigned the correct output.\\u2019 That happens to be (as you\\u2019ve proven) a good measure by which to select examples for the synthesizer. The first property (correctness) is a more essential property of this quantity, rather than the second (appropriateness as an example selection measure).\\n\\nPage 5: \\u201ca feed-forward auto-encoder with N input neurons\\u2026\\u201d Previously, N was defined as the size of the input domain. Does this mean that the network can only be trained when a complete set of input-output examples is available (i.e. outputs for all possible inputs in the domain)? Or is it fine to have an incomplete example set?\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting work, but underwhelming empirical evaluation.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes a method for identifying representative examples for program\\nsynthesis to increase the scalability of existing constraint programming\\nsolutions. The authors present their approach and evaluate it empirically.\\n\\nThe proposed approach is interesting, but I feel that the experimental section\\ndoes not serve to show its merits for several reasons. First, it does not\\ndemonstrate increased scalability. Only 1024 examples are considered, which is\\nby no means large. Even then, the authors' approach selects the highest number of\\nexamples (figure 4). CEGIS both selects fewer examples and has a shorter median\\ntime for complete synthesis. Intuitively, the authors' method should scale\\nbetter, but they fail to show this -- a missed opportunity to make the paper\\nmuch more compelling. This is especially true as a more challenging benchmark\\ncould be created very easily by simply scaling up the image.\\n\\nSecond, there is no analysis of the representativeness of the found sets of\\nconstraints. Given that the results are very close to other approaches, it\\nremains unclear whether they are simply due to random variations, or whether the\\nproposed approach actually achieves a non-random improvement.\\n\\nIn addition to my concerns about the experimental evaluation, I have concerns\\nabout the general approach. It is unclear to me that machine learning is the\\nbest approach for modeling and solving this problem. In particular, the\\nselection probability of any particular example could be estimated through a\\nheuristic, for example by simply counting the number of neighbouring examples\\nthat have a different color, weighted by whether they are in the set of examples\\nalready, to assess its \\\"borderness\\\", with high values being more important to\\nachieve a good program.
The border pixels are probably sufficient to learn the\\nprogram perfectly, and in fact this may be exactly what the neural net is\\nlearning. The above heuristic is obviously specific to the domain, but similar\\nheuristics could be easily constructed for other domains. I feel that this is\\nsomething the authors should at least compare to in the empirical evaluation.\\n\\nAnother concern is that the authors' approach assumes that all parameters have\\nthe same effect. Even for the example the authors give in section 2, it is\\nunclear that this would be true.\\n\\nThe text says that rand+cegis selects 70% of examples of the proposed approach,\\nbut figure 4 seems to suggest that the numbers are very close -- are these initial\\nexamples only?\\n\\nOverall the paper appears rushed -- the acknowledgements section is left over\\nfrom the template and there is a reference to figure \\\"blah\\\". There are typos and\\ngrammatical mistakes throughout the paper. The reference to \\\"Model counting\\\" is\\nincomplete.\\n\\nIn summary, I feel that the paper cannot be accepted in its current form.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"response 1\", \"comment\": \"Part of the motivation for this paper is the goal of scaling to very large sets of examples. The proposed neural net setup is an autoencoder whose input/output size is proportional to the size of the program input domain. How large can this be expected to scale (a few thousand)?\\n\\n=> This is a fair point. We also believe the current architecture is both badly explained and badly constructed for other kinds of tasks. We failed to mention that the particular architecture for the drawing example is a conv-net with a 7x7 window size, so there is an additional independence assumption based on location: pixel values far away from each other are uncorrelated. For that particular task, local information such as the shape of a line/square is already sufficient for picking good examples for synthesis, and this potentially scales well to very large images. We also hope to include experiments on a textual domain in the future, which will use a recurrent neural network architecture that sequentially processes the input-output rather than all at once.\\n\\n\\n\\nThe paper did not specify how often the neural net must be trained. Must it be trained for each new synthesis problem?\\n\\n=> It is trained once, as a kind of \\u201ccompilation\\u201d if you will, for a domain. Once trained, it can be used repeatedly without additional training.\\n\\n\\nI also think that this method really ought to be evaluated on some other domain(s) in addition to binary image drawing.\\n\\n=> Indeed! We really hoped for it too but could not quite get it working in time for the deadline. We agree that a general-purpose paper would benefit from additional domains. \\n \\n\\nOverall, the specific neural network architectures need to be better explained, with potentially a different architecture for a different domain to show that the approach can scale to large input spaces. We will take these suggestions to make the work more solid. Thanks!\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The reviewers were largely agreed that the paper presented an interesting idea and has potential but needs a better empirical evaluation. It seems that the authors largely agree and are working to improve it.\", \"pros\": \"1. Improving the speed of program synthesis is a useful problem\\n2. Good treatment of related work, e.g. CEGIS\", \"cons\": \"1. The approach likely does not scale\\n2. The architecture is underspecified making it hard to reproduce\\n3. Only 1 domain for evaluation\"}"
]
} |
SJQO7UJCW | Adversarial Learning for Semi-Supervised Semantic Segmentation | [
"Wei-Chih Hung",
"Yi-Hsuan Tsai",
"Yan-Ting Liou",
"Yen-Yu Lin",
"Ming-Hsuan Yang"
] | We propose a method for semi-supervised semantic segmentation using the adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve the performance on semantic segmentation by coupling the adversarial loss with the standard cross entropy loss on the segmentation network. In addition, the fully convolutional discriminator enables the semi-supervised learning through discovering the trustworthy regions in prediction results of unlabeled images, providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images without any annotation to enhance the segmentation model. Experimental results on both the PASCAL VOC 2012 dataset and the Cityscapes dataset demonstrate the effectiveness of our algorithm. | [
"semantic segmentation",
"adversarial learning",
"semi-supervised learning",
"self-taught learning"
] | Reject | https://openreview.net/pdf?id=SJQO7UJCW | https://openreview.net/forum?id=SJQO7UJCW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BJJhDZakf",
"BkXRaopxz",
"ByPmwO0bf",
"HkbN6Aplz",
"BJ_zPCggz",
"SJRdYLhgM",
"H1Op4eqlM",
"B1exQRalz",
"HkbyHyaBM",
"ByN58_AWf",
"B1B1w_AWf",
"r1RDwROeG",
"B1YQsg61M",
"H1wcIfh1z",
"HydNvqQgf"
],
"note_type": [
"comment",
"comment",
"official_comment",
"comment",
"comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_comment"
],
"note_created": [
1510967207421,
1512058314610,
1513158431266,
1512070441173,
1511216912322,
1511971190420,
1511814336091,
1512067816425,
1517249752573,
1513158284506,
1513158373205,
1511741286155,
1510964000774,
1510905486862,
1511397168421
],
"note_signatures": [
[
"~Mohit_Sharma2"
],
[
"~Mohit_Sharma2"
],
[
"ICLR.cc/2018/Conference/Paper125/Authors"
],
[
"~Mohit_Sharma2"
],
[
"~Mohit_Sharma2"
],
[
"ICLR.cc/2018/Conference/Paper125/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper125/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper125/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper125/Authors"
],
[
"ICLR.cc/2018/Conference/Paper125/Authors"
],
[
"ICLR.cc/2018/Conference/Paper125/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper125/Authors"
],
[
"~Mohit_Sharma2"
],
[
"ICLR.cc/2018/Conference/Paper125/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Segmentation Network details\", \"comment\": \"Thanks for your reply. I'll get back to you if I need more help with the experiments.\"}",
"{\"title\": \"Adversarial Training\", \"comment\": \"Thanks for your comments.\\n\\nI was working on stabilizing the GAN training. I couldn't reproduce a significant improvement in mIoU by incorporating adversarial training. I was only able to go up from 68.86% to 68.96% for one of the baseline models. From my side, I have tried to include all the details from the paper. \\n\\nThis is my training scheme if you want to have a look: https://gist.github.com/mohitsharma916/c950864e68f719d69a4fbcae3077cf8f\\n\\nand the complete implementation is here:\\nhttps://github.com/mohitsharma916/Adversarial-Semisupervised-Semantic-Segmentation\\n\\nIn the meantime, I will move on to the semi-supervised training.\\n\\nLooking forward to getting my hands on your implementation to see what I missed. Thanks again for your work.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the comments and address the raised questions below.\\n\\nQ1. Why do the authors present two largely independent ideas?\\n\\nThe novelty of this work is to incorporate adversarial learning for dense predictions under the semi-supervised setting without image synthesis. The adversarial learning and semi-supervised learning are not independent in our work. Without the successfully trained discriminator network, the proposed semi-supervised learning does not work well. The ablation study in Table 6 shows that without the adversarial loss, the discriminator would treat most of the predicted pixels as having low confidence, providing noisy masks and leading to degenerated performance (drops from 68.8% to 65.7%).\\n\\nQ2. Why don\\u2019t the authors use the full DeepLab model?\\n\\nWe implement our baseline model based on DeepLab in PyTorch for the flexibility in training the adversarial network. We did not use the multi-scale mode in DeepLab due to the memory concern in section 4.2, in which modern GPU cards such as the Nvidia Titan X with 12 GB of memory cannot accommodate training the network with a proper batch size. Although this issue may be addressed by the accumulated gradient (e.g., iter_size in Caffe), in PyTorch the accumulated gradient implementation still has issues (ref: https://discuss.pytorch.org/t/how-to-implement-accumulated-gradient/3822/12). We have also verified that it does not work in the current PyTorch version.\\n\\nHowever, the main point of the paper is to demonstrate the effectiveness of the proposed method against our baseline model, as shown in Tables 1 and 2. In fact, our baseline model already performs better than other existing works in Tables 3 and 4.\"}",
"{\"title\": \"Adversarial Training\", \"comment\": \"Oh! That makes total sense. Thanks a lot for taking the time to go through my code.\\nI will make the change. \\n\\nAlso, did you use any strategies like one-sided label smoothing, label flipping, etc. for stabilizing the GAN training? Or should it work with the settings mentioned in the paper?\"}",
"{\"title\": \"Adversarial Training Setup\", \"comment\": \"Based on your suggestions, I changed my upsampling layers from learnable transposed convolution to simple bilinear upsampling and achieved an mIoU of 69.78. (As far as I know, now the only difference I have from your submission is using MS COCO pre-trained weights for the segmentation network instead of ImageNet. I think I have a good enough baseline to continue to the adversarial and semi-supervised training and see if I get a boost by incorporating them on top of my current baseline.) I feel that, because the choice of the upsampling method was so critical in achieving the reported performance of the segmentation network, it would be really helpful if this detail were included in the paper. Anyways, thanks again for giving out the details.\\n\\nI would like to ask a few things about the adversarial training used in the paper. \\n\\n1> What scheme did you use for the adversarial training?\\n\\nMy current idea is something along this line: Take a minibatch of the training set. Perform one forward pass of the segmentation network on this minibatch and update the segmentation-network parameters. For the discriminator, calculate the discriminator loss on the class-probability map produced by the segmentation network for the current mini-batch. Then, calculate the discriminator loss on the ground-truth label for the same minibatch. Aggregate the two losses (sum or mean?) and update the discriminator parameters.\\n\\n2> I am not sure about the parameters for the discriminator optimizer. Did you use Nesterov acceleration with Adam? What is the weight decay used (same as the generator?)? (I only have a superficial understanding of the Adam optimizer, so I might be missing something obvious.)\\n\\nThanks.\"}",
"{\"title\": \"review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper presents an alternative adversarial loss function for image segmentation, and an additional loss for unlabeled images.\\n\\n+ well written\\n+ good evaluation\\n+ good performance compared to prior state of the art\\n- technical novelty\\n- semi-supervised loss does not yield significant improvement\\n- missing citations and comparisons\\n\\nThe paper is well written, structured, and easy to read.\\nThe experimental section is extensive, and shows a significant improvement over prior state of the art in semi-supervised learning.\\nUnfortunately, it is unclear what exactly led to this performance increase. Is it a better baseline model? Is the algorithm tuned better, or is there something fundamentally different compared to prior work (e.g. Luc 2016)?\\n\\nFinally, it would help if the authors could highlight their technical differences compared to prior work. The presented adversarial loss is similar to Luc 2016 and \\\"Image-to-Image Translation with Conditional Adversarial Networks, Isola et al. 2017\\\". What is different, and why is it important?\\nThe semi-supervised loss is similar to Pathak 2015a; it would help to highlight the difference, and show experimentally why it matters.\\n\\nIn summary, the authors should highlight the differences from prior work, and show why the proposed changes matter.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"No title\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposed an approach for semi-supervised semantic segmentation based on adversarial training. Built upon a popular segmentation network, the paper integrated an adversarial loss to incorporate unlabeled examples in training. The outputs from the discriminator are interpreted as indicators for the reliability of label prediction, and used to filter out non-reliable predictions as augmented training data from unlabeled images. The proposed method achieved consistent improvement over existing state-of-the-art on two challenging segmentation datasets.\\n\\nAlthough the motivation is reasonable and the results are impressive, there are some parts that need more clarification/discussion as described below.\\n\\n1) Robustness of discriminator output:\\nThe main contribution of the proposed model is exploiting the outputs from the discriminator as the confidence score maps of the predicted segmentation labels. However, the outputs from the discriminator indicate whether its inputs are from ground-truth labels or model predictions, and may not be directly related to \\u2018correctness\\u2019 of the label prediction. For instance, it may prefer per-pixel score vectors close to one-hot encoded vectors. More thorough analysis/discussion is required to show how outputs from the discriminator are correlated with the correctness of label prediction. \\n\\n2) Design of discriminator\\nI wonder if a conditional discriminator fits the task better, i.e., D(X,P) instead of D(P). It may prevent the model from generating label predictions P non-relevant to the input X by adversarial training, and would make the score prediction from the discriminator more meaningful. Some ablation study or discussion would be helpful.\\n\\n3) Presentation\\nThere are several abused notations; notations for the ground-truth label P and the prediction from the generator S(X) should be clearly separated in Eq. (1) and (4). Also, it would be better to find a better notation for the outputs from D instead of D^(*,0) and D^(*,1). \\nTraining details in semi-supervised learning would be helpful. For instance, the proposed semi-supervised learning strategy based on Eq. (5) may suffer from noisy outputs from the discriminator in early training stages. I wonder how the authors resolved these issues (e.g., training the generator and discriminator with the labeled examples first and then extending to training with unlabeled data).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Adversarial Training\", \"comment\": \"Hi Mohit,\\n\\nI found an issue with your implementation. When generating the probability maps, we use SoftMax() instead of LogSoftmax(). If you use LogSoftmax(), the output range will not be 0-1, and the discriminator could easily judge whether the input comes from ground truth or prediction. You can observe whether the loss of the discriminator is stabilized or not. In our case, the discriminator loss ranges from 0.2-0.4 throughout the training process.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper presents a reasonable idea, probably an improved version of existing methods (a combination of GAN and SSL for semantic segmentation). Novelty is not ground-breaking (e.g., a discriminator network taking only pixel-labeling predictions, application of self-training for semantic segmentation---each of these components is not highly novel by itself). It looks like a well-engineered model that manages to get a small improvement in a semi-supervised learning setting. However, given that the focus of the paper is on semi-supervised learning, the improvement from the proposed loss (L_semi) is fairly small (0.4-0.8%).\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the comments and address the raised questions below.\\n\\nQ1. What is the major novelty of this work?\\n\\nThe novelty of this work is to incorporate adversarial learning for dense predictions under the semi-supervised setting without image synthesis. To facilitate the semi-supervised learning, we propose a fully-convolutional discriminator network that provides confident predictions spatially for training the segmentation network, thereby allowing us to better model the uncertainty of unlabeled images at the pixel level. Our model achieves improvement over the baseline model by incorporating this semi-supervised strategy.\\n\\nQ2. What are the major differences between this work and Luc2016?\\n\\nThe major differences between our work and Luc2016 are listed below:\\n- We propose a unified discriminator network structure for various datasets, while Luc2016 designs one network for each dataset.\\n- We show that the simplest one-hot encoding of ground truth works well with adversarial learning. The \\u201cscale\\u201d encoding proposed in Luc2016 does not lead to a performance gain in our experiments.\\n- We propose a semi-supervised method coupled with adversarial learning using unlabeled data.\\n- We conduct extensive parameter analysis on both adversarial learning and semi-supervised learning, showing that our proposed method performs favorably against Luc2016 with the proper balance between the supervised loss, adversarial loss, and semi-supervised loss. \\n\\nQ3. Differences between this work and Pix2Pix (Isola 2017)?\\n\\nOur discriminator network works on the probability space, while Pix2Pix and other GAN works are on the RGB space. In addition, the target task of Pix2Pix is image translation, and ours is semantic segmentation.\\n\\nQ4. Difference between this work and constrained CNN (Pathak 2015a)?\\n\\nIn Constrained CNN (CCNN), the setting is weak supervision, where image labels are required during training. In our work, we use completely unlabeled images in a semi-supervised setting. Thus, the constraints used by CCNN are not applicable to our scenario where image labels are not available. \\n\\nIn CCNN, they design a series of linear constraints on the label maps, such as those on the segment size and foreground/background ratio, to iteratively re-train the segmentation network. Our framework is more general than CCNN in the sense that we do not impose any hand-designed constraints that need careful designs for specific datasets. Take the Cityscapes dataset as an example: the fg/bg constraint in CCNN does not work in this dataset since there is no explicit background label. The minimum segment size constraint does not make sense either, especially for thin and small objects that frequently appear in road scenes. In contrast, we propose a discriminator with adversarial learning to automatically generate the confidence maps, thereby providing useful information to train the segmentation network using unlabeled data.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"We thank the comments and address the raised questions below.\\n\\nQ1. How are outputs from the discriminator correlated with the correctness of label prediction?\\n\\nT_semi, # of Selected Pixels (%), Average Pixel Accuracy (%)\\n0, 100%, 92.65%\\n0.1, 36%, 99.84%\\n0.2, 31%, 99.91%\\n0.3, 27%, 99.94% \\n\\nIn the above table on the Cityscapes dataset, we show the average numbers and the average accuracy rates of the selected pixels with different values of T_semi as in (5) of the paper. With a higher T_semi, the discriminator outputs are more confident (similar to ground truth label distributions) and lead to more accurate pixel predictions. Also, as a trade-off, the higher the threshold (T_semi), the fewer pixels are selected for back-propagation. This trade-off could also be observed in Table 5 in the paper. We will add more analysis to the paper.\\n\\nQ2. What\\u2019s the performance of D(X,P) compared to D(P)?\\n\\nWe conduct the experiment using D(X,P) instead of D(P) by concatenating the RGB channels with the class probability maps as the input to the discriminator. However, the performance drops to 72.6% on the PASCAL dataset (baseline: 73.6%). We observe that the discriminator loss stays high during the optimization process and could not produce meaningful gradients. One reason could be that the RGB distributions between real and fake ones are highly similar, and adding this extra input could lead to optimization difficulty for the discriminator network. Therefore, it is reasonable to let the segmentation network consider RGB inputs for segmentation predictions, while the discriminator focuses on distinguishing label distributions. Note that, similarly, the discriminator structure in Luc2016 on PASCAL does not include RGB images as inputs. We will add more results and discussions in the paper.\\n\\nQ3. The notation P in (1) and (4) is not clear.\\n\\nThanks for the recommendation. 
We revise (1) and (4) in the paper for better presentation.\\n\\nQ4. What are the training details in semi-supervised learning?\\n\\nWe include the details of the semi-supervised training algorithm in the revised paper. As the reviewer points out, initial inputs may be noisy, and we tackle this issue by applying semi-supervised learning after 5k iterations.\"}",
"{\"title\": \"not enough for a first-tier conference\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper describes techniques for training semantic segmentation networks. There are two key ideas:\\n\\n- Attach a pixel-level GAN loss to the output semantic segmentation map. That is, add a discriminator network that decides whether each pixel in the label map belongs to a real label map or not. Of course, this loss alone is unaware of the input image and would drive the network to produce plausible label maps that have no relation to the input image. An additional cross-entropy loss (the standard semantic segmentation loss) is used to tie the network to the input and the ground-truth label map, when available.\\n\\n- Additional unlabeled data is utilized by using a trained semantic segmentation network to produce a label map with associated confidences; high-confidence pixels are used as ground-truth labels and are fed back to the network as training data.\\n\\nThe paper is fine and the work is competently done, but the experimental results never quite come together. The technical development isn\\u2019t surprising and doesn\\u2019t have much to teach researchers working in the area. Given that the technical novelty is rather light and the experimental benefits are not quite there, I cannot recommend the paper for publication in a first-tier conference.\", \"some_more_detailed_comments\": \"1. The GAN and the semi-supervised training scheme appear to be largely independent. The GAN can be applied without any unlabeled data, for example. The paper generally appears to present two largely independent ideas. This is fine, except they don\\u2019t convincingly pan out in experiments.\\n\\n2. The biggest issue is that the experimental results do not convincingly indicate that the presented ideas are useful.\\n2a. 
In the \\u201cFull\\u201d condition, the presented approach does not come close to the performance of the DeepLab baseline, even though the DeepLab network is used in the presented approach. Perhaps the authors have taken out some components of the DeepLab scheme for these experiments, such as multi-scale processing, but the question then is \\u201cWhy?\\u201d. These components are not illegal, they are not cheating, they are not overly complex and are widely used. If the authors cannot demonstrate an improvement with these components, their ideas are unlikely to be adopted in state-of-the-art semantic systems, which do use these components and are doing fine.\\n2b. In the 1/8, 1/4, and 1/2 conditions, the performance of the baselines is not quoted. This is wrong. Since the authors are evaluating on the validation sets, there is no reason not to train the baselines on the same amount of labeled data (1/8, 1/4, 1/2) and report the results. The training scripts are widely available and such training of baselines for controlled experiments is commonly done in the literature. The reviewer is left to suspect, with no evidence given to the contrary, that the presented approach does not outperform the DeepLab baseline even in the reduced-data conditions.\\n\\nA somewhat unflattering view of the work would be that this is another example of throwing a GAN at everything to see if it sticks. In this case, the experiments do not indicate that it did.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Segmentation Network details\", \"comment\": \"Hi Mohit,\\n\\nThanks for your interest in our work. Here are some details that can help you reproduce our baseline:\\n\\n1. Upsampling module: We use 2D bilinear upsampling in our segmentation model (essentially nn.upsample in PyTorch). We use one upsampling module with 8x instead of using three 2x layers. In your case, it would be equivalent to 3 ConvTranspose2D layers with their coefficients initialized as the bilinear kernel with zero learning rate. Intuitively, having upsampling layers with learnable parameters might have better performance due to larger model capacity. But in both our experiments and the original FCN paper from Long et al., learning the upsampling does not show significant improvement but introduces much computational overhead in the training process.\\n\\n2. As mentioned in the paper, we use the Resnet-101 model that is pretrained on ImageNet. We use the same mean and variance for data normalization as during pretraining. If you choose to use the torchvision models from PyTorch, the standard data processing transforms are listed in their official docs.\\n\\nWe hope this information helps you in your experiments. Let us know if you encounter any issues. Good luck on the challenge!\"}",
"{\"title\": \"Segmentation Network details\", \"comment\": \"Thanks a lot for your work. I was trying to reproduce the results of your submission as part of the Reproducibility Challenge. For the baseline model, I have achieved a 52% mIoU so far. I would like to clarify a few details that might be helpful in replicating the results:\\n\\n1> What method have you used during training for upsampling the output map of the DeepLab-v2 network to size 321x321 (input image size for training in PASCAL VOC)? Currently, I have 3 ConvTranspose2D layers (corresponding to each downsampling layer in the DeepLab-v2 network), each upsampling by a factor of 2. \\n\\n2> Did you use any other common data preprocessing (like normalization to 0 mean and 1 variance)?\\n\\nIs there any other significant detail that would be helpful in improving the results to match those in the paper?\\n\\nThanks again for your work.\"}",
"{\"title\": \"Adversarial Training Setup\", \"comment\": \"Hi Mohit,\\n\\nThanks for the suggestion. We will add the upsampling details in the following revision. For your information, we will release the source code after the review process.\", \"regarding_your_questions\": \"1. Yes, we think the way you are implementing it is the same as ours.\\n\\n2. Yes, the weight decay/momentum of the discriminator are the same as for the generator.\\n\\nThanks.\"}"
]
} |
ryZERzWCZ | The Information-Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Modeling | [
"Shengjia Zhao",
"Jiaming Song",
"Stefano Ermon"
] | A variety of learning objectives have been recently proposed for training generative models. We show that many of them, including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, VAE, $\beta$-VAE, adversarial autoencoders, AVB, and InfoVAE, are Lagrangian duals of the same primal optimization problem. This generalization reveals the implicit modeling trade-offs between flexibility and computational requirements being made by these models. Furthermore, we characterize the class of all objectives that can be optimized under certain computational constraints.
Finally, we show how this new Lagrangian perspective can explain undesirable behavior of existing methods and provide new principled solutions. | [
"Generative Models",
"Variational Autoencoder",
"Generative Adversarial Network"
] | Reject | https://openreview.net/pdf?id=ryZERzWCZ | https://openreview.net/forum?id=ryZERzWCZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BJ8bKuOlM",
"S1ufxZqlG",
"BJbD8ypBG",
"SJ2PZA-XM",
"B1A1_t67z",
"HJqtmElmf",
"SkugmHtgf",
"rkckcaWXM",
"BycXcabXz"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1511717117835,
1511817231988,
1517250136697,
1514426723849,
1515194342224,
1514320769673,
1511768815907,
1514424801689,
1514424865958
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper1045/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper1045/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper1045/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper1045/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1045/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1045/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper1045/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1045/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Contains some interesting results but the presentation is not focused\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Thank you for the feedback, I have read it.\\n\\nI do think that developing unifying frameworks is important. But not all unifying perspective is interesting; rather, a good unifying perspective should identify the behaviour of existing algorithms and inspire new algorithms.\\n\\nIn this perspective, the proposed framework might be useful, but as noted in the original review, the presentation is not clear, and it's not convincing to me that the MI framework is indeed useful in the sense I described above.\\n\\nI think probably the issue is the lack of good evaluation methods for generative models. Test-LL has no causal relationship to the quality of the generated data. So does MI. So I don't think the argument of preferring MI over MLE is convincing.\\n\\nSo in summary, I will still keep my original score. I think the paper will be accepted by other venues if the presentation is improved and the advantage of the MI perspective is more explicitly demonstrated.\\n\\n==== original review ====\\n\\nThank you for an interesting read.\\n\\nThe paper presented a unifying framework for many existing generative modelling techniques, by first considering constrained optimisation problem of mutual information, then addressing the problem using Lagrange multipliers.\\n\\nI see the technical contribution to be the three theorems, in the sense that it gives a closure of all possible objective functions (if using the KL divergences). This can be useful: I'm tired of reading papers which just add some extra \\\"regularisation terms\\\" and claim they work. I did not check every equation of the proof, but it seems correct to me.\\n\\nHowever, an imperfection is, the paper did not provide a convincing explanation on why their view should be preferred compared to the original papers' intuition. 
For example in VAE case, why this mutual information view is better than the traditional view of approximate MLE, where q is known to be the approximate posterior? A better explanation on this (and similarly for say infoGAN/infoVAE) will significantly improve the paper.\\n\\nContinuing on the above point, why in section 4 you turn to discuss relationship between mutual information and test-LL? How does that relate to the main point you want to present in the paper, which is to prefer MI interpretation if I understand it correctly?\", \"term_usage\": \"we usually *maximize* the ELBO and *minimise* the variational free-energy (VFE).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Not clear what specific insights exist or what problem this solves\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"EDIT: I have read the authors' rebuttals and other reviews. My opinion has not been changed. I recommend the authors significantly revise their work, streamlining the narrative and making clear what problems and solutions they solve. While I enjoy the perspective of unifying various paths, it's unclear what insights come from a simple reorganization. For example, what new objectives come out? Or given this abstraction, what new perspectives or analysis is offered?\\n\\n---\\n\\nThe authors propose an objective whose Lagrangian dual admits a variety of modern objectives from variational auto-encoders and generative adversarial networks. They describe tradeoffs between flexibility and computation in this objective leading to different approaches. Unfortunately, I'm not sure what specific contributions come out, and the paper seems to meander in derivations and remarks that I didn't understand what the point was.\\n\\nFirst, it's not clear what this proposed generalization offers. It's a very nuanced and not insightful construction (eq. 3) and with a specific choice of a weighted sum of mutual informations subject to a combinatorial number of divergence measure constraints, each possibly held in expectation (eq. 5) to satisfy the chosen subclass of VAEs and GANs; and with or without likelihoods (eq. 7). What specific insights come from this that isn't possible without the proposed generalization?\\n\\nIt's also not clear with many GAN algorithms that reasoning with their divergence measure in the limit of infinite capacity discriminators is even meaningful (e.g., Arora et al., 2017; Fedus et al., 2017). It's only true for consistent objectives such as MMD-GANs.\\n\\nSection 4 seems most pointed in explaining potential insights. 
However, it only introduces hyperparameters and possible combinatorial choices with no particular guidance in mind. For example, there are no experiments demonstrating the usefulness of this approach except for a toy mixture of Gaussians and binarized MNIST, explaining what is already known with the beta-VAE and infoGAN. It would be useful if the authors could make the paper overall more coherent and targeted to answer specific problems in the literature rather than try to encompass all of them.\\n\\nMisc\\n+ The \\\"feature marginal\\\" is also known as the aggregate posterior (Makhzani et al., 2015) and average encoding distribution (Hoffman and Johnson, 2016); also see Tomczak and Welling (2017).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper provides a constrained mutual information objective function whose Lagrangian dual covers several existing generative models. However reviewers are not convinced of the significance or usefulness of the proposed unifying framework (at least from the way results are presented currently in the paper). Authors have not taken any steps towards revising the paper to address these concerns. Improving the presentation to bring out the significance/utility of the proposed unifying framework is needed.\"}",
"{\"title\": \"thank you for your feedback\", \"comment\": \"Thank you for your feedback.\\n\\nCould you add experiments that optimise the Lagrange multiplier as well? It would help strengthen the paper.\"}",
"{\"title\": \"Solution by Bounding Mutual Information\", \"comment\": \"Thank you for your comment. The solution proposed in our paper is to bound the mutual information rather than direct optimization of the Lagrangian multipliers. Direct maximization would lead to maximizing it to infinity for infeasible problems. Our experiments show that bounding the mutual information can solve the problem: as soon as mutual information reaches the preset bound, log likelihood starts to improve.\"}",
"{\"title\": \"Clarification on Significance\", \"comment\": \"We thank the reviewers for their time and valuable feedback.\\n\\n\\u201cIt would be useful if the authors could make the paper overall more coherent and targeted to answer specific problems in the literature rather than try to encompass all of them.\\u201d\\n\\nWe respectfully disagree. We strongly believe that identifying connections between existing methods and developing general frameworks and theories that encompass as many existing methods as possible is a fundamental scientific goal. Machine learning research is not only about developing new methods and beating benchmarks, but also achieving a deeper understanding of the strengths, weaknesses, and relationships of existing techniques. \\n\\n\\n\\u201cWhat specific insights come from this that isn't possible without the proposed generalization?\\u201d\\n\\nBeyond providing an organizational principle for learning objectives (highlighting their information maximization/minimization properties and trade-offs between computational requirements and flexibility) our new perspective is useful for several reasons:\\n\\n1. We are able to characterize **all** learning objectives that can be optimized under given computational constraints (likelihood based optimization; unary likelihood free optimization; binary likelihood free optimization) providing a \\u201cclosure\\u201d result. Even though we do not introduce a new learning objective, we show that (slightly generalized versions) of ten (already known) \\u201cbase classes\\u201d encompass all possible objectives in each category. Therefore, in a certain sense, we show that there do not exist \\u201cnew\\u201d objectives under our stated assumptions on how objectives can be constructed. \\n\\n2. We show that several problems are revealed by the Lagrangian perspective and hold across the entire model family: \\n\\na. 
Correct optimization of the Lagrangian dual requires maximization over the Lagrangian parameters. However, all existing methods use fixed (arbitrarily chosen) Lagrangian parameters. We show failure cases where this does not correctly optimize the primal problem. For example, when the primal objective is information maximization under constraints of distributional consistency, optimization with fixed Lagrangian parameters can maximize mutual information indefinitely without ever encouraging distributional consistency. As a result, data fit (distributional consistency) may even get worse during training (for example, resulting in lower test log likelihood) as mutual information is maximized. We show that this also happens in practice. \\n\\nb. The Lagrangian perspective allows us to explicitly weight (\\u201cprice\\u201d) different (conflicting) terms in the objective. For example, suppose the input x has more dimensions than the feature space z. Then for the same per-dimension loss, the input space is weighted more than the latent space (because it has more dimensions). We show in the paper that increasing the weight on matching marginals on z can solve the problem and leads to better performance. In general, we can write out the desired preference in Lagrangian form, and then convert it into a familiar model and optimization method (in our example, this corresponds to InfoVAE with a specific hyper-parameter choice.)\"}",
"{\"title\": \"Good framework for learning generative models, but significance/consequence of the results is unclear\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Update after rebuttal\\n==========\\nThanks for your response to my questions. The stated usefulness of the method unfortunately does not answer my worry about the significance. It remains unclear to me how much \\\"real\\\" difference the presented results would make to advance the existing work on generative models. Also, the authors did not promise any major changes in the final version in this direction, which is why I have reduced my score.\\n\\nI do believe that this work could be useful and should be resubmitted. There are two main things to improve. First, the paper needs more work on improving the clarity. Second, more work needs to be added to show that the paper will make a real difference to advance/improve existing methods.\\n\\n==========\\nBefore rebuttal\\n==========\\nThis paper proposes an optimization problem whose Lagrangian duals contain many existing objective functions for generative models. Using this framework, the paper tries to generalize the optimization problems by defining a computationally-tractable family which can be expressed in terms of existing objective functions. \\n\\nThe paper has interesting elements and the results are original. The main issue is that the significance is unclear. The writing in Section 3 is unclear to me, which further made it challenging to understand the consequences of the theorems presented in that section. \\n\\nHere is a big-picture question that I would like to know the answer to. Do the results of Sec. 3 help us identify a more useful/computationally tractable model than existing approaches? Clarification on this will help me evaluate the significance of the paper.\\n\\nI have three main clarification points. First, what is the importance of T1, T2, and T3 classes defined in Def. 
7, i.e., why are these classes useful in solving some problems? Second, is the opposite relationship in Theorem 1, 2, and 3 true as well, e.g., is every linear combination of beta-ELBO and VMI equivalent to a likelihood-based computable-objective of KL info-encoding family? Is the same true for other theorems?\\n\\nThird, the objective of section 3 is to show that \\\"only some choices of lambda lead to a dual with a tractable equivalent form\\\". Could you rewrite the theorems so that they truly reflect this, rather than stating something which only indirectly implies the main claim of the paper.\", \"some_small_comments\": [\"Eq. 4. It might help to define MI to remind readers.\", \"After Eq. 7, please add a proof (maybe in the Appendix). It is not that straightforward to see this. Also, I suppose you are saying Eq. 3 but with f from Eq. 4.\", \"Line after Eq. 8, D_i is \\\"one\\\" of the following... Is it always the same D_i for all i or could it be different? Make this more clear to avoid confusion.\", \"Last line in Para after Eq. 15, \\\"This neutrality corresponds to the observations made in..\\\" It might be useful to add a line explaining that particular \\\"observation\\\"\", \"Def. 7, the names did not make much sense to me. You can add a line explaining why this name is chosen.\", \"Def. 8, the last equation is unclear. Does the first equivalence imply the next one?\", \"Writing in Sec. 3.3 can be improved. e.g., \\\"all linear operations on log prob.\\\" is very unclear, \\\"stated computational constraints\\\" which constraints?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Clarification on the Main Concerns\", \"comment\": \"We thank the reviewers for their time and valuable feedback.\\n\\n\\u201cThe main issue is that the significance is unclear.\\u201d\\n\\nBeyond providing an organizational principle for learning objectives (highlighting their information maximization/minimization properties and trade-offs between computational requirements and flexibility) our new perspective is useful for several reasons (Sections 3 and 4):\\n\\n1. We are able to characterize **all** learning objectives that can be optimized under given computational constraints (likelihood based optimization; unary likelihood free optimization; binary likelihood free optimization) providing a \\u201cclosure\\u201d result. Even though we do not introduce a new learning objective, we show that (slightly generalized versions) of ten (already known) \\u201cbase classes\\u201d encompass all possible objectives in each category. Therefore, in a certain sense, we show that there do not exist \\u201cnew\\u201d objectives under our stated assumptions on how objectives can be constructed. \\n\\n2. We show that several known problems are revealed by the Lagrangian perspective and hold across the entire model family: \\n\\na. Correct optimization of the Lagrangian dual requires maximization over the Lagrangian parameters. However, all existing methods use fixed (arbitrarily chosen) Lagrangian parameters. We show failure cases where this does not correctly optimize the primal problem. For example, when the primal objective is information maximization under constraints of distributional consistency, optimization with fixed Lagrangian parameters can maximize mutual information indefinitely without ever encouraging distributional consistency. As a result, data fit (distributional consistency) may even get worse during training (for example, resulting in lower test log likelihood) as mutual information is maximized. We show that this also happens in practice. 
\\n\\nb. The Lagrangian perspective allows us to explicitly weight (\\u201cprice\\u201d) different (conflicting) terms in the objective. For example, suppose the input x has more dimensions than the feature space z. Then for the same per-dimension loss, the input space is weighted more than the latent space (because it has more dimensions). We show in the paper that increasing the weight on matching marginals on z can solve the problem and leads to better performance. In general, we can write out the desired preference in Lagrangian form, and then convert it into a familiar model and optimization method (in our example, this corresponds to InfoVAE with a specific hyper-parameter choice.)\\n\\n\\u201cWhat is the importance of T1, T2, and T3 classes defined in Def. 7, i.e., why are these classes useful in solving some problems?\\u201d\\n\\nIt has been observed experimentally that T1, T2, and T3 are increasingly more challenging in terms of optimization stability, sensitivity to hyper-parameters, and outcome of optimization (Arjovsky et al., 2017). In particular, T1 (likelihood based, e.g. VAE) is highly stable and converges quickly, while T2/T3 methods (such as GANs) suffer from issues such as optimization instability and non-convergence. T3 is slightly more challenging than T2 because BiGAN/ALI (Dumoulin et al., 2016a; Donahue et al., 2016) tend to suffer from inaccurate inference. \\n\\n\\u201cIs the opposite relationship in Theorem 1, 2, and 3 true as well, e.g., is every linear combination of beta-ELBO and VMI is equivalent to a likelihood-based computable-objective of KL info-encoding family? Is the same true for other theorems?\\u201d\\n\\n\\nYes, the opposite relationship is true as well. The existing objectives enumerated in Theorem 1, 2, 3 are exactly equivalent to T1/T2/T3 computable objectives respectively.\\n\\n\\u201cThird, the objective of section 3 is to show that only some choices of lambda lead to a dual with a tractable equivalent form. 
Could you rewrite the theorems so that they truly reflect this, rather than stating something which only indirectly imply the main claim of the paper.\\u201d\\n\\nThe statement we supported with Theorems 1/2/3 is: only some parameter choices lead to objectives in each of the computability classes T1/T2/T3 (easy vs hard to optimize). For example, only parameter choices that correspond to beta-VAE/VMI can have a likelihood-based computable equivalent form. Most objectives cannot be equivalently transformed to become a likelihood-based computable objective. We have revised the paper to make the statement more clear. \\n\\n\\u201cSome small comments\\u201d\\n\\nThank you. We have revised the writing according to the advice.\"}",
"{\"title\": \"Clarification on Main Concerns\", \"comment\": \"We thank the reviewers for their time and valuable feedback.\\n\\n\\u201cHowever, an imperfection is, the paper did not provide a convincing explanation on why their view should be preferred compared to the original papers' intuition. For example in VAE case, why this mutual information view is better than the traditional view of approximate MLE, where q is known to be the approximate posterior? A better explanation on this (and similarly for say infoGAN/infoVAE) will significantly improve the paper. Continuing on the above point, why in section 4 you turn to discuss relationship between mutual information and test-LL? How does that relate to the main point you want to present in the paper, which is to prefer MI interpretation if I understand it correctly?\\u201d\\n\\nOur view (optimize mutual information under distribution matching constraint) provides several insights that traditional perspectives do not provide. First, several attributes of an objective are revealed by the Lagrangian form: information preference, possible optimization methods (likelihood based or likelihood free), closure (most generic form) of model family, etc. In addition, Section 4 proceeds to demonstrate two applications where the Lagrangian perspective reveals problems/features that are difficult to identify from traditional perspectives. \\n\\n\\n1. Correct optimization of the Lagrangian dual requires maximization over the Lagrangian parameters. However, all existing methods use fixed (arbitrarily chosen) Lagrangian parameters. We show failure cases where this does not correctly optimize the primal problem. For example, when the primal objective is information maximization under constraints of distributional consistency, optimization with fixed Lagrangian parameters can maximize mutual information indefinitely without ever encouraging distributional consistency. 
As a result, data fit (distributional consistency) may even get worse during training (for example, resulting in lower test log likelihood) as mutual information is maximized. We show that this also happens in practice. \\n\\n2. The Lagrangian perspective allows us to explicitly weight (\\u201cprice\\u201d) different (conflicting) terms in the objective. For example, suppose the input x has more dimensions than the feature space z. Then for the same per-dimension loss, the input space is weighted more than the latent space (because it has more dimensions). We show in the paper that increasing the weight on matching marginals on z can solve the problem and leads to better performance. In general, we can write out the desired preference in Lagrangian form, and then convert it into a familiar model and optimization method (in our example, this corresponds to InfoVAE with a specific hyper-parameter choice.)\"}"
]
} |
BkJ3ibb0- | Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models | [
"Pouya Samangouei",
"Maya Kabkab",
"Rama Chellappa"
] | In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. | [
"generative models",
"classifiers",
"adversarial attacks",
"recent years",
"neural network approaches",
"machine learning tasks",
"classification",
"vulnerable",
"adversarial perturbations",
"small perturbations"
] | Accept (Poster) | https://openreview.net/pdf?id=BkJ3ibb0- | https://openreview.net/forum?id=BkJ3ibb0- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SyS0aJ8Xz",
"S1c64RJzz",
"rJOVWxjez",
"Bkw8Ck8QG",
"By-CxBKgz",
"r1D5pJ87f",
"Hy120kU7f",
"BympCwwgf",
"S1bEhkU7G",
"B1wgPVOzG",
"r1MMCyImM",
"SJwPXJaHG",
"Bkbgpk87z",
"ryW5rcl-f",
"SkdMUQaAZ",
"H17TwR4rM",
"SkbvmBamf"
],
"note_type": [
"official_comment",
"comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"decision",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment"
],
"note_created": [
1514696140841,
1513247937694,
1511878963676,
1514696270787,
1511768264561,
1514696078875,
1514696358583,
1511648955344,
1514695721303,
1513797359488,
1514696201925,
1517249374984,
1514695913416,
1512248713481,
1509926415577,
1516722107113,
1515176793268
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper714/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper714/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper714/Authors"
],
[
"ICLR.cc/2018/Conference/Paper714/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper714/Authors"
],
[
"ICLR.cc/2018/Conference/Paper714/Authors"
],
[
"ICLR.cc/2018/Conference/Paper714/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper714/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper714/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper714/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper714/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper714/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Answer to anonymous commenter\", \"comment\": \"We thank the anonymous commenter. The paper referred to by the commenter deals with a synthetic spheres dataset which we believe is not applicable to the use of GANs. Our focus is on real-life datasets collected from real examples. Furthermore, due to the recentness of the paper, we have not had the time to analyze it in detail.\"}",
"{\"title\": \"Please use some meaningful attacks!!\", \"comment\": \"https://arxiv.org/pdf/1711.08478.pdf\\nInstead of doing gradient descent, it might just help to attack directly.\\nSee how easily APE-GAN cracks!!!!\"}",
"{\"title\": \"A novel idea with room for future work.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The authors describe a new defense mechanism against adversarial attacks on classifiers (e.g., FGSM). They propose utilizing Generative Adversarial Networks (GAN), which are usually used for training generative models for an unknown distribution, but have a natural adversarial interpretation. In particular, a GAN consists of a generator NN G which maps a random vector z to an example x, and a discriminator NN D which seeks to discriminate between examples produced by G and examples drawn from the true distribution. The GAN is trained to minimize the max min loss of D on this discrimination task, thereby producing a G (in the limit) whose outputs are indistinguishable from the true distribution by the best discriminator.\\n\\nUtilizing a trained GAN, the authors propose the following defense at inference time. Given a sample x (which has been adversarially perturbed), first project x onto the range of G by solving the minimization problem z* = argmin_z ||G(z) - x||_2. This is done by SGD. Then apply any classifier trained on the true distribution on the resulting x* = G(z*). \\n\\nIn the case of existing black-box attacks, the authors argue (convincingly) that the method is both flexible and empirically effective. In particular, the defense can be applied in conjunction with any classifier (including already hardened classifiers), and does not assume any specific attack model. Nevertheless, it appears to be effective against FGSM attacks, and competitive with adversarial training specifically to defend against FGSM. \\n\\nThe authors provide less-convincing evidence that the defense is effective against white-box attacks. In particular, the method is shown to be robust against FGSM, RAND+FGSM, and CW white-box attacks. However, it is not clear to me that the method is invulnerable to novel white-box attacks. In particular, it seems that the attacker can design an x which projects onto some desired x* (using some other method entirely), which then fools the classifier downstream.\\n\\nNevertheless, the method is shown to be an effective tool for hardening any classifier against existing black-box attacks \\n(which is arguably of great practical value). It is novel and should generate further research with respect to understanding its vulnerabilities more completely.\", \"minor_comments\": \"The sentence starting \\u201cUnless otherwise specified\\u2026\\u201d at the top of page 7 is confusing given the actual contents of Tables 1 and 2, which are clarified only by looking at Table 5 in the appendix. This should be fixed.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Answer to anonymous commenter\", \"comment\": \"We thank the anonymous commenter.\\nWe have modified the title of Appendix B to reflect our claim that attacks based on gradient-descent are difficult to perform. \\nRegarding the modified CW optimization attack, our understanding is that the commenter is suggesting the following:\\n\\nMinimize (over x*, z*) CW loss(x, x*, G(z*)) + 0.1 ||G(z*) - x*||\\n\\nFirst of all, this problem is significantly more difficult to solve than the original CW formulation due to the dependence on x, x*, and G(z*). \\nSecond, this formulation does not guarantee that when x* is input to the system, z* will be the output of the GD block, and an example \\u201cclose\\u201d to an adversarial example is not necessarily adversarial itself. \\nLastly, the random initialization of z in the GD block serves to add robustness and change the output every time.\\n\\nAll in all, we are extremely interested in further investigating new attack strategies as Defense-GAN was shown to be robust to existing attack models.\"}",
"{\"title\": \"Interesting but hard to conclude decisively from the current experiments\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"This paper presents Defense-GAN: a GAN that is used at test time to map the input to a generated image (G(z)) close (in MSE(G(z), x)) to the input image (x), by applying several steps of gradient descent on this MSE. The GAN is a WGAN trained on the train set (only to keep the generator). The goal of the whole approach is to be robust to adversarial examples, without having to change the (downstream task) classifier, only swapping in the G(z) for the x.\", \"The paper is easy to follow.\", \"It seems (but I am not an expert in adversarial examples) to cite the relevant literature (that I know of) and compare to reasonably established attacks and defenses.\", \"Simple/directly applicable approach that seems to work experimentally, but\", \"A missing baseline is to take the nearest neighbour of the (perturbed) x from the training set.\", \"Only MNIST-sized images, and MNIST-like (60k train set, 10 labels) datasets: MNIST and F-MNIST.\", \"Between 0.043sec and 0.825 sec to reconstruct an MNIST-sized image.\", \"? MagNet results were very often worse than no defense in Table 4, could you comment on that?\", \"In white-box attacks, it seems to me like L steps of gradient descent on MSE(G(z), x) should be directly extended to L steps of (at least) FGSM-based attacks, at least as a control.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
"{\"title\": \"Answer to AnonReviewer1\", \"comment\": \"We thank the reviewer for the insightful comments and discussions.\\n\\nA) Defense-GAN vs. MagNet vs. other generative approaches:\\nWe believe that the MagNet auto-encoder suffers lower accuracy compared to Defense-GAN due to the fact that the \\u201creconstruction\\u201d step in MagNet is a feed-forward network as opposed to an optimization-based projection as in Defense-GAN. Overall, the combination of MagNet and the classifier can be seen as one deeper classification network, and has a wider attack surface than Defense-GAN.\\nAs suggested by the reviewer, if the MagNet decoder (or another generative approach) was treated as a generative model, and the same optimization-based projection approach was followed, the model with more representative power would perform better. From our experience, GANs tend to have more representative power, but this is still an active area of research and discussion. We believe that, since GANs are specifically designed to optimize for generative tasks, using a GAN in conjunction with our proposed optimization-based projection would outperform an encoder with the same projection method. However, this would be an interesting future research direction. In addition, we were able to show some theoretical guarantees regarding the use and representative power of GANs in equation (7).\\n\\nB) Black- and white-box attacks:\\nIn our work and previous literature, it is assumed that in black-box scenarios the attacker does not know the classifier network nor the defense mechanism (and any parameters thereof). The only information the attacker can use is the classifier output. \\nIn white-box scenarios, the attacker knows the entire system including the classifier network, defense mechanisms, and all parameters (which in our case, include GAN parameters). By \\u201cdefense network\\u201d in Experiments bullet 2, we mean the generator network. \\n\\nC) Computational complexity:\\nDefense-GAN adds inference-time complexity to the classifier. As discussed in Appendix G (Appendix F in the original version of the paper), this complexity depends on L, the number of GD steps used to reconstruct images, and (to a lesser extent) R, the number of random restarts. At training time, Defense-GAN requires training a GAN, but no retraining of the classifier is necessary.\\nIn comparison, MagNet also adds inference-time complexity. However, the time overhead is much smaller than Defense-GAN as MagNet is simply a feedforward network. At training time, the overhead is similar to Defense-GAN (training the encoder, no retraining of the classifier).\\nAdversarial training adds no inference-time complexity. However, training time can be significantly larger than for other methods since re-training the classifier is required (preceded by generating the adversarial examples to augment the training dataset).\"}"
"{\"title\": \"Answer to anonymous commenter\", \"comment\": \"We thank the anonymous commenter.\\nWe have added some additional results on the CelebA dataset in Appendix F.\\nRegarding the suggested new attack methods, we note that:\\n1- We believe that this same exact point was raised by AnonReviewer3, and we kindly refer the commenter to part A of our reply to AnonReviewer3. \\n2- It is not clear to us how to \\u201coutput a wrong set of Z_L\\u201d and how to find an input x that will meet this criterion. \\n(If by \\u201coutput a wrong set of Z_L\\u201d the reviewer means to inject adversarial noise directly on the set of Z_L, then the attacker has gained access and infiltrated an intermediate step of the system and might as well directly modify the classifier output. This type of attacks was never considered in this literature).\\n3- We believe that the commenter mistakenly assumes the seed to be an external input accessible to and modifiable by the attacker. Even though, in Figure 1, the seed is depicted as input to the system, it is never assumed that the attacker can modify the random seed.\"}",
"{\"title\": \"review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"This paper presents a method to cope with adversarial examples in classification tasks, leveraging a generative model of the inputs. Given an accurate generative model of the input, this approach first projects the input onto the manifold learned by the generative model (the idea being that inputs on this manifold reflect the non-adversarial input distribution). This projected input is then used to produce the classification probabilities. The authors test their method on various adversarially constructed inputs (with varying degrees of noise).\", \"Questions/Comments:\", \"I am interested in unpacking the improvement of Defense-GAN over the MagNet auto-encoder based method. Is the MagNet auto-encoder suffering lower accuracy because the projection of an adversarial image is based on an encoding function that is learned only on true data? If the decoder from the MagNet approach were treated purely as a generative model, and the same optimization-based projection approach (proposed in this work) was followed, would the results be comparable?\", \"Is there anything special about the GAN approach, versus other generative approaches?\", \"In the black-box vs. white-box scenarios, can the attacker know the GAN parameters? Is that what is meant by the \\\"defense network\\\" (in experiments bullet 2)?\", \"How computationally expensive is this approach compared to MagNet or other adversarial approaches?\"], \"quality\": \"The method appears to be technically correct.\", \"clarity\": \"This paper is clearly written; both method and experiments are presented well.\", \"originality\": \"I am not familiar enough with adversarial learning to assess the novelty of this approach.\", \"significance\": \"I believe the main contribution of this method is the optimization-based approach to project onto a generative model's manifold. I think this kernel has the potential to be explored further (e.g. computational speed-up, projection metrics).\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
"{\"title\": \"Answer to AnonReviewer3\", \"comment\": \"We thank the reviewer for the constructive review and comments.\\n\\nA) Regarding the effectiveness against white-box attacks:\\nAs the reviewer has pointed out, we have shown the robustness of our method to existing white-box attacks such as FGSM, RAND+FGSM, and CW. Indeed, a good attack strategy could be to design an x which projects onto a desired x* = G(z*). However, this requires solving for:\\n\\nFind x s.t. the output of the gradient-descent block is z*. \\n\\nPer our understanding, the reviewer\\u2019s suggestion is the following:\\nFind a desired x* in the range of the generator which fools the classifier.\\nFind an x which projects onto x*, i.e., such that the output of the GD block is z*, where G(z*) = x*. \\nStep 1 is a more challenging version of existing attacks, due to the constraint that the adversarial example should lie in the range of the generator. While step 1 could potentially be solvable, the real difficulty lies in step 2. In fact, it is not obvious how to find such an x given x*. What comes to mind is attempting to solve step 2 using an optimization framework, e.g.:\\nMinimize (over x, z*) 1\\nSubject to G(z*) = x*\\n z* is the output of the GD block after L steps.\\n\\nWe have shown in Appendix B that solving this problem using GD gets more and more prohibitive as L increases.\\nFurthermore, since we use random initializations of z, if the random seed is not accessible by the attacker, there is no guarantee that a fixed x will result in the same fixed z every time after L steps of GD on the MSE. \\nDue to these factors, we believe that our method is robust to a wide range of gradient-based white-box attacks. However, we are very much interested in further research of novel attack methods.\\n\\nB) We have fixed the minor comments by specifically mentioning the classifier and substitute models for every Table and Figure throughout the paper.\"}",
"{\"title\": \"What would this defense do on the concentric spheres dataset?\", \"comment\": \"This paper shows that models trained on a synthetic dataset are vulnerable to small adversarial perturbations which lie on the data manifold. Thus at least for this dataset it seems like a perfect generator would not perturb the adversarial example at all. Can the authors comment what their proposed defense would do to fix these adversarial examples?\", \"https\": \"//openreview.net/forum?id=SyUkxxZ0b\"}",
"{\"title\": \"Answer to anonymous commenter\", \"comment\": \"We thank the anonymous commenter. Due to the recentness of the paper referred to by the commenter, we have not had the time to analyze it in detail. However, as noted in the paper (page 3), the attacks are actually generated using gradient descent as is the case in all attacks used in our paper.\\nThe mechanism considered in APE-GAN and that considered in our paper are very different. While MagNet and APE-GAN use a feedforward architecture for their \\u201creconstruction\\u201d step, Defense-GAN employs an optimization based projection onto the range of the generator, which holds a good representation of the true data.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper studied defenses against adversarial examples by training a GAN and, at inference time, finding the GAN-generated sample that is nearest to the (adversarial) input example. Next, it classifies the generated example rather than the input example. This defense is interesting and novel. The CelebA experiments the authors added in their revision suggest that the defense can be effective on high-resolution RGB images.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Answer to AnonReviewer2\", \"comment\": \"We appreciate the constructive criticism and detailed analysis of our paper.\\n\\nA) Nearest-neighbor baseline:\\nTaking the nearest neighbor of the potentially perturbed x from the training set can be seen as a simple way of removing adversarial noise, and is tantamount to a 1-nearest-neighbor (1-NN) classifier. On MNIST, a 1-NN classifier achieves an 88.6% accuracy on FGSM adversarial examples with epsilon = 0.3, found using the B substitute network. Defense-GAN-Rec and Defense-GAN-Orig average about 92.5% across the four different classifier networks when the substitute model is fixed to B. Similar trends are found for other substitute models. There is an improvement of about 4% by using Defense-GAN. It is also worth noting that in the case of MNIST, a 1-NN classifier works reasonably well (achieving around 95% on clean images). This is not the case for more complex datasets: for example, if the problem at hand is face attributes classification, nearest neighbors may not necessarily belong to the same class, and therefore NN classifiers will perform poorly.\\n\\nB) Only MNIST-sized images:\\nBased on the reviewer\\u2019s suggestion, we have added additional white-box results on the Large-scale CelebFaces Attributes (CelebA) dataset in the appendix of the paper. The results show that Defense-GAN can still be used with more complex datasets including larger and RGB images. For further details, please refer to Appendix F in the revised version.\\n\\nC) Time to reconstruct images:\\nWe agree with the reviewer that Defense-GAN introduces additional inference time by reconstructing images using GD on the MSE loss. However, we show its effectiveness against various attacks, especially in comparison to other simpler defenses. Furthermore, we have not optimized the running time of our algorithm, as it was not the focus of this work. This is a worthwhile effort to pursue in the future by trying to better utilize computational resources. \\nPer the reviewer\\u2019s comment, we have timed some reconstruction steps for CelebA images (which are 15.6 times larger than MNIST/F-MNIST). For R = 2, we have:\\nL = 10, 0.132 sec\\nL = 25, 0.106 sec\\nL = 50, 0.210 sec\\nL = 100, 0.413 sec\\nL = 200, 0.824 sec\\nThe reconstruction time for CelebA did not scale with the size of the image.\\n\\nD) MagNet results are sometimes worse than no defense in Table 4:\\nEven though it seems counter-intuitive that a defense mechanism can sometimes cause a decrease in performance, this stems from the fact that white-box attackers also know the exact defense mechanism used. In the case of MagNet, the defense mechanism is another feedforward network which, in conjunction with the original classifier, can be viewed as a new deeper feedforward network. Attacks on this bigger network can sometimes be more successful than attacks on the original network. Furthermore, MagNet was not designed to be robust against white-box attacks.\\n\\nE) Using L steps of white-box FGSM:\\nPer our understanding, the reviewer is suggesting using iterative FGSM. We do agree that for a fair comparison, L steps of iterative FGSM could be used. However, we note that CW is an iterative optimization-based attack, and is more powerful than iterative FGSM. Since we have shown robustness against CW attacks in Table 4, we believe iterative FGSM results will be similar.\"}"
"{\"title\": \"CW is an optimization based attack\", \"comment\": \"In your appendix you claim the combined model is hard to attack, but I suspect that might not be the case.\\n\\n1. CW is an optimization based attack. \\n\\n2. If you just set up the CW optimization attack, and find some local minima for z* that corresponds to an adversarial attack -- I suspect it might be pretty close to the z* you converge on after a few steps of GD. Perhaps worth a shot trying to just combine the two models and add ||G(z)-x|| as another term in the optimization objective. I suspect CW would work pretty well then. \\n\\nminimize CW loss function + 0.1*||z*-x|| \\n\\nsubject y=f(x)\\n z*=G(z) or something like this.\"}",
"{\"title\": \"Testing on Datasets Other than MNIST/Adversarial Examples of Generator\", \"comment\": \"Have you tested your method on other datasets? I wonder if it works with datasets such as CIFAR.\\n\\nMoreover, it's not clear whether this method can defend against existing attacks, without introducing new vulnerabilities. Here are some possible new attack methods:\\n\\n1- The generator can certainly output examples that are adversarial for the classifier. Hence, the attacker only needs to find out such examples and perturb the input image to make it similar to them.\\n\\n2- The attacker can target the minimization block, which uses \\\"L steps of Gradient Descent.\\\" By forcing it to output a wrong set of Z_L, the rest of the algorithm (combination of generator/classifier) becomes ineffective, i.e., the minimization block can be the bottleneck. \\n\\n3- The algorithm takes as input a seed, along with the image. Since for a given seed, the random number generator is deterministic, the attacker can test different seeds and use the one for which the algorithm fails. This attack may work even without perturbing the image.\"}",
"{\"title\": \"changed my 5 into 6\", \"comment\": \"B) C) Thanks for the additional experiments, I think they make the paper stronger. In particular they validate that scaling is proportional to L but not (linear in) to image size, and that the method works in RGB.\\nD) OK.\\nA) E) I still think that these additional experiments would help, but I am now marginally convinced that the authors expectations are correct.\"}",
"{\"title\": \"Revision\", \"comment\": \"We have posted a revision with an additional Appendix (F) for new white-box experiments on the CelebA dataset, as well as minor changes to the text.\"}"
]
} |
Hki-ZlbA- | Ground-Truth Adversarial Examples | [
"Nicholas Carlini",
"Guy Katz",
"Clark Barrett",
"David L. Dill"
] | The ability to deploy neural networks in real-world, safety-critical systems is severely limited by the presence of adversarial examples: slightly perturbed inputs that are misclassified by the network. In recent years, several techniques have been proposed for training networks that are robust to such examples; and each time stronger attacks have been devised, demonstrating the shortcomings of existing defenses. This highlights a key difficulty in designing an effective defense: the inability to assess a network's robustness against future attacks. We propose to address this difficulty through formal verification techniques. We construct ground truths: adversarial examples with a provably-minimal distance from a given input point. We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced by those attacks to the ground truths; and also of defense techniques, by computing the distance to the ground truths before and after the defense is applied, and measuring the improvement. We use this technique to assess recently suggested attack and defense techniques. | [
"adversarial examples",
"neural networks",
"formal verification",
"ground truths"
] | Reject | https://openreview.net/pdf?id=Hki-ZlbA- | https://openreview.net/forum?id=Hki-ZlbA- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S1Q_cbqxf",
"HkcuHyarM",
"H1TnZzcgz",
"Sy5sYncgM"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1511819883468,
1517249906207,
1511821749320,
1511864738263
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper554/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper554/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper554/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Theoretically interesting but practically maybe limited\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Summary: The paper proposes a method to compute adversarial examples with minimum distance to the original inputs, and to use the method to do two things: Show how well heuristic methods do in finding \\\"optimal/minimal\\\" adversarial examples (how close they come to the minimal change that flips the label) and to assess how a method that is designed to make the model more robust to adversarial examples actually works.\", \"pros\": \"I like the idea and the proposed applications. It is certainly highly relevant, both in terms of assessing models for critical use cases as well as a tool to better understand the phenomenon.\\n\\nSome of the suggested insights in the analysis of defense techniques are interesting.\", \"cons\": \"There is not much technical novelty. The method boils down to applying Reluplex (Katz et al. 2017b) in a binary search (although I acknowledge the extension to L1 as distance metric).\\n\\nThe practical application of the method is very limited since the search is very slow and is only feasible at all for relatively small models. State-of-the-art practical models that achieve accuracy rates that make them interesting for deployment in potentially safety critical applications are out of reach for this analysis. The network analysed here does not reach the state-of-the-art on MNIST from almost two decades ago. The analysis also has to be done for each sample. The long runtime does not permit analysing large amounts of input samples, which makes the analysis in terms of the increase in robustness rather weak. The statement can only be made for the very limited set of tested samples.\\n\\nIt is also unclear whether it is possible to include distance metrics that capture more sophisticated attacks that fool networks even under various transformations of the input.\\nThe paper does not consider the more recent and highly relevant Moosavi-Dezfooli et al. \\u201cUniversal Adversarial Perturbations\\u201d CVPR 2017.\\n\\nThe distance metrics that are considered are only L_inf and L1, whereas it would be interesting to see more relevant \\u201cperceptual losses\\u201d such as those used in style transfer and domain adaptation with GANs.\", \"minor_details\": [\"I would consider calling them \\u201cminimal adversarial samples\\u201d instead of \\u201cground-truth\\u201d.\", \"I don\\u2019t know if the notation in the Equation in the paragraph describing Carlini & Wagner comes from the original paper, but the inner max would be easier to read as \\\\max_{i \\\\neq t} \\\\{Z(x\\u2019)_i \\\\}\", \"Page 3 \\u201cNeural network verification\\u201d: I don\\u2019t agree with the statement that neural networks commonly are trained on \\u201ca small set of inputs\\u201d.\", \"Algorithm 1 is essentially only a description of binary search, which should not be necessary.\", \"What is the timeout for the computation, mentioned in Sec 4?\", \"Page 7, second paragraph: I wouldn\\u2019t say the observation is in line with Carlini & Wagner, because they take a random step, not necessarily one in the direction of the optimum? That\\u2019s also the conclusion two paragraphs below, no?\", \"I don\\u2019t fully agree with the conclusion that the defense of Madry does not overfit to the specific method of creating adversarial examples. Those were not created with the CW attack, but are related because CW was used to initialize the search.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper describes a method to generate provably 'optimal' adversarial examples, leveraging the so-called 'Reluplex' technique, which can evaluate properties of piece-wise linear representations.\\nReviewers agreed that incorporating optimality certificates into adversarial examples is a promising direction to follow, but were also concerned about the lack of empirical justification the current paper provides and missed discussion about the relevance of choosing Lp distances. They all recommended pushing experiments to more challenging datasets before the paper can be accepted, and the AC shares the same advice.\"}",
"{\"title\": \"Interesting but not too convincing\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The authors propose to employ provably minimal-distance examples as a tool to evaluate the robustness of a trained network. This is demonstrated on a small-scale network using the MNIST data set.\\n\\nFirst of all, I find it striking that a trained network with 97% accuracy (as claimed by the authors) seems extremely brittle -- considering the fact that all the adversarial examples in Figure 1 are hardly borderline examples at all, at least to my eyes. This does reinforce the (well-known?) weakness of neural networks in general. I therefore find the authors' statement on page 3 disturbing: \\\"... they are trained over a small set of inputs, and can then perform well, in general, on previously-unseen inputs\\\" -- which seems false (with high probability over all possible worlds).\\n\\nSecondly, the term \\\"ground truth\\\" example seems very misleading to me. Perhaps \\\"closest misclassified examples\\\"?\\n\\nFinally, while the idea of \\\"closest misclassified examples\\\" seems interesting, I am not convinced that they are the right way to go when it comes to both building and evaluating robustness. All such examples shown in the paper are indeed within-class examples that are misclassified. But we could equally consider another extreme, where the trained network is \\\"over-regularized\\\" in the sense that the closest misclassified examples are indeed from another class, and therefore \\\"correctly\\\" misclassified. Adding these as adversarial examples could seriously degrade the accuracy.\\n\\nAlso, for building robustness, one could argue that adding misclassified examples that are \\\"furthest\\\" (i.e. closest to the true decision boundary) is a much more efficient training approach, since a few of these can possibly subsume a large number of close examples.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Novel idea, but more experiments needed to support findings\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper describes a method for generating so called ground truth adversarial examples: adversaries that have minimal (L1 or L_inf) distance to the training example used to generate them. The technique uses the recently developed reluplex, which can be used to verify certian properties of deep neural networks that use ReLU activations. The authors show how the L1 distance can be formulated using a ReLU and therefore extend the reluplex also work with L1 distances. The experiments on MNIST suggest that the C&W attack produces close to optimal adversarial examples, although it is not clear if these findings would transfer to larger more complex networks. The evaluation also suggests that training with iterative adversarial examples does not overfit and does indeed harden the network to attacks in many cases.\\n\\nIn general, this is a nice idea, but it seems like the inherent computational cost will limit the applicability of this approach to small networks and datasets for the time being. Incidentally, it would have been useful if the authors provided indicative information on the computational cost (e.g. in the form of time on a standard GPU) for generating these ground truths and carrying out experiments.\\n\\nThe experiments are quite small scale, which I expect is due to the computational cost of generating the adversarial examples. It is difficult to say how far the findings can be generalized from MNIST to more realistic situations. Tests on another dataset would have been welcomed.\\n\\nAlso, while interesting, are adversarial examples that have minimal L_p distance from training examples really that useful in practice? Of course, it's nice that we can find these, but it could be argued that L_p norms are not a good way of judging the similarity of an adversarial example to a true example. 
I think it would be more useful to investigate attacks that are perceptually insignificant, or attacks that operate in the physical world, as these are more likely to be a concern for real world systems. \\n\\nIn summary, while I think the paper is interesting, I suspect that the applicability of this technique is possibly limited at present, and I'm unsure how much we can really read into the findings of the paper when the experiments are based on MNIST alone.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
SyqAPeWAZ | CNNs as Inverse Problem Solvers and Double Network Superresolution | [
"Cem TARHAN",
"Gözde BOZDAĞI AKAR"
] | In recent years Convolutional Neural Networks (CNN) have been used extensively for Superresolution (SR). In this paper, we use inverse problem and sparse representation solutions to form a mathematical basis for CNN operations. We show how a single neuron is able to provide the optimum solution for inverse problem, given a low resolution image dictionary as an operator. Introducing a new concept called Representation Dictionary Duality, we show that CNN elements (filters) are trained to be representation vectors and then, during reconstruction, used as dictionaries. In the light of theoretical work, we propose a new algorithm which uses two networks with different structures that are separately trained with low and high coherency image patches and show that it performs faster compared to the state-of-the-art algorithms while not sacrificing from performance. | [
"superresolution",
"convolutional neural network",
"sparse representation",
"inverse problem"
] | Reject | https://openreview.net/pdf?id=SyqAPeWAZ | https://openreview.net/forum?id=SyqAPeWAZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SysQi-Fff",
"rkDK2NwgG",
"HJincWtMf",
"Hk7WiZFGG",
"rke8ggtxG",
"rkHX_Bjlf",
"HyouBy6HG"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"decision"
],
"note_created": [
1513851682793,
1511636094877,
1513851571527,
1513851642797,
1511747655773,
1511901212961,
1517249907230
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper603/Authors"
],
[
"ICLR.cc/2018/Conference/Paper603/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper603/Authors"
],
[
"ICLR.cc/2018/Conference/Paper603/Authors"
],
[
"ICLR.cc/2018/Conference/Paper603/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper603/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Answers and Corrections\", \"comment\": \"Thank you very much for the detailed review of the manuscript.\\n\\nWe have revisited the manuscript to reflect all the reviewers\\u2019 comments. The proposed Representation Dictionary Duality concept is explained in detail and the notation inconsistencies throughout the text are corrected. In addition the current literature is updated and the differences between the proposed understanding and the literature is made clear.\\n\\nIntroduction/literature review-> We have referred to the Generative Networks in the revised manuscript.\\n\\nNotation/readability -> We fixed the notations together with few typos. We added a figure detailing the referenced CNN algorithm SRCNN (Dong et. al.). Also a figure for CNN training procedure is added.\\n\\nSection 3-> In Daubechies et. al. from our references, the nature of matrix K is defined as a bounded operator between two hilbert spaces. Boundedness is defined according to the formula: for any given vector f from a Hilbert space, if the inequality ||Kf|| \\\\leq C||f|| is satisfied, where C is a constant, then the operator is bounded. The iterative shrinkage algorithm we have referenced from Daubechies et. al. have addressed this issue directly, for cases when the null space of K for a vector f is not zero and its inversion is ill-posed or even ill-conditioned. We have shown that a neuron filter solves the same equation during training and since a library D also satisfies boundedness assumption we know that it will reach to the optimum solution. We now made this clearer in the text.\\n\\nRepresentation-dictionary duality concept -> We have moved the appendix A into the text. We assert that, CNN operates as a layered DLB during training and during testing. We have shown that the mechanism by which the CNN learns is through solving an inverse problem. The inverse problem constitutes a bounded operator, matrix D, which is composed of LR patches. 
Even though the matrix D is different in structure from conventional inverse problem operators, it satisfies the constraints to be used as an operator. The cost function that is minimized by CNN training yields a representation vector as the neuron filter, for which the dictionary is matrix D and the target is HR image patch. Neuron parameters (filters) being the representation vectors instead of an output from a network is a new understanding in the literature. Resulting representation vectors (filters) from a layer of neuron filters turn into a dictionary upon which the reconstruction of HR image is carried out during testing (scoring) phase. This is the core understanding of RDD concept. Using RDD we are able to demystify how a CNN is able to learn and apply reconstruction of HR images for SR problem.\\n\\nFinal proposed algorithm -> We have used strength, coherence and angle information to divide data into 38 networks initially. We have discovered that networks that are trained with low strength data (which are almost flat patches) won\\u2019t converge to a meaningful state. We couldn\\u2019t handle the separation of angle information while aggregating all the results. Also this was not a feasible network structure to be implemented for a real time, possible video application. So we reduced to using two networks with low and high coherence. The reviewer is absolutely right in asking why 4 or 8 networks have not been used. This was simply due to lack of time. We will strongly consider doing an analysis on this in near future.\\n\\nResults -> We ran out of space so we had to get rid of all redundant information. We have now added a page of comparison in the appendices. The proposed solution is faster because splitting the data enabled us to train lighter networks, even though one of the networks is as long as the original reference paper (20 layers). We have touched on the subject briefly on chapter 3. 
We have now added more discussion as to why the proposed solution is faster. And the sole reason we are trying to speed up the algorithm is because we have real time video superresolution application in our future plans. We have not mentioned this in the text plainly because we have not done anything to address multiframe SR yet.\"}",
"{\"title\": \"Interesting paper bringing up different domains. It could be written more reader friendly.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes an understanding of the relation between inverse problems, CNNs and sparse representations. Using the ground work for each proposes a new competitive super resolution technique using CNNs. Overall I liked authors' endeavors bringing together different fields of research addressing similar issues. However, I have significant concerns regarding how the paper is written and final section of the proposed algorithm/experiments etc.\\n\\nIntroduction/literature review-> I think paper significantly lacks literature review and locating itself where the proposed approach at the end stands in the given recent SR literature (particularly deep learning based methods) --similarities to other techniques, differences from other techniques etc. There have been several different ways of using CNNs for super resolution, how does this paper\\u2019s architecture differs from those? Recent GAN based methods are very promising and how does the proposed technique compares to them? \\n\\nNotation/readability -> I do respect the author\\u2019s mentioning different research field\\u2019s notations and understand the complication of building a single framework. However I still think that notations could be a lot more simplified\\u2014to make them look in the same page. It is very confusing for readers even if you know the mentioned sub-fields and their notations. Figure 1 was very useful to alleviate this problem. More visuals like figure 1 could be used for this problem. For example different network architecture figures (training/testing for CNNs) could be used to explain in a compact way instead of plain text. \\n\\nSection 3-> I liked the way authors try to use the more generalized Daubechies et. al. However I do not understand lots of pieces still. 
For example using the low resolution image patches as a basis\\u2014more below. In the original solution Daubechies et. al. maps data to the orthonormal Hilbert space, but authors map to the D (formed by LR patches). How does this affect the provability? \\n\\nRepresentation-dictionary duality concept -> I think this is a very fundamental piece for the paper and don\\u2019t understand why it is in the appendix. Using images as D in training and using filters as D in scoring/testing, is very unintuitive to me. Even after reading second time. This requires better discussion and examples. Comparison/discussion to other CNN/deep learning usage for super-resolution methods is required exactly right here.\\n\\nFinal proposed algorithm -> Splitting the data for high and low coherence makes sense however coherence is a continuous variable. Why to keep the quantization at binary? Why not 4,8 or more? Could this be modeled in the network?\\n\\nResults -> I understand the numerical results and comparisons to the Kim et. Al\\u2014and don\\u2019t mind at all if they are on-par or slightly better or worse. However in super-resolution paper I do expect a lot more visual comparisons. There has been only Figure 5. Authors could use appendix for this purpose. Also I would love to understand why the proposed solution is significantly faster. This is particularly critical in super-resolution as to apply the algorithms to videos and reconstruction time is vital.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Answers and Corrections\", \"comment\": \"Thank you very much for the detailed review of the manuscript.\\nWe have revisited the manuscript to reflect all the reviewers\\u2019 comments. The proposed Representation Dictionary Duality (RDD) concept is explained in detail and the notation inconsistencies throughout the text are corrected. In addition the current literature is updated and the differences between the proposed understanding and the literature is made clear.\\n1) As the reviewer suggests Generative Network (GN) based algorithms do not depend (solely) on PSNR metric. Due to the lack of MSE control, the output is not loyal to the input image. Since textures are created from input images, seemingly randomly, this might cause problems in video streams. Since it is trivial to add Perceptual Loss (PL) minimization to the training procedure, in the future we plan to add PL and conduct experiments.\\n2) We have modified the text to be more comprehensible. We have used the variables g, f and D throughout the text, we have put subscript L for learning (training), and subscript R for reconstruction (testing) phase.\\n3) Similar to 2) we changed section 2.1 to be more comprehensible. We have referred all variables as f.\\n4) The reviewer is correct, not using parenthesis was a typo, thanks for pointing out. It is corrected in the revised text.\\n5) What we meant by \\u201cinstead of approaching the problem as inverse problem\\u201d was to draw attention to the difference of solution approaches of inverse problem solutions and DL based solutions. To avoid misunderstandings we have named the subsections as \\u201cAnalytic Approaches\\u201d and \\u201cData Driven Approaches\\u201d. We described dictionary based learning in revised manuscript. Also we added explanations on how Yang et. al. 
have used LR and HR library for reconstruction.\\n6) The reviewer is correct, this was a typo that we corrected in the revised text.\\n7) We have discussed the effect of size mismatch in the training procedure. Residual learning which we have borrowed from Kim et. al. automatically zero pads the input boundaries and even the outer pixels turn out to be unspoiled. This is added into the text.\\n8) We added a compact image detailing the training of a neural network in appendix.\\n9,10) In Daubechies et. al. from our references, the nature of matrix K is defined as a bounded operator between two hilbert spaces. Boundedness is defined according to the formula: for any given vector f from a Hilbert space, if the inequality ||Kf|| \\\\leq C||f|| is satisfied, where C is a constant, then the operator is bounded. Library D does not violate this assumption, we have added more explanation into the text.\\n11)The RDD concept is tool for explaining how we have incorporated inverse problem and sparse representation mathematics into the CNN training/testing procedure. We have shown that the method, by which the CNN learns, is through solving an inverse problem. The inverse problem constitutes a bounded operator, matrix D, which is composed of LR patches. Even though the matrix D is different in structure from conventional inverse problem operators, it satisfies the constraints to be used as an operator. The cost function that is minimized by CNN training yields a representation vector as the neuron filter, for which the dictionary is matrix D and the target is HR image patch. Neuron parameters (filters) being the representation vectors instead of an output from a network is a new understanding in the literature. Resulting representation vectors (filters) from a layer of neuron filters turn into a dictionary upon which the reconstruction of HR image is carried out during testing phase. This is the core understanding of RDD concept. 
We moved the explanations given in appendix A into the text.\\n12) For training same 291 images from Kim et. al. have been used in similar fashion, with different rotations and scales. Then we have separated images into two sets by using coherence values from LR patches. We added this information into the text. We will strongly consider jointly optimizing two networks in near future since we already had a goal of finding a better aggregation method.\\n13) For VDSR algorithm Barbara image had 26.2078 dB PSNR and 0.8039 SSIM values whereas our DNSR achieved 26.6600 dB PSNR and 0.8091 SSIM. Cross entropy loss had a minor effect in this improvement.\\n14) Filters might not appear predominant due to the residual learning of the network or because of instanced filters\\u2019 size (3x3).\\n15) We have used a foundational paper for mathematical background (Daubechies et. al. 2004) and we have used a state of the art paper covering all previous work including Gregor et. al.\\u2019s work (Papyan et. al. 2016,2017). We commented on Gregor et. al.\\u2019s paper inside the text and highlight the differences from our approach in revised text. Mainly we show that trained neuron filters become the representation vectors.\"}",
"{\"title\": \"Answers and Corrections\", \"comment\": \"Thank you very much for the detailed review of the manuscript.\\n\\nWe have revisited the manuscript to reflect all the reviewers\\u2019 comments. The proposed Representation Dictionary Duality concept is explained in detail and the notation inconsistencies throughout the text are corrected. In addition the current literature is updated and the differences between the proposed understanding and the literature is made clear.\\n\\n-We have highlighted the differences of our understanding from that of Papyan et. al. We have not included more foundational papers including Gregor et. al. inside the text plainly to simplify the text. That was a clear mistake and we have now included references into the revised paper. To discuss the differences of our work from what is already published, we highlight few points: \\n--Gregor et. al. have used ISTA algorithm and they have successfully implemented iterative algorithm with a time unfolded recursive neural network, which can be seen as a feed-forward network. Then the architecture is fine-tuned with experimental results\\n--Bronstein et. al. have worked on a shift of understanding in that, what they present with a neural network is not a regressor that is approximating an iterative algorithm, but itself is a full featured sparse coder. \\n-Our work diverges from theirs in showing how a convolutional neural network is able to learn image representation and reconstruction for SR problem. We have united inverse problem approaches, Deep Learning Based and Dictionary Based methods in a representation-dictionary duality concept. We have showed that during training, neuron filters learn from input images as if the input patches constituted a dictionary for representation. Therefore different from literature the neuron parameters (filters) become representations themselves. And we show that during testing (scoring) learned filters become the dictionaries for reconstruction. 
This is now made clearer in the text.\\n\\n-L1 norm minimization is not the crucial part of our work since only capture the mathematical background and optimality of the solutions. We were only repeating how L2 norm minimization based algorithms have defended their reasoning from changing from L1 norm to L2 norm. We edited this part.\\n\\n-Figure 1 is not wrong, but previous notation changes could have confused the reviewers and we fixed this in revised paper. The f is high res data that is blurred and downsampled with K. The g is the observation therefore we are trying to estimate highres data by estimating f. This figure is used to sum up the different parts that we have brought together. We hoped it would be useful in understanding the crux of the paper.\\n\\n-Describing the results as \\u201cresembling the training data\\u201d was an unfortunate choice of words. The purpose of the experiment was to visualize RDD concept which really states that the Network learns predominant features from the training set, not the images themselves. Since we have reduced the training set to a narrow orientation single edged image database, first layer filters tend to be oriented in the same direction which is a visualization of RDD. This does not correspond to resemblance of filters to the data set itself. We have corrected this in the text.\\n\\n-We corrected the typos.\"}",
"{\"title\": \"Review of: CNNs as Inverse Problem Solvers and Double Network Superresolution\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper discusses using neural networks for super-resolution. The positive aspects of this work are that the use of two neural networks in tandem for this task may be interesting, and the authors attempt to discuss the network's behavior by drawing relations to successful sparsity-based super-resolution. Unfortunately I cannot see any novelty in the relationship the authors draw to LASSO style super-resolution and dictionary learning beyond what is already in the literature (see references below), including in one reference that the authors cite. In addition, there are a number of sloppy mistakes (e.g. Equation 10 as a clear copy-paste error) in the manuscript. Given that much of the main result seems to already be known, I feel that this work is not novel enough at this time.\", \"some_other_minor_points_for_the_authors_to_consider_for_future_iterations_of_this_work\": [\"The authors mention the computational burden of solving L1-regularized optimizations. A lot of work has been done to create fast, efficient solvers in many settings (e.g. homotopy, message passing etc.). Are these methods still insufficient in some applications? If so, which applications of interest are the authors considering?\", \"In figure 1, it seems that under \\\"superresolution problem\\\": 'f' should be 'High res data' and 'g' should be 'Low res data' instead of what is there. I'm also not sure how this figure adds to the information already in the text.\", \"In the results, the authors mention how some network features represented by certain neurons resemble the training data. This seems like over-training and not a good quality for generalization. 
The authors should clarify if, and why, this might be a good thing for their application.\", \"Overall a heavy editing pass is needed to fix a number of typos throughout.\"], \"references\": \"[1] K. Gregor and Y. LeCun , \\u201cLearning fast approximations of sparse coding,\\u201d in Proc. Int. Conf. Mach. Learn., 2010, pp. 399\\u2013406.\\n[2] P. Sprechmann, P. Bronstein, and G. Sapiro, \\u201cLearning efficient structured sparse models,\\u201d in Proc. Int. Conf. Mach. Learn., 2012, pp. 615\\u2013622.\\n[3] M. Borgerding, P. Schniter, and S. Rangan, ``AMP-Inspired Deep Networks for Sparse Linear Inverse Problems [pdf] [arxiv],\\\" IEEE Transactions on Signal Processing, vol. 65, no. 16, pp. 4293-4308, Aug. 2017.\\n[4] V. Papyan*, Y. Romano* and M. Elad, Convolutional Neural Networks Analyzed via Convolutional Sparse Coding, accepted to Journal of Machine Learning Research, 2016.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The method proposes a new architecture for solving image super-resolution task. They provide an analysis that aims to establish a connection between CNNs for solving super resolution and sparse regularized inverse problems.\\n\\nThe writing of the paper needs improvement. I was not able to understand the proposed connection, as notation is inconsistent and it is difficult to figure out what the authors are stating. I am willing to reconsider my evaluation if the authors provide clarifications.\\n\\nThe paper does not refer to recent advances in the problem, which are (as far as I know), the state of the art in the problem in terms of quality of the solutions. These references should be added and the authors should put their work into context.\\n\\n1) Arguably, the state of the art in super resolution are techniques that go beyond L2 fitting. Specifically, methods using perceptual losses such as:\\n\\nJohnson, J. et al \\\"Perceptual losses for real-time style transfer and super-resolution.\\\" European Conference on Computer Vision. Springer International Publishing, 2016.\\n\\nLedig, Christian, et al. \\\"Photo-realistic single image super-resolution using a generative adversarial network.\\\" arXiv preprint arXiv:1609.04802 (2016).\\n\\nPSNR is known to not be directly related to image quality, as it favors blurred solutions. This should be discussed.\\n\\n2) The overall notation of the paper should be improved. For instance, in (1), g represents the observation (the LR image), whereas later in the text, g is the HR image. \\n\\n3) The description of Section 2.1 is quite confusing in my view. In equation (1), y is the signal to be recovered and K is just the downsampling plus blurring. So assuming an L1 regularization in this equation assumes that the signal itself is sparse. Equation (2) changes notation referring y as f. 
\\n\\n4) Equation (2) seems wrong. The term multiplying K^T is not the norm (should be parenthesis).\\n\\n5) The first statement of Section 2.2. seems wrong. DL methods do state the super resolution problem as an inverse problem. Instead of using a pre-defined basis function they learn an over-complete dictionary from the data, assuming that natural images can be sparsely represented. Also, this section does not explain how DL is used for super resolution. The cited work by Yang et al learns a two coupled dictionaries (one for LR and HL), such that for a given patch, the same sparse coefficients can reconstruct both HR and LR patches. The authors just state the sparse coding problem.\\n\\n6) Equation (10) should not contain the \\\\leq \\\\epsilon.\\n\\n7) In the second paragraph of Section 3, the authors mention that the LR image has to be larger than the HR image to prevent border effects. This makes sense. However, with the size of the network (20 layers), the change in size seems to be quite large. Could you please provide the sizes? When measuring PSNR, is this taken into account? \\n\\n8) It would be very helpful to include an image explaining the procedure described in the second paragraph of Section 3.\\n\\n9) I find the description in Section 3 quite confusing. The authors relate the training of a single filter (or neuron) to equation (7), but they define D, that is not used in all of Section 2.1. And K does not show in any of the analysis given in the last paragraph of page 4. However, D and K seem two different things (it is not just one for the other), see bellow.\\n\\n10) I cannot understand the derivation that the authors do in the last paragraph of page 4 (and beginning of page 5). What is phi_l here? K in equation (7) seems to match to D here, but D here is a collection of patches and in (7) is a blurring and downsampling operator. I cannot review this section. 
I will wait for the author's response clarifications.\\n\\n11) The authors describe a change in roles between the representations and atoms in the training and testing phase respectively. I do not understand this. If I understand correctly, the final algorithm, the authors train a CNN mapping LR to HR images. The network is used in the same way at training and testing.\\n\\n12) It would be useful to provide more details about the training of the network. Please describe the training set used by Kim et al. Are the two networks trained independently? One could think of fine-tuning them jointly (including the aggregation).\\n\\n13) The authors show the advantage of separating networks on a single image, Barbara. It would be good to quantify this better (maybe in terms of PSNR?). This observation might be true only because the training loss, say than the works cited above. Please comment on this.\\n\\n14) In figures 3 and 4, the learned filters are those on the top (above the yellow arrow). It is not obvious to me that the reflect the predominant structure in the data. (maybe due to the low resolution).\\n\\n15) This work is related to (though clearly different) that of LISTA (Learned ISTA) type of networks, proposed in:\\n\\nGregor, K., & LeCun, Y. (2010). Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML) \\n\\nWhich connect the network architecture with the optimization algorithm used for solving the sparse coding problem. Follow up works have used these ideas for solving inverse problems as well.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper addresses the question of how to solve image super-resolution, building on a connection between sparse regularization and neural networks.\\nReviewers agreed that this paper needs to be rewritten, taking into account recent work in the area and significantly improving the grammar. The AC thus recommends rejection at this time.\"}"
]
} |
S1DWPP1A- | Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration | [
"Alexandre Péré",
"Sébastien Forestier",
"Olivier Sigaud",
"Pierre-Yves Oudeyer"
] | Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations. | [
"exploration; autonomous goal setting; diversity; unsupervised learning; deep neural network"
] | Accept (Poster) | https://openreview.net/pdf?id=S1DWPP1A- | https://openreview.net/forum?id=S1DWPP1A- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"B1JbLzTQz",
"HJcQvaVef",
"rywYwMaXz",
"SkrQDzp7M",
"SJwTz16Sf",
"ByvGgjhez",
"Sk-lOfTmz",
"ByeHdGpXM",
"S1h1Bz6Qf",
"Bk9oIe5gG"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1515165175465,
1511474977824,
1515165566541,
1515165469849,
1517249214990,
1511989262647,
1515165673224,
1515165752107,
1515164899968,
1511814817555
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper132/Authors"
],
[
"ICLR.cc/2018/Conference/Paper132/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper132/Authors"
],
[
"ICLR.cc/2018/Conference/Paper132/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper132/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper132/Authors"
],
[
"ICLR.cc/2018/Conference/Paper132/Authors"
],
[
"ICLR.cc/2018/Conference/Paper132/Authors"
],
[
"ICLR.cc/2018/Conference/Paper132/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Specific responses to reviewer 3\", \"comment\": \"> R3 \\\"does not include significant explanation for the results\\\", \\\"The figure captions are all very \\\"matter-of-fact\\\" and, while they explain what each figure shows, provide no explanation of the results.\\\"\\nWe agree. We have added several more detailed explanations of the results.\\n\\n> R3 \\\"why many of the deep representation techniques do not perform very well.\\\"\\nWe think this comment is due to our unclear explanation of our main target combined with the use of a misleading measure (MSE). We hope the new explanation we provide, as well as the focus on exploration measures based on the KL divergence will enable to make it more clear that on the contrary several deep learning approaches are performing very well, some systematically outperforming the use of handcrafted goal space features (see the common answer to all reviewers).\\n\\n> R3 \\\"The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter. What would happen if a dimension of 2 were chosen? Would the performance of the deep representation models improve? Would their performance rival that of RGE-FI?\\\"\\n\\nWe agree that this is a very important point. We have in the new version included results when one gives algorithms the right number of dimensions (2 for arm-ball, 3 for arm-arrow), and showing that providing more dimensions to IMGEP-UGL algorithms than the \\\"true\\\" dimensionality of the phenomenon can actually be beneficial (and we provide an explanation why this is the case). \\n\\n> \\\"The authors do not list how many observations they are given before the deep representations are learned. Why is this? Additionally, is it possible that not enough data was provided?\\\"\\n\\nFor each environments, we trained the networks with a dataset of 10.000 elements uniformly sampled in the underlying state-space. 
This corresponds to 100 samples per dimension for the 'armball' environment, and around 20 per dimension for the 'armarrow' environment. This is not far from the number of samples considered in the dsprite dataset, in which around 30 samples per dimensions are considered. Moreover, our early experiments showed that for those two particular problems, adding more data did not change the exploration results.\\n\\n> \\\"- The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text. It would improve the clarity of the paper.\\\"\\n\\nWe have tried to better explain in words the general principles of this algorithm. \\n\\n> \\\"The authors need to be clearer about their notation in a number of places. For instance, they use gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup.\\\"\\n\\nWe have tried to correct these problems in notations.\\n\\n> \\\"It is never explicitly mentioned exactly how the deep representation learning methods will be used. It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained).\\\"\\n\\nyes indeed. We have added some new explanations.\"}",
"{\"title\": \"Review of Unsupervised Learning of Goal Spaces for Intrinsically Motivated Exploration\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper introduces a representation learning step in the Intrinsically Motivated Exploration Process (IMGEP) framework.\\n\\nThough this work is far from my expertise fields I find it quite easy to read and a good introduction to IMGEP.\\nNevertheless I have some major concerns that prevent me from giving an acceptance decision.\\n\\n1) The method uses mechanisms than can project back and forth a signal to the \\\"outcome\\\" space. Nevertheless only the encoder/projection part seems to be used in the algorithm presented p6. For example the encoder part of an AE/VAE is used as a preprocesing stage of the phenomenon dynamic D. It should be obviously noticed that the decoder part could also be used for helping the inverse model I but apparently that is not the case in the proposed method.\\n\\n2) The representation stage R seems to be learned at the beginning of the algorithm and then fixed. When using DNN as R (when using AE/VAE) why don't you propagate a gradient through R when optimizing D and I ? In this way, learning R at the beginning is only an old good pre-training of DNN with AE.\\n\\n3) Eventually, Why not directly considering R as lower layers of D and using up to date techniques to train it ? (drop-out, weight clipping, batch normalization ...).\\nWhy not using architecture adapted to images such as CNN ?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Specific responses to reviewer 1 (part 2)\", \"comment\": \"> R1 \\\"The representation learning is only a preprocessing step requiring a magic first phase.\\n> -> Representation is not updated during exploration\\\"\\n> \\\"- only the ball/arrow was in the input image, not the robotic arm. I understand this because in phase 1 the robot would not move, but this connects to the next point:\\\"\\n\\nIndeed, representation is not updated during exploration, and as mentioned in the conclusion we think doing this is a very important direction for future work. However, we have two strong justification for this decomposition, that we added in the paper.\\n\\nFirst, we do not believe the preliminary pre-processing step is \\\"magical\\\". Indeed, if one studies the work from the developmental learning perspective outlined in the introduction, where one takes inspiration from the processes of learning in infants, then this decomposition corresponds to a well-known developmental progression: in their first few weeks, motor exploration in infants is very limited (due to multiple factors), while they spend a considerable amount of time observing what is happening in the outside world with their eyes (e.g. observing images of others producing varieties of effects on objects). During this phase, a lot of perceptual learning happens, and this is reused later on for motor learning (infant perceptual development often happens ahead of motor development in several important ways). In the article, the concept of \\\"social guidance\\\" presented in the introduction, and the availability of a database of observations of visual effects that can happen in the world, can be seen as a model of this first phase of infant learning by passively observing what is happening around them.\\n\\nA second justification for this decomposition is more methodological. It is mainly an experimental tool for better understanding what is happening. 
Indeed, the underlying algorithmic mechanisms are already quite complex, and analyzing what is happening when one decomposes learning in these two phases (representation learning, then exploration) is an important scientific step. Presenting in the same article another study where representations would be updated continuously would result in too much material to be clearly presented in a conference paper.\\n\\n> R1 \\\"the input space was very simple in all experiments, not suitable for distinguishing between the algorithms, for instance, ISOMap typically suffers from noise and higher dimensional manifolds\\\"\\n\\nThe use of the term \\\"simple\\\" depends on the perspective. From the perspective of a classical goal exploration process that would use the 4900 raw pixels as input, not knowing they are pixels and considering them similarly as when engineered representations are provided, then this is a complicated space and exploration is very difficult. At the same time, from the point of view of representation learning algorithms, this is indeed a moderately complex input space (yet, we on purpose did not consider convolutionnal auto-encoders so that the task is not too simplified and results could apply to other modalities such as sound or proprioception). Third, if one considers the dimensionality of the real sensorimotor manifold in which action is happening (2 for arm-ball, 3 for arm-arrow), this does not seem to us to be too unrealistic as many of real world sensorimotor tasks are actually happening in low-dimensional task spaces (e.g. rigid object manipulation happens in a 6D task space). So, overall we have chosen these experimental setups as we belive they are a good compromise between simplicity (enabling us to understand well what is happening) and complexity (if one considers the learner does not already knows that the stimuli are pixels of an image).\"}",
"{\"title\": \"Specific responses to reviewer 1 (part 1)\", \"comment\": \"> R1 \\\"an agent that has no intrinsic motivation other than trying to achieve random goals.\\\"\\n\\\"There is nothing new with the intrinsically motivated selection of goals here, just that they are in another space. Also, there is no intrinsic motivation. I also think the title is misleading.\\\"\\n\\nThe concept of \\\"intrinsically motivated learning and exploration\\\" is not yet completely well-defined across (even computionational) communities, and we agree that the use of the term \\\"intrinsically motivated exploration\\\" in this article may seem unusual for some readers. However, we strongly think it makes sense to keep it for the following reasons.\\n\\nThere are several conceptual approaches to the idea of \\\"intrinsically motivated learning and exploration\\\", and we believe our use of the term intrinsic-motivation is compatible with all of them:\\n\\n- Focus on task-independance and self-generated goals: one approach of intrinsic motivation, rooted in its conceptual origins in psychology, is that it designates the set of mechanisms and behaviours of organized exploration which are not directed towards a single extrinsically imposed goal/problem (or towards fullfilling physiological motivations like food search), but rather are self-organized towards intrinsically defined objectives and goals (independant of physiological motivations like food search). From this perspective, mechanisms that self-generate goals, even randomly, are maybe the simplest and most prototypical form of intrinsically motivated exploration. 
\\n \\n- Focus on information-gain or competence-gain driven exploration: Other approaches consider that intrinsically motivated exploration specifically refers to mechanisms where choices of actions or goals are based on explicit measures of expected information-gain about a predictive model, or novelty or surprise of visited states, or competence gain for self-generated goals. In the IMGEP framework, this corresponds specifically to IMGEP implementations where the goal sampling procedure is not random, but rather based on explicit estimations of expected competence gain, like in the SAGG-RIAC architecture or in modular IMGEPs of (Forestier et al., 2017). In the experiments presented in this article, the choice of goals is made randomly as the focus is not on the efficiency of the goal sampling policy. However, it would be straightforward to use a selection of goals based on expected competence gain, and thus from this perspective the proposed algorithm adresses the general problem of how to learn goal representations in IMGEPs.\\n\\n- Focus on noverly/diversity search mechanisms: Yet another approach to intrinsically motivated learning and exploration is one that refers to mechanisms that organize the learner's exploration so that exploration of novel or diverse behaviours is fostered. A difference with the previous approach is that here one does not necessarily use internally a measure of novelty or diversity, but rather one uses it to characterize the dynamics of the behaviour. And an interesting property of random goal exploration implementations of IMGEPs is that while it does not measure explicitly novelty or diversity, it does in fact maximize it through the following mechanism: from the beginning and up to the point where the a large proportion of the space has been discovered, generating random goals will very often produce goals that are outside the convex hull of already discovered goals. 
This in turn mechanically leads to exploration of stochastic variants of motor programs that produce outcomes on the convex hull, which statistically pushes the convex hull further, and thus fosters exploration of motor programs that have a high probability to produce novel outcomes outside the already known convex hull.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper aims to improve on the intrinsically motivated goal exploration framework by additionally incorporating representation learning for the space of goals. The paper is well motivated and follows a significant direction of research, as agreed by all reviewers. In particular, it provides a means for learning in complex environments, where manually designed goal spaces would not be available in practice. There had been significant concerns over the presentation of the paper, but the authors put great effort in improving the manuscript according to the reviewers\\u2019 suggestions, raising the average rating by 2 points after the rebuttal.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Some interesting ideas, yet no clear message\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"[Edit: After revisions, the authors have made a good-faith effort to improve the clarity and presentation of their paper: figures have been revised, key descriptions have been added, and (perhaps most critically) a couple of small sections outlining the contributions and significance of this work have been written. In light of these changes, I've updated my score.]\", \"summary\": \"The authors aim to overcome one of the central limitations of intrinsically motivated goal exploration algorithms by learning a representation without relying on a \\\"designer\\\" to manually specify the space of possible goals. This work is significant as it would allow one to learn a policy in complex environments even in the absence of a such a designer or even a clear notion of what would constitute a \\\"good\\\" distribution of goal states.\\n\\nHowever, even after multiple reads, much of the remainder of the paper remains unclear. Many important details, including the metrics by which the authors evaluate performance of their work, can only be found in the appendix; this makes the paper very difficult to follow.\\n\\nThere are too many metrics and too few conclusions for this paper. The authors introduce a handful of metrics for evaluating the performance of their approach; I am unfamiliar with a couple of these metrics and there is not much exposition justifying their significance and inclusion in the paper. Furthermore, there are myriad plots showing the performance of the different algorithms, but very little explanation of the importance of the results. For instance, in the middle of page 9, it is noted that some of the techniques \\\"yield almost as low performance as\\\" the randomized baseline, yet no attempt is made to explain why this might be the case or what implications it has for the authors' approach. 
This problem pervades the paper: many metrics are introduced for how we might want to evaluate these techniques, yet there is no provided reason to prefer one over another (or even why we might want to prefer them over the classical techniques).\", \"other_comments\": [\"There remain open questions about the quality of the MSE numbers; there are a number of instances in which the authors cite that the \\\"Meta-Policy MSE is not a simple to interpret\\\" (The remainder of this sentence is incomplete in the paper), yet little is done to further justify why it was used here, or why many of the deep representation techniques do not perform very well.\", \"The authors do not list how many observations they are given before the deep representations are learned. Why is this? Additionally, is it possible that not enough data was provided?\", \"The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter. What would happen if a dimension of 2 were chosen? Would the performance of the deep representation models improve? Would their performance rival that of RGE-FI?\", \"The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text. It would improve the clarity of the paper.\", \"The authors need to be clearer about their notation in a number of places. For instance, they use \\\\gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup.\", \"It is never explicitly mentioned exactly how the deep representation learning methods will be used. It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained).\", \"In short, the paper has some interesting ideas, yet lacks a clear takeaway message. 
Instead, it contains a large number of metrics and computes them for a host of different possible variations of the proposed techniques, and does not include significant explanation for the results. Even given my lack of expertise in this subject, the paper has some clear flaws that need addressing.\"], \"pros\": [\"A clear, well-written abstract and introduction\", \"While I am not experienced enough in the field to really comment on the originality, it does seem that the approach the authors have taken is original, and applies deep learning techniques to avoid having to custom-design a \\\"feature space\\\" for their particular family of problems.\"], \"cons\": [\"The figure captions are all very \\\"matter-of-fact\\\" and, while they explain what each figure shows, provide no explanation of the results. The figure captions should be as self-contained as possible (I should be able to understand the figures and the implications of the results from the captions alone).\", \"There is not much significance in the current form of the paper, owing to the lack of clear message. While the overarching problem is potentially interesting, the authors seem to make very little effort to draw conclusions from their results. I.e. it is difficult for me to easily visualize all of the \\\"moving parts\\\" of this work: a figure showing the relationship bet\", \"Too many individual ideas are presented in the paper, hurting clarity. As a result, the paper feels scattered. The authors do not have a clear message that neatly ties the results together.\"], \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Specific responses to reviewer 1 (part 3)\", \"comment\": \"> R1 \\\"The performance of any algorithm (except FI) in the Arm-Arrow task is really bad but without comment.\\\"\", \"see_general_answer_and_new_graphs_in_the_paper\": \"most algorithms actually perform very well from the main perspective of interest in the paper (exploration efficiency).\\n\\n> R1 \\\"- I am skeptical about the VAE and RFVAE results. If there are not mistakes then this is indeed alarming.\\\"\\n> R1 \\\"- The main problem seems to be that some algorithms are not representing the whole input space.\\n\\nFollowing your remark, we double checked the code and made an in depth verification of results. A small bug indeed existed, which made the projection of points in latent space wider than it should be. This was fixed in those new experiments, and we validated that the whole input space was represented in the latent representation. Despite this, it didn't changed the conclusion drawn in the original paper. Indeed, our new results show the same type of behavior as in the first version, in particular:\\n\\t+ The exploration performances for VAE with KDE goal sampling distribution are still above Gaussian goal Sampling. Our experiments showed that convergence on the KL term of the loss can be more or less quick depending on the initialization. Since we used an number of iterations as stopping criterion for our trainings (based on early experiments), we found that sometimes, at stop, despite achieving a low reconstruction error, the divergence was still pretty high. In those cases the representation was not perfectly matching an isotropic gaussian, which lead to biased sampling.\\n + The performances of the RFVAE are still worse than any other algorithms. 
Our experiments showed that they introduce a lot of discontinuities in the representation, which along with physics boundaries of achievable states, can generate \\\"pockets\\\" in the representation from which a Random Goal Exploration can't escape. This would likely be different for a more advanced exploration strategy such as Active Goal exploration. \\n \\n> R1 - Is it true that the robot always starts from same initial condition?! Context=Emptyset. \\n\\nyes. In (Forestier et al., ICDL-Epirob 2016), a similar setup is used except that the starting conditions are randomized at each new episode (and that goal representation are engineered): they show that the dynamics of exploration scales wells. Here we chose to start from the same initial condition to be able to display clearly in 2D the full space of discovered outcomes (if one would include the starting ball position, this would be a 4D space). \\n\\n> R1 - For ISOMap etc, you also used a 10dim embedding?\\n\\nyes.\\n\\n>In the related literature, in particular concerning the intrinsic motivation, I think the following papers are relevant:\\n>J. Schmidhuber, PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Front. Psychol., 2013.\\n>and\\n>G. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.\\n\\nyes, these are relevant papers indeed, which are cited in reviews we cite, but we added them for more coverage.\"}",
"{\"title\": \"Specific responses to reviewer 2\", \"comment\": \"These comments suggest that the reviewer thinks that in the particular experiment we made, and thus the particular implementation of IMGEPs we used, we are training a single large neural network for learning forward and inversed models. We could have done this indeed, and in that case the reviewer' suggestion would recommend very relevantly to use the lower-layers and/or decoding projection of the (variational) auto-encoders. However, we are not using neural networks for learning forward and inverse models, but rather non-parametric methods based on memorizing examplars associating the parameters of DMPs and their outcomes in the embedding space (which itself comes from auto-encoders),\\nin combination with local online regression models and optimization on these local models. This approach comes from the field of robotics, where is has shown extremely efficient for fast incremental learning of forward and inverse models. Comparing this approach with a full neural network approach (which might generalize better but have difficulties for fast incremental learning) would be a great topic for another paper. In the new version of the article, we have tried to improve the clarity of the description of the particular implementation of IMGEPs we have used.\"}",
"{\"title\": \"General answer to all reviewers\", \"comment\": \"We thank all reviewers for their detailed comments, which have helped us a lot to improve our paper. On one hand, we appreciate that all reviewers found the overall approach interesting and important.\\nOn the other hand, we agree with reviewers that there were shortcomings in paper, and we thank them for pointing ways in which it could be improved, which we have attempted to do in the new version of the article, that includes both new explanations and new experimental results. \\n\\nThe main point of the reviewers was that our text did not identify concisely and clearly the main contributions and conclusions of this article, and in particular did not enable the reader to rank the importance and focus of these contributions (from our point of view). The comment of reviewer R1, summarizing our contributions, actually shows that we have not explained clearly enough what was our main target contribution (see below).\\nWe have added an explicit paragraph at the end of the introduction to outline and rank our contributions, as well as a paragraph at the beginning of the experimental section to pin point the specific questions to which the experiments provide an answer. We hope the messages are now much clearer.\\n\\nAnother point was that our initial text contained too many metrics, and lacked justification of their choices and relative importance. We have rewritten the results sections by focusing in more depth on the most important metrics (related to our target contributions), updating some of them with more standard metrics, and removing some more side metrics. The central property we are interested in in this article is the dynamics and quality of exploration of the outcome space, characterizing the (evolution of the) distribution of discovered outcomes, i.e. the diversity of effects that the learner discovers how to produce. 
In the initial version of the article, we used an ad hoc measure called \\\"exploration ratio\\\" to characterize the evolution of the global quality of exploration of an algorithm. We have now replaced this ad hoc measure with a more principled and more precise measure: the KL divergence between the discovered distribution of outcomes and the distribution produced by an oracle (= uniform distribution of points over the reachable part of the outcome space). This new measure is more precise as it much better takes into account the set of roll-outs which do not make the ball/arrow move at all. In the new version of the article, we can now see that this more precise measure enables to show that several algorithms actually approximate extremely well the dynamics of exploration IMGEPs using a goal space with engineered features, and that even some IMGEP-UGL algorithms (RGE-VAE) systematically outperform this baseline algorithm. Furthermore, we have now included plots of the evolution of the distribution of discovered outcomes in individual runs to enable the reader to grasp more clearly the progressive exploration dynamics for each algorithms.\\n\\nAnother point was that the MSE measure used in the first version of the article was very misleading. Indeed, it did not evaluate the exploration dynamics, but rather it evaluated a peculiar way to reuse in combination both the discovered data points and the learned representation in a particular kind of test (raw target images were given to the learner). This was misleading because 1) we did not explain well that it was evaluating this as opposed to the main target of this article (distribution of outcomes); 2) this test evaluates a rather exotic way to reuse the discovered data points (previous papers reused the discovered data in other ways). 
This lead R1 to infer that the algorithms were not not working well in comparison with the \\u201cFull Information\\u201d (FI) baseline (now called EFR, for \\\"Engineered Feature Representation\\\"): on the contrary, several IMGEP-UGL algorithms actually perform better from the perspective we are interested in here. As the goal of this paper is not to study how the discovered outcomes can be reused for other tasks, we have removed the MSE measures.\"}",
"{\"title\": \"Interesting, but not substantial enough -> updated now good enough\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper investigates different representation learning methods to create a latent space for intrinsic goal generation in guided exploration algorithms. The research is in principle very important and interesting.\\n\\nThe introduction discusses a great deal about intrinsic motivations and about goal generating algorithms. This is really great, just that the paper only focuses on a very small aspect of learning a state representation in an agent that has no intrinsic motivation other than trying to achieve random goals.\\nI think the paper (not only the Intro) could be a bit condensed to more concentrate on the actual contribution. \\n\\nThe contribution is that the quality of the representation and the sampling of goals is important for the exploration performance and that classical methods like ISOMap are better than Autoencoder-type methods. \\n\\nAlso, it is written in the Conclusions (and in other places): \\\"[..] we propose a new intrinsically Motivated goal exploration strategy....\\\". This is not really true. There is nothing new with the intrinsically motivated selection of goals here, just that they are in another space. Also, there is no intrinsic motivation. I also think the title is misleading.\\n\\nThe paper is in principle interesting. However, I doubt that the experimental evaluations are substantial enough for profound conclusion.\", \"several_points_of_critic\": [\"the input space was very simple in all experiments, not suitable for distinguishing between the algorithms, for instance, ISOMap typically suffers from noise and higher dimensional manifolds, etc.\", \"only the ball/arrow was in the input image, not the robotic arm. 
I understand this because in phase 1 the robot would not move, but this connects to the next point:\", \"The representation learning is only a preprocessing step requiring a magic first phase.\", \"-> Representation is not updated during exploration\", \"The performance of any algorithm (except FI) in the Arm-Arrow task is really bad but without comment.\", \"I am skeptical about the VAE and RFVAE results. The difference between Gaussian sampling and the KDE is a bit alarming, as the KL in the VAE training is supposed to match the p(z) with N(0,1). Given the power of the encoder/decoder it should be possible to properly represent the simple embedded 2D/3D manifold and not just a very small part of it as suggested by Fig 10.\", \"I have a hard time believing these results. I urge you to check for any potential errors made. If there are not mistakes then this is indeed alarming.\"], \"questions\": [\"Is it true that the robot always starts from same initial condition?! Context=Emptyset.\", \"For ISOMap etc, you also used a 10dim embedding?\"], \"suggestion\": \"- The main problem seems to be that some algorithms are not representing the whole input space.\\n- an additional measure that quantifies the difference between true input distribution and reproduced input distribution could tier the algorithms apart and would measure more what seems to be relevant here. One could for instance measure the KL-divergence between the true input and the sampled (reconstructed) input (using samples and KDE or the like). 
\\n- This could be evaluated on many different inputs (also those with a bit more complicated structure) without actually performing the goal finding.\\n- BTW: I think Fig 10 is rather illustrative and should be somehow in the main part of the paper\\n \\nOn the positive side, the paper provides lots of details in the Appendix.\\nAlso, it uses many different Representation Learning algorithms and uses measures from manifold learning to access their quality.\\n\\nIn the related literature, in particular concerning the intrinsic motivation, I think the following papers are relevant:\\nJ. Schmidhuber, PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Front. Psychol., 2013.\\n\\nand\\n\\nG. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.\", \"typos_and_small_details\": \"\", \"p3_par2\": \"for PCA you cited Bishop. Not critical, but either cite one the original papers or maybe remove the cite altogether\", \"p4_par_2\": \"has multiple interests...: interests -> purposes?\", \"p4_par_1\": \"Outcome Space to the agent is is ...\\nSec 2.2 par1: are rapidly mentioned... -> briefly\\nSec 2.3 ...Outcome Space O, we can rewrite the architecture as:\\n and then comes the algorithm. This is a bit weird\", \"sec_3\": \"par1: experimental campaign -> experiments?\", \"p7\": \"Context Space: the object was reset to a random position or always to the same position?\", \"footnote_14\": \"superior to -> larger than\", \"p8_par2\": \"Exploration Ratio Ratio_expl... probably also want to add (ER) as it is later used\", \"sec_4\": \"slightly underneath -> slightly below\", \"p9_par1\": \"unfinished sentence: It is worth noting that the....\", \"one_sentence_later\": \"RP architecture? RPE?\", \"fig_3\": \"the error of the methods (except FI) are really bad. An MSE of 1 means hardly any performance!\", \"p11_par2\": \"for e.g. 
with the SAGG..... grammar?\", \"plots_in_general\": \"use bigger font sizes.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
r1ISxGZRb | Generation and Consolidation of Recollections for Efficient Deep Lifelong Learning | [
"Matt Riemer",
"Michele Franceschini",
"and Tim Klinger"
] | Deep lifelong learning systems need to efficiently manage resources to scale to large numbers of experiences and non-stationary goals. In this paper, we explore the relationship between lossy compression and the resource constrained lifelong learning problem of function transferability. We demonstrate that lossy episodic experience storage can enable efficient function transferability between different architectures and algorithms at a fraction of the storage cost of lossless storage. This is achieved by introducing a generative knowledge distillation strategy that does not store any full training examples. As an important extension of this idea, we show that lossy recollections stabilize deep networks much better than lossless sampling in resource constrained settings of lifelong learning while avoiding catastrophic forgetting. For this setting, we propose a novel dual purpose recollection buffer used to both stabilize the recollection generator itself and an accompanying reasoning model. | [
"generation",
"consolidation",
"recollections",
"efficient deep lifelong",
"resource",
"lifelong",
"deep lifelong",
"systems",
"resources",
"large numbers"
] | Reject | https://openreview.net/pdf?id=r1ISxGZRb | https://openreview.net/forum?id=r1ISxGZRb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"ryAT4O6QM",
"ByV-gu6mG",
"S1iEoBnlf",
"HyMjedamG",
"B1PkwyTSG",
"B1GkSWIWM",
"rJ9c-u67z",
"ryfA9SYez"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1515189445549,
1515188220104,
1511967539076,
1515188377977,
1517250270860,
1512604890473,
1515188626080,
1511770826142
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper768/Authors"
],
[
"ICLR.cc/2018/Conference/Paper768/Authors"
],
[
"ICLR.cc/2018/Conference/Paper768/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper768/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper768/AnonReviewer4"
],
[
"ICLR.cc/2018/Conference/Paper768/Authors"
],
[
"ICLR.cc/2018/Conference/Paper768/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Response to Comments By Reviewer 4\", \"comment\": \"We appreciate your concern that the VAE itself would suffer from catastrophic forgetting. We have attempted to provide more clarification about how the VAE is able to stabilize itself with self-generated recollections. We have also provided two additional charts to serve as empirical evidence that this happens when training in a continual lifelong learning setting on CIFAR-100. In the left chart of Figure 3, we demonstrate that self-generated recollections can stabilize a lifelong autoencoder as well as real example replay of a comparable resource footprint, and significantly better than online training on CIFAR-100. In Figure 4, we demonstrate that after running our CIFAR-100 models for many training examples on CIFAR-10, the benefit of the extra diversity of experiences we can have when using lossy recollections to prevent forgetting outweighs negative effects associated with forgetting that the VAE experiences.\\n\\nThank you for your question about freezing the decoder parameters before each incoming experience. We have now made this clearer in section 2.4. Instead of freezing the decoder parameters and keeping two copies, we can simply forward propagate for all of the replay mini-batches associated with learning the current example ahead of time. This feature, as well as the encoding and decoding of memories and training of the autoencoder, do indeed add computation over lossless methods. In our experiments, however, these costs were pretty negligible as the computation associated with the larger Resnet-18 reasoning model overshadows the computation associated with our much smaller VAEs. 
While this strategy does indeed add some computation for our approach, it is also critical for stabilizing the training of the autoencoder for continual lifelong learning, as we explain in section 2.4.\\n\\nThe techniques you mentioned for comparison are, unfortunately, not suited for the resource constrained lifelong learning problems we explore in this paper. We tried LwF for continual learning saving the old model parameters after every task on CIFAR-100 and found it to be very ineffective in terms of performance. It is also very computationally expensive later in training as the number of terms in the loss function grow linearly with the number of tasks. This result aligns with the experiments in (Lopez-Paz & Ranzato, NIPS 2017) that found a similar forgetting prevention technique EwC (Kirkpatrick et al., PNAS 2017) to be less effective than episodic techniques for lifelong learning on CIFAR-100. This is largely because forgetting prevention techniques focus on retaining poor performance on early tasks while episodic storage techniques continue to improve on these tasks when they learn relevant concepts later. \\n\\nProgressive Neural Networks have not been shown to scale to the number of tasks and deep residual network architectures that we consider. This is because model parameters scale even more than linearly with the number of tasks due to lateral connections with all prior task representations at each layer. As a result, each new task adds more parameters than the task before it. Our approach is not reliant on human defined tasks to work. Additionally, incremental storage and computation costs from adding model parameters with each task for such a large model consumes far more resources than the episodic storage footprints we consider in our experiments. We should also note that the work of (Rusu et al., 2016) is not directly comparable to ours in that their few task reinforcement learning experiments are not performing continual learning. 
They use A3C, which may be superior to experience replay methods in terms of wall clock time for convergence. However, A3C involves multiple agents performing RL episodes at the same time on different threads, which is not the same as continual learning of a single agent. While A3C is fast in terms of wall clock time, it is not efficient with the total number of episodes needed to reach good performance. On the other hand, this kind of efficiency is an important criterion for the very difficult and ambitious task of continual lifelong learning.\n\nWhile this work does not address the related topic of alleviating negative transfer in multi-task learning, our work does provide a clear advancement in the study of experience replay mechanisms for lifelong learning. Experience storage has been a key component to stabilize training of many of the most successful lifelong learning and reinforcement learning algorithms to date. It is not the goal of this paper to compare this very successful family of methods with other alternatives that function quite differently.\"}",
"{\"title\": \"New Revisions to Address Reviewer Concerns\", \"comment\": \"We would like to thank the reviewers for their time and feedback. To address reviewer concerns about clarity, we substantially reorganized and edited the paper. We hope our revised draft makes both the novelty and motivation of our approach clearer. There are substantial differences with the earlier version due to an adjusted presentation structure, but we did not significantly change the ideas presented.\\n\\nWe will now directly address the concerns raised by each reviewer.\"}",
"{\"title\": \"This paper presents important and timely problem of lifelong learning under resource constraints; the manuscript lacks clarity and structure; limited novelty.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper addresses lifelong learning setting under resource constraints, i.e. how to efficiently manage the storage and how to generalise well with a relatively small diversity of prior experiences. The authors investigate how to avoid storing a lot of original training data points while avoiding catastrophic forgetting at the same time.\\nThe authors propose a complex neural network architecture that has several components. One of the components is a variational autoencoder with discrete latent variables, where the recently proposed Gumbel-softmax distribution is used to efficiently draw samples from a categorical distribution (Jang et al ICLR 2017). Discrete variables are categorical latent variables using 1-hot encoding of the class variables. In fact, in the manuscript, the authors describe one-hot encoding of c classes as l-dimensional representation. Why is it not c-dimentional? Also the class probabilities p_i are not defined in (7). \\nThis design choice is reasonable, as autoencoder with categorical latent variables can achieve more storage compression of input observations in comparison with autoencoders with continuos variables. \\nAnother component of the proposed model is a recollection buffer/generator, a generative module (alongside the main model) which produces pseudo-experiences. These self generated pseudo experiences are sampled from the buffer and are combined with available real samples during training to avoid catastrophic forgetting of prior experiences. This module is inspired by episodic training proposed by Lopez-Paz and Ranzato in ICLR2017 for continual learning. In fact, a recollection buffer for MNIST benchmark has 50K codes to store. How fast would it grow with more tasks/training data? 
Is it suitable for lifelong learning? \n\nMy main concern with this paper is that it is not easy to grasp the gist of it. The paper is 11 pages long and often has sections with weakly related motivations described in detail (essentially it would be good to cut the first 6 pages into half and concentrate on the relevant aspects only). It is easy to get lost in unimportant details, whereas important details on model components are not very clear and not structured. My second concern is limited novelty (from what I understood).\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to Comments By Reviewer 1\", \"comment\": \"We have attempted to address your concern about retention of knowledge by the reasoning model when it is presented with many additional experiences. In Figure 4, we plot CIFAR-100 model performance after switching from continual lifelong learning on CIFAR-100 to the disjoint set of labels from CIFAR-10 for many training examples. Our results highlight that the increased diversity of experiences helps the resource constrained system retain knowledge better when using lossy storage than it does when using comparable lossless storage techniques.\\n\\nIn our continual lifelong learning experiments, we store the task and label index along with the latent code in the recollection buffer, as this information is already very light weight. We have reformatted the presentation of the approach to make this clearer in the paper.\\n\\nRegarding the benefit of the reasoning model not forgetting previously learned knowledge, we would first comment that our approach makes very few assumptions about the reasoning model. This feature would likely be orthogonal and complimentary to our approach in many settings. However, we would like to highlight that our goal isn\\u2019t only to prevent forgetting. Our goal is to navigate the stability-plasticity dilemma in a way that maximizes performance on old and new examples. Experience replay provides an approximation of i.i.d. stationary random input sampling in non-stationary environments, allowing neural networks to effectively optimize for the true objective with the efficacy of offline training in the limit of an unbounded experience buffer size. (Lopez-Paz & Ranzato, NIPS 2017) found EwC (Kirkpatrick et al., PNAS 2017) a popular forgetting prevention technique to be ineffective relative to techniques leveraging episodic storage for continual lifelong learning on CIFAR-100. 
One of the big reasons they found for the performance difference was that EwC focuses on retaining its poor performance on early tasks, while techniques with episodic storage continually improve on old examples as they learn relevant concepts later.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The reviewers were uniformly unimpressed with the contributions of this paper. The method is somewhat derivative and the paper is quite long and lacks clarity. Moreover, the tactic of storing autoencoder variables rather than full samples is clearly an improvement, but it still does not allow the method to scale to a truly lifelong learning setting.\"}",
"{\"title\": \"Deep Lifelong learning with recollections under resource constraints.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents an approach to lifelong learning with episodic experience storage under resource constraints. The key idea of the approach is to store the latent code obtained from a categorical Variational Autoencoder as opposed to the input example itself. When a new task is learnt, catastrophic forgetting is avoided by randomly sampling stored codes corresponding to past experience and adding the corresponding reconstruction to a batch of data from a new problem. The authors show that explicitly storing data provides better results than random sampling from the generative model. Furthermore, the method is compared to other techniques relying on episodic memory and as expected, achieves better results given a fixed effective buffer size due to being able to store more experience.\\n\\nWhile the core idea of this paper is reasonable, it provides little insight into how episodic experience storage compares to related methods as an approach to lifelong learning. While the authors compare their method to other techniques based on experience replay, I feel that a comparison to other techniques is important. A natural choice would be a model which introduces task-specific parameters for each problem (e.g. (Li & Hoiem, 2016) or (Rusu et al., 2016)).\\n\\nA major concern is the fact that the VAE with categorical latents itself suffers from catastrophic forgetting. While the authors propose to \\\"freeze decoder parameters right before each incoming experience and train multiple gradient descent iterations over randomly selected recollection batches before moving on to the next experience\\\", this makes the approach both less straight-forward to apply and more computationally expensive. \\n\\nMoreover, the authors only evaluate the approach on simple image recognition tasks (MNIST, CIFAR-100, Omniglot). 
I feel that an experiment in Reinforcement Learning (e.g. as proposed in (Rusu et al., 2016)) would provide more insight into how the approach behaves in more challenging settings. In particular, it is not clear whether experience replay may lead to negative transfer when subsequent tasks are more diverse.\\n\\nFinally, the manuscript lacks clarity. As another reviewer noted, detailed sections of weakly related motivations fail to strengthen the reader's understanding. As a minor point, the manuscript contains several grammar and spelling mistakes.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to Comments By Reviewer 2\", \"comment\": \"Thank you for your comment about the beginning of the paper. We have significantly reorganized the way we present the ideas to address your comment. Hopefully this also helps highlight some of the novel ideas presented in this paper. Our approach is novel in that it is the first that models hippocampal memory index theory using modern deep neural networks. We are also the first to demonstrate how the theory\\u2019s signature combination of pattern completion and pattern separation work together to enable faster knowledge transfer using recollections. This capability, in turn, leads to a model that can efficiently distill its knowledge to a student network of a different architecture without storing any real examples. Additionally, it can enable more effective experience replay with superior scaling in resource constrained settings of continual lifelong learning.\\n\\nWe have also tried to address your confusion related to the description of the Gumbel-Softmax function. To clarify, we are using c variables that are each l dimensional, implying we use c separate one hot encodings of size l to represent a latent code. This is standard practice for discrete latent variable autoencoders. We adopt conventions from (Jang et al., ICLR 2017) where possible in our presentation of the approach. We have also reworked the presentation of our approach to make the scaling considerations clear. A key benefit of the proposed technique we argue for in section 2.4 is that because of transfer learning, scaling is less than linear with the number of experiences in contrast with the linear scaling of storing lossless experiences.\"}",
"{\"title\": \"Recollections for efficient deep lifelong learning\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes an architecture for efficient deep lifelong learning. The key idea is to use recollection generator (autoencoder) to remember the previously processed data in a compact representation. Then when training a reasoning model, recollections generated from the recollection generator are used with real-world examples as input data. Using the recollection, it can avoid forgetting previous data. In the experiments, it has been shown that the proposed approach is efficient for transfer knowledge with small data compared to random sampling approach.\\n\\nIt is an interesting idea to remember previous examples using the compact representation from autoencoder and use it for transfer learning. However, I think the paper would be improved if the following points are clarified.\\n\\n1. It seems that reconstructed data from autoencoder does not contain target values. It is not clear to me how the reasoning model can use the reconstructed data (recollections) for supervised learning tasks. \\n\\n2. It seems that the proposed framework can be better presented as a method for data compression for deep learning. Ideally, for lifelong learning, the reasoning model should not forget previously learned kwnoledge embeded in their weights. \\nHowever, under the current architecture, it seems that the reasoning model does not have such mechanisms.\\n\\n3. For lifelong learning, it would be interesting to test if the same reasoning model can deal with increasing number of tasks from different datasets using the recollection mechanisms.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
H196sainb | Word translation without parallel data | [
"Guillaume Lample",
"Alexis Conneau",
"Marc'Aurelio Ranzato",
"Ludovic Denoyer",
"Hervé Jégou"
] | State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available. | [
"unsupervised learning",
"machine translation",
"multilingual embeddings",
"parallel dictionary induction",
"adversarial training"
] | Accept (Poster) | https://openreview.net/pdf?id=H196sainb | https://openreview.net/forum?id=H196sainb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SJfsiaQ7z",
"rkJTKT9lz",
"rJ0n4tGQG",
"SkMy4hKlz",
"B1yRBYGQM",
"H1RBrtfmG",
"HkD8ivF0-",
"rJEg3TtxM",
"Skw7wkXQG",
"BJ4hFZ5ez",
"SyE3AHgxG",
"SkOhmJaBf",
"rJcdzzcCb",
"Sy_UZ--4f",
"HkFHDhiGz",
"H1Qhqm9ez"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"comment",
"official_review"
],
"note_created": [
1514556313671,
1511868855534,
1514472630472,
1511797722369,
1514472903490,
1514472773616,
1509681999501,
1511803884453,
1514497823453,
1511819692278,
1511181995673,
1517249455594,
1509724786274,
1515422031974,
1514026816731,
1511828138705
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper7/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper7/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper7/Authors"
],
[
"ICLR.cc/2018/Conference/Paper7/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper7/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper7/Authors"
],
[
"ICLR.cc/2018/Conference/Paper7/Authors"
],
[
"ICLR.cc/2018/Conference/Paper7/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper7/Authors"
],
[
"ICLR.cc/2018/Conference/Paper7/AnonReviewer1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper7/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"title\", \"comment\": \"We thank you for your comment and we are glad to clarify.\\n\\nThe methodological difference between what we have proposed and Zhang et al.\\u2019s method is not just a better stopping criterion, but more importantly, a better underlying method. Here is the detailed comparison between the two approaches:\\n- The very first step which is adversarial training with orthogonality constraint, is similar, see figure 1B in sec. 2.1 of our paper and figure 2 in [Zhang et al 2017a] (except for the use of earth mover distance) and figure 2b in [Zhang et al 2017b], but:\\n- the refinement step described in sec. 2.2 and Figure 1C is not present is Zhang et al 2017a/b, nor is\\n- the use of CSLS metric addressing the hubness problem, see sec 2.3 and figure 1D.\\nIn contrast, we do not use any of the approaches described in Zhang et al. 2017b shown in their figure 2b and 2c.\\nEmpirically, we demonstrate in tab. 1 the importance of both the refinement step and the use of CSLS metric to achieve excellent performance.\\n\\nIn addition to this key differences between the two approaches, we have also proposed a better stopping criterion as pointed out in your comment. This is actually not just a stopping criterion but a \\u201cvalidation\\u201d criterion that quantifies the closeness of the source and target spaces, and that correlates well with the word translation accuracy (see Figure 2). We not only use it as a stopping criterion, but also to select the best models across several experiments, which is something that Zhang et al. cannot do. Moreover, their stopping criterion is based on \\u201csharp drops of the generator loss\\u201d, and it did not work in our experiments to select the best models (see Figure 2 of our paper).\\n\\nIn terms of evaluation protocol, Zhang et al. 
compare their unsupervised approach with a supervised method trained on 50 or 100 pairs of words only, which is a little odd given that most papers consider 5000 pairs of words (see Mikolov et al., Dinu et al., Faruqui et al., Smith et al., Artetxe et al., etc.). As a result, they have an extremely weak supervised baseline, while our supervised baseline is itself the new state of the art.\\n\\nFinally, note that we have released our code and we know that other research groups were able to already reproduce our results, and we have also released our ground-truth dictionaries and evaluation pipeline, which will hopefully help the community make further strides in this area (as today we lack a standardized evaluation protocol as pointed out above, and large scale ground truth dictionaries in lots of different language pairs).\"}",
"{\"title\": \"Answer\", \"comment\": \"I find the answer from the authors quite satisfying. In particular, the missing result that was provided in the answer (and should be definitely added to the paper) addresses my main concerns regarding the comparability with previous work. While this result is not as spectacular as the others (almost at par with the supervised system, possibly because the comparability of Wikipedia is playing a role as pointed in my previous comment) and I think that some of the claims in the paper should be reworded accordingly, it does convincingly show that the proposed method can achieve SOTA results in a standard dataset without any supervision.\\n\\nRegarding the work of Zhang et al. (2017b) and Artetxe et al. (2017), I agree on most of the comments on the former, and this new result shows that the proposed method works better than the latter. However, I still think that some of the claims regarding these papers (e.g. Zhang et al. (2017b) \\\"is significantly below supervised methods\\\" or Artetxe et al. (2017) does not work for en-ru and en-zh) are unfounded and need to either be supported experimentally or reconsidered.\"}",
"{\"title\": \"response 1\", \"comment\": [\"We thank the reviewer for the feedback and comments.\", \"It is true that the supervised approach is limited in the sense that it only considers 5000 pairs of words. However, previous works have shown that using more than 5000 pairs of words does not improve the performance (Artetxe et al. (2017)), and can even be detrimental (see Dinu et al. (2015)). This is why we decided to consider 5000 pairs only, to be consistent with previous works. Also, note that we made our supervised baseline (Procrustes + CSLS) as strong as possible, and it is actually state-of-the-art.\", \"Regarding the claim \\\"this is a first step towards fully unsupervised machine translation\\\", what we meant is that the method proposed in the paper could potentially be used in a more complex framework for unsupervised MT at the sentence level. We rephrased this sentence in the updated version of the paper.\", \"We now address the comments / suggestions of the reviewer:\", \"The abstract could indeed benefit from details about the model. We will add some.\", \"The co-occurrence statistics have indeed an impact on the overall performance of the model. This impact is consistent for both supervised and unsupervised approaches. Indeed, our unsupervised method obtains 66.2% accuracy on the English-Italian pair on the Wikipedia corpora (Table 2), and 45.1% accuracy on the UKWAC / ITWAC non-comparable corpora. This result was not in the paper (we thought it was redundant with Table 1), but we added it in Table 2 in the updated version. Figure 3 in the appendix also gives insights about the impact of the similarity of the two domains, by comparing the quality of English-English alignment using embeddings trained on different English corpora.\", \"It would indeed possible to add weights in Equation (6). 
We tried to weight the r_S and r_T terms, but we did not observe a significant improvement compared to the current equation.\", \"In the supervised approach, we generated translations for all words from the source language to the target language, and vice-versa (a translation being a pair (x, y) associated with the probability for y of being the correct translation of x). Then, we considered all pairs of words (x, y) such that y has a high probability of being a translation of x, but also that x has a high probability of being a translation of y. Then, we sorted all generated translation pairs by frequency of the source word, and took the first 5000 resulting pairs.\", \"We tried to use non-linear mappings (namely a feedforward network with 1 or 2 hidden layers), but in these experiments, the adversarial training was quite unstable, and like in Mikolov et al. (2013), we did not observe better results compared to the linear mapping. Actually, the linear mapping was working significantly better, and since the Procrustes algorithm in the refinement step requires the mapping to be linear, we decided to focus on this type of mapping. Moreover, the linear mapping is convenient because we can impose the orthogonality constraint, which guarantees that the quality of the source monolingual embeddings is preserved after mapping.\", \"We did not try to jointly learn the embeddings as well as the mapping, but this is a nice idea and definitely something that needs to be investigated. We think that the joint learning could improve the cross-lingual embeddings, but especially, it could significantly improve the quality of monolingual embeddings on low-resource languages.\", \"Our approach would definitely benefit from having a few parallel training points. These points could be used to pretrain the linear mapping for the adversarial training, or even as a validation dataset. This will be the focus of future work.\"]}",
"{\"title\": \"Comparison with previous work should be improved\", \"comment\": \"I think that the paper does not do a good job at comparing the proposed method with previous work.\\n\\nWhile most of the experiments are run in a custom dataset and do not include results from previous authors, the paper also reports some results in the standard dataset from Dinu et al. (2015) \\\"to allow for a direct comparison with previous approaches\\\", which I think that is necessary. However, they inexplicably use a different set of embeddings, trained in a different corpus, for their unsupervised method in these experiments, so their results are not actually comparable with the rest of the systems. While I think that these results are also interesting, as they shows that the training corpus and embedding hyperparameters can make a huge difference, I see no reason not to also report the truly comparable results with the standard embeddings used by previous work. In other words, Table 2 is missing a row for \\\"Adv - Refine - CSLS\\\" using the same embeddings as the rest of the systems.\\n\\nMoreover, I think that the choice of training the embeddings in Wikipedia is somewhat questionable. Wikipedia is a document-aligned comparable corpus, and it seems reasonable that the proposed method could somehow benefit from that, even if it was not originally designed to do so. In other words, while the proposed method is certainly unsupervised in design, I think that it was not tested in truly unsupervised conditions. In fact, there is some previous work that learns cross-lingual word embeddings from Wikipedia by exploiting this document alignment information (http://www.aclweb.org/anthology/P15-1165), which shows that this cross-lingual signal in Wikipedia is actually very strong.\\n\\nApart from that, I think that the paper is a bit unfair with some previous work. In particular, the proposed adversarial method is very similar to that of Zhang et al. 
(2017b), and the authors simply state that the performance of the latter \\\"is significantly below supervised methods\\\", without any experimental evidence that supports this claim. Considering that the implementation of Zhang et al. (2017b) is public (http://nlp.csai.tsinghua.edu.cn/~zm/UBiLexAT/), the authors could have easily tested it in their experiments and show that the proposed method is indeed better than that of Zhang et al. (2017b), but they don't.\\n\\nI also think that the authors are a bit unfair in their criticism of Artetxe et al. (2017). While the proposed method has the clear advantage of not requiring any cross-lingual signal, not even the assumption of shared numerals in Artetxe et al. (2017), it is not true that the latter is \\\"just not applicable\\\" to \\\"languages that do not share a common alphabet (en-ru and en-zh)\\\", as both Russian and Chinese, as well as many other languages that do not use a latin alphabet, do use arabic numerals. In relation to that, the statement that \\\"the method of Artetxe et al. (2017) on our dataset does not work on the word translation task for any of the language pairs, because the digits were filtered out from the datasets used to train the fastText embeddings\\\" clearly applies to the embeddings they use, and not to the method of Artetxe et al. (2017) itself. Once again, considering that the implementation of Artetxe et al. (2017) is public (https://github.com/artetxem/vecmap), the authors could have easily supported their claims experimentally, but they also fail to do so.\"}",
"{\"title\": \"response 3\", \"comment\": \"We thank the reviewer for the feedback and comments.\\n\\nAs mentioned in the comments, we added to the paper citations to the work of Ravi & Knight (2011) and some subsequent works on decipherment, and down-toned some claims in the paper.\\n\\nThank you for pointing out the paper of Vulic & Moens; we were not aware of this paper and we added a citation in the updated version of the paper. Note however that the work of Vulic & Moens relies on document-aligned corpora while our method does not require any form of alignment.\", \"we_evaluated_the_cross_lingual_embeddings_on_4_different_tasks\": \"cross-lingual word similarity, word translation, sentence retrieval, and sentence translation. It is true that the quality of these embeddings on other downstream tasks would be interesting to study, and this will be investigated in future work.\"}",
"{\"title\": \"response 2\", \"comment\": \"We thank the reviewer for the feedback and comments.\\n\\nThe main concern of the review is about the lack of comparisons with existing works.\\n- The reviewer reproaches us for the lack of comparison against CCA, while the comparison against CCA is provided in Table 2. The reviewer also points out the lack of comparison against Artetxe et al. (2017). This comparison is also provided in the paper.\\n- We agree that our method could be compared to decipherment techniques, and would have been happy to try the method of Ravi & Knight, but there is no open-source version of their code available online (unlike for Faruqui & Dyer, Dinu et al., Artetxe et al., Smith et al.). Therefore, considering the large body of literature in that domain, we focused on comparing our approach with the most recent state-of-the-art and supervised approaches, which in our opinion is a fair way to evaluate against reproducible baselines.\\n\\nThe reviewer's second concern is about the performance of the model on non-comparable corpora. We considered that this was redundant with the results on Wikipedia provided in Table 1 and Table 2. As explained in one previous comment, our strategy was to first show that our supervised method (Procrustes-CSLS) is state-of-the-art, and then to compare our unsupervised approach against this new baseline. We added the result of our unsupervised approach (Adv - Refine - CSLS) on non-comparable WaCky corpora in Table 2. In particular, our unsupervised model on the non-comparable WaCky datasets is also state of the art with 45.1% accuracy.\\n\\nThe reviewer criticises the lack of novelty. To the best of our knowledge, the fact that an adversarial approach obtains state-of-the-art cross-lingual embeddings is new. Most importantly, the contributions of our paper are not limited to the adversarial approach. 
The CSLS method introduced to mitigate the hubness problem is new, and improves the state-of-the-art by up to 24% on the sentence retrieval task, as well as improving the supervised baseline. We also introduced an unsupervised criterion that is highly correlated with the quality of the cross-lingual embeddings, which is also novel as far as we know, and a key element for training.\\n\\nLastly, please consider that we made our code publicly available and provided high-quality dictionaries for 110 oriented language pairs to help the community, as this type of resource is very difficult to find online.\"}",
"{\"title\": \"Is it really the first step towards unsupervised MT?\", \"comment\": \"Saying that the method is a first step towards fully unsupervised machine translation seems like a bold (if not false) statement. In particular, this has been done before using deciphering:\\n\\nRavi & Knight, \\\"Deciphering Foreign Language\\\", ACL 2011, http://aclweb.org/anthology/P/P11/P11-1002.pdf\\n\\nThere are plenty of other similar previous work besides this one. I think any claims on MT without parallel corpora should at least mention deciphering as related work.\"}",
"{\"title\": \"Review\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper proposes a method to learn bilingual dictionaries without parallel data using an adversarial technique. The task is interesting and relevant, especially in low-resource language pair settings.\\n\\nThe paper, however, misses a comparison against important work from the literature that is very relevant to their task \\u2014 decipherment (Ravi, 2013; Nuhn et al., 2012; Ravi & Knight, 2011) and other approaches like CCA. \\n\\nThe former set of works, while focused on machine translation, also learns a translation table in the process. Besides, the authors also claim that their approach is particularly suited for low-resource MT and list this as one of their contributions. Previous works have used non-parallel and comparable corpora to learn MT models and for bilingual lexicon induction. The authors seem aware of corpora used in previous works (Tiedemann, 2012) yet provide no comparison against any of these methods. While some of the bilingual lexicon extraction works are cited (Haghighi et al., 2008; Artetxe et al., 2017), they do not demonstrate how their approach performs against these baseline methods. Such a comparison, even on language pairs which share some similarities (e.g., orthography), is warranted to determine the effectiveness of the proposed approach.\\n\\nThe proposed methodology is not novel; it rehashes existing adversarial techniques instead of other probabilistic models used in earlier works. \\n\\nFor the translation task, it would be useful to see the performance of a supervised MT baseline (many open-source tools are available) that was trained on a similar amount of parallel training data (60k pairs) and see the gap in performance with the proposed approach.\\n\\nThe paper mentions that the approach is \\u201cunsupervised\\u201d. 
However, it relies on bootstrapping from word embeddings learned on the Wikipedia corpus, which is a comparable corpus even though individual sentences are not aligned across languages. How does the quality degrade if the word embeddings had to be learned from scratch or initialized from a different source?\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"review responses\", \"comment\": [\"We thank all the reviewers for the feedback and comments. We replied to each of them individually, and uploaded a revised version of the paper. In particular, we:\", \"Rephrased one of the claims made in the abstract about unsupervised machine translation\", \"Added the requested 45.1% result of our unsupervised approach on the WaCky datasets\", \"Fixed some typos\", \"Added missing citations\"]}",
"{\"title\": \"comparison\", \"comment\": \"The paper by Dinu et al. provides embeddings and dictionaries for the English-Italian language pair. The embeddings they provide have become pretty standard and we found at least 5 previous methods that used this dataset:\\nMikolov et al., Faruqui et al., Dinu et al., Smith et al., Artetxe et al.\\nThese previous papers provide strong supervised SOTA baselines on the word translation task, and in Table 2 we show results of our supervised method compared to these 5 papers. The row \\u201cProcrustes + CSLS\\u201d is a supervised baseline, training our method with supervision using exactly the same word embeddings and dictionaries as in Dinu et al. These results show that our supervised baseline works better than all these previous approaches (reaching 44.9% P@1 en-it).\\nThe requested unsupervised configuration \\u201cAdv - Refine - CSLS\\u201d using the same embeddings and dictionary as in Dinu et al. obtains 45.1% on en-it, which is better than our supervised baseline (and SOTA by more than 2%). \\n\\nHowever, this information is redundant with Table 1, which shows that our unsupervised approach is better than our supervised baseline on European languages. We therefore decided not to incorporate this result, but we will add it back as suggested.\\n\\nMoreover, using the Wacky datasets (non comparable corpora) to learn embeddings, we improved the SOTA by 11.5% and 26.6% on the sentence retrieval task using our CSLS method, see table 3. Again, these experiments use the very same setting as previously reported in the literature.\\n\\nMore generally, regarding your comment \\u201cthey inexplicably use a different set of embeddings, trained in a different corpus\\u201d, note that:\\n- As noted above, we did compare using the very same embeddings and settings as others.\\n- We did study the effect of using different corpora: see fig. 
3\\n- As shown in the paper, using our method on Wikipedia improves the results by more than 20%\\n- Wikipedia is available in most languages, pretrained embeddings were already released and publicly available, we just downloaded them (while the Wacky datasets are only available for a few languages)\\n- We found that the monolingual quality of these pretrained embeddings is better than the one obtained on the Wacky datasets\\n\\nAs opposed to the 5 methods we compare ourselves against in the paper, Zhang et al. (2017):\\n1) used different embeddings and dictionaries which they do not provide, \\n2) used a lexicon of 50 or 100 word pairs only in their supervised baseline, which is different from standard practice since Mikolov et al. (2013b) (see Dinu et al., Faruqui et al., Smith et al., etc.), a practice we also followed, namely considering dictionaries with 5000 pairs. As a result, they compare themselves to a very weak baseline.\\n3) in the retrieval task they consider a very simplistic setting, with only a few thousand words, as opposed to large dictionaries of 200k words (as done by Dinu et al., Smith et al. and us). \\n4) they do not provide a validation set and, as shown in Figure 2 in our paper, their stopping criterion does not work well.\\nWe did try to run their code, but we have not been successful yet.\\n\\nAs for comparing against Artetxe et al., as reported in Table 2 they obtain a P@1 of 39.7% while we obtain 45.1% using the same embeddings from Dinu et al.\\n\\nFinally, we have released our code, along with our embeddings / dictionaries for reproducibility. We will share the link here as soon as the decisions are out in order to preserve anonymity.\"}",
"{\"title\": \"Good and interesting work (a few issues on the paper's claims)\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This paper presents a new method for obtaining a bilingual dictionary, without requiring any parallel data between the source and target languages. The method consists of an adversarial approach for aligning two monolingual word embedding spaces, followed by a refinement step using frequent aligned words (according to the adversarial mapping). The approach is evaluated on single word translation, cross-lingual word similarity, and sentence translation retrieval tasks.\\n\\nThe paper presents an interesting approach which achieves good performance. The work is presented clearly, the approach is well-motivated and related to previous studies, and a thorough evaluation is performed.\", \"my_one_concern_is_that_the_supervised_approach_that_the_paper_compares_to_is_limited\": [\"it is trained on a small fixed number of anchor points, while the unsupervised method uses significantly more words. I think the paper's comparisons are valid, but the abstract and introduction make very strong claims about outperforming \\\"state-of-the-art supervised approaches\\\". I think either a stronger supervised baseline should be included (trained on comparable data as the unsupervised approach), or the language/claims in the paper should be softened. The same holds for statements like \\\"... our method is a first step ...\\\", which is very hard to justify. I also do not think it is necessary to over-sell, given the solid work in the paper.\", \"Further comments, questions and suggestions:\", \"It might be useful to add more details of your actual approach in the Abstract, not just what it achieves.\", \"Given you use trained word embeddings, it is not a given that the monolingual word embedding spaces would be alignable in a linear way. 
The actual word embedding method, therefore, has a big influence on performance (as you show). Could you comment on how crucial it would be to train monolingual embedding spaces on similar domains/data with similar co-occurrence statistics, in order for your method to be appropriate?\", \"Would it be possible to add weights to the terms in eq. (6), or is this done implicitly?\", \"How were the 5k source words for Procrustes supervised baseline selected?\", \"Have you considered non-linear mappings, or jointly training the monolingual word embeddings while attempting the linear mapping between embedding spaces?\", \"Do you think your approach would benefit from having a few parallel training points?\", \"Some minor grammatical mistakes/typos (nitpicking):\", \"\\\"gives a good performance\\\" -> \\\"gives good performance\\\"\", \"\\\"Recent works\\\", \\\"several works\\\", \\\"most works\\\", etc. -> \\\"recent studies\\\", \\\"several studies\\\", etc.\", \"\\\"i.e, the improvements\\\" -> \\\"i.e., the improvements\\\"\", \"The paper is well-written, relevant and interesting. I therefore recommend that the paper be accepted.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"There is significant discussion on this paper and high variance between reviewers: one reviewer gave the paper a low score. However, the committee feels that this paper should be accepted at the conference since it provides a better framework for reproducibility and performs larger-scale experiments than prior work. One small issue is the lack of comparison in terms of empirical results between this work and Zhang et al.'s work, but the responses provided to both the reviewers and anonymous commenters seem to be satisfactory.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for the pointer, we were aware of this work and we will add a citation. Note however that our focus is not to learn a machine translation system (we just gave a simple example of this application, together with others like sentence retrieval, word similarity, etc.), but to infer a bilingual dictionary without using any labeled data. Unlike Ravi et al. we use monolingual data on both sides at training time, and we infer a large bilingual dictionary (200K words). When we say \\\"this is a first step towards fully unsupervised machine translation\\\" it does not mean we are the first to look at this problem; we simply meant that our method could be used as a first step in a more complex pipeline. We will rephrase this sentence to avoid confusion.\\nIn other words, the two works look at different things: this one is focussed on learning a bilingual dictionary, while the other is focussed on the problem of machine translation.\"}",
"{\"title\": \"Reviewer response\", \"comment\": \"Thank you for the very detailed response, also to the other reviewers' comments: all the questions and concerns were addressed very well.\"}",
"{\"title\": \"Methodological Distinction from Zhang et al. not clear\", \"comment\": \"While the results in this paper are very nice, this method seems to be almost the same as Zhang et al., and even after reading the comments in the discussion I can't tell what the main methodological differences are. Is it really only the stopping criterion for training? If so, the title \\\"word translation without parallel data\\\" seems quite grandiose, and it should probably be something more like \\\"a better stopping criterion for word translation without parallel data\\\".\\n\\nI'm relatively familiar with this field, and if it's even difficult for me to tell the differences between this and highly relevant previous work I'm worried that it will be even more difficult for others to put this research in the appropriate context.\"}",
"{\"title\": \"Well-rounded contribution, nice read, incomplete related work\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"An unsupervised approach is proposed to build bilingual dictionaries without parallel corpora, by aligning the monolingual word embedding spaces, i.a. via adversarial learning.\\n\\nThe paper is very well-written and makes for a rather pleasant read, save for some need for down-toning the claims to novelty as voiced in the comment re: Ravi & Knight (2011) or simply in general: it's a very nice paper, I enjoy reading it *in spite*, and not *because* of the text sales-pitching itself at times.\\n\\nThere are some gaps in the awareness of the related work in the sub-field of bilingual lexicon induction, e.g. the work by Vulic & Moens (2016).\\n\\nThe evaluation is for the most part intrinsic, and it would be nice to see the approach applied downstream beyond the simplistic task of English-Esperanto translation: plenty of outlets out there for applying multilingual word embeddings. Would be nice to see at least some instead of the plethora of intrinsic evaluations of limited general interest.\\n\\nIn my view, to conclude, this is still a very nice paper, so I vote clear accept, in the hope of seeing these minor flaws filtered out in the revision.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
BygpQlbA- | Towards Provable Control for Unknown Linear Dynamical Systems | [
"Sanjeev Arora",
"Elad Hazan",
"Holden Lee",
"Karan Singh",
"Cyril Zhang",
"Yi Zhang"
] | We study the control of symmetric linear dynamical systems with unknown dynamics and a hidden state. Using a recent spectral filtering technique for concisely representing such systems in a linear basis, we formulate optimal control in this setting as a convex program. This approach eliminates the need to solve the non-convex problem of explicit identification of the system and its latent state, and allows for provable optimality guarantees for the control signal. We give the first efficient algorithm for finding the optimal control signal with an arbitrary time horizon T, with sample complexity (number of training rollouts) polynomial only in log(T) and other relevant parameters. | [
"optimal control",
"reinforcement learning"
] | Invite to Workshop Track | https://openreview.net/pdf?id=BygpQlbA- | https://openreview.net/forum?id=BygpQlbA- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SydMCJ9gz",
"H1hFmXMGf",
"HynVA_vxG",
"r1IUEJ6Sf",
"HJGD4Qffz",
"ryr6tuv-G",
"BJE1VXfGz",
"ByvMH7zMz"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1511812624380,
1513399172307,
1511652915665,
1517249613789,
1513399386181,
1512700348916,
1513399260324,
1513399567472
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper570/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper570/Authors"
],
[
"ICLR.cc/2018/Conference/Paper570/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper570/Authors"
],
[
"ICLR.cc/2018/Conference/Paper570/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper570/Authors"
],
[
"ICLR.cc/2018/Conference/Paper570/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Review of \\\"Towards Provable Control for Unknown Linear Dynamical Systems\\\"\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper studies the control of symmetric linear dynamical systems with unknown dynamics. Typically this problem is split into a (non-convex) system ID step followed by a derivation of an optimal controller, but there are few guarantees about this combined process. This manuscript formulates a convex program of optimal control without the separate system ID step, resulting in provable optimality guarantees and efficient algorithms (in terms of the sample complexity). The paper is generally pretty well written.\\n\\nThis paper leans heavily on the Hazan 2017 paper (https://arxiv.org/pdf/1711.00946.pdf). Where the Hazan paper concerns itself with the system id portion of the control problem, this paper seems to be the controls extension of that same approach. From what I can tell, Hazan's paper introduces the idea of wave filtering (convolution of the input with eigenvectors of the Hankel matrix); the filtered output is then passed through another matrix that is being learned online (M). That matrix is then mapped back to system id (A,B,C,D). The most novel contribution of this ICLR paper seems to be equation (4), where the authors set up an optimization problem to solve for optimal inputs; much of that optimization set-up relies on Hazan's work, though. However, the authors do prove their work, which increases the novelty. The novelty would be improved with clearer differentiation from the Hazan 2017 paper.\", \"my_biggest_concerns_that_dampen_my_enthusiasm_are_some_assumptions_that_may_not_be_realistic_in_most_controls_settings\": [\"First, the most concerning assumption is that of a symmetric LDS matrix A (and Lyapunov stability). As far as I know, symmetric LDS models are not common in the controls community. 
From a couple of quick searches it seems like there are a few physics / chemistry applications where a symmetric A makes sense, but the authors don't do a good enough job setting up the context here to make the results compelling. Without that context it's hard to tell how broadly useful these results are. In Hazan's paper they mention that the system id portion, at least, seems to work with non-symmetric, and even non-linear dynamical systems (bottom of page 3, Hazan 2017). Is there any way to extend the current results to non-symmetric systems?\", \"Second, it appears that the proposed methods may rely on running the dynamical system several times before attempting to control it. Am I misunderstanding something? If so, this seems like it may be a significant constraint that would shrink the application space and impact even further.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Main points\", \"comment\": \"We thank the reviewers for their comments, and note the following main points.\\n\\n1. The difference between this paper and [HSZ17] is as follows. The results of [HSZ17] together with random exploration require sample complexity that scales with poly(T). We show how to explore better with the filters than with random exploration, significantly reducing the sample complexity to polylog(T). This is an important point, since poly(T) bounds can be obtained by straightforward regression and can be considered folklore. \\n\\n2. Our work is distinguished from Dean et al\\u2019s work as follows: \\nThe Dean et al. work considers a case with no hidden state - this is known to be efficiently solvable by convex optimization. \\nIn contrast, our setting is more general and has an evolving hidden state. The natural formulation is thus via *non-convex* optimization, for which no efficient algorithm was known before our work. \\n\\n3. Clarification on the unit ball constraints (Optimal control inputs are restricted to be inside the unit ball and the overall norm is bounded by L):\\nThe constraint comes from the fact that the error from the learned dynamics scales with the input.\", \"unit_ball_constraint\": \"This is a reasonable setting because often there is a maximum input that one can put into the system. It is without loss of generality because in the unrestricted setting, for a reasonable system starting at a bounded hidden state, the optimal control input will have norm bounded by some constant, which can be rescaled to 1. (Just scale down by an upper bound on the norm.)\", \"overall_norm_constraint\": \"This is reasonable because when the system is controllable, the optimal control decays the state geometrically, and the total sum of inputs is bounded.\"}",
"{\"title\": \"Idea is OK but the paper is not clearly written\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes a new algorithm to generate the optimal control inputs for unknown linear dynamical systems (LDS) with known system dimensions.\\n\\nThe idea is to excite the LDS with wave filter inputs, record the output, and directly estimate the operator that maps the input to the output instead of estimating the hidden states. After obtaining this operator, this paper substitutes it into the optimal control problem and solves that problem to estimate the optimal control input, and shows that the gap between the true optimal cost and the cost from applying the estimated optimal control input is small with high probability.\\nI think estimating the operator from the input to the output is interesting, instead of constructing the (A, B, C, D) matrices, but this idea and all the techniques are from Hazan et al., 2017. After estimating this operator, it is straightforward to use it to generate the estimated optimal control input. So I think the idea is OK, but not a breakthrough.\\n\\nAlso, I found the symmetric matrix assumption on A quite limiting. This limitation is from Hazan et al., 2017, where the authors want to predict the output. For prediction purposes, this restriction might be OK, but for control purposes, many interesting plants do not satisfy this assumption, even a simple RL circuit. I agree with the authors that this is an attempt to combine system identification with generating control inputs, but I am not sure how to remove the restriction on A.\\nDean et al., 2017 also pursued this direction by combining system identification with robust controller synthesis to handle estimation errors in the system matrices (A, B) in the state-feedback case (LQR), and I can see that Dean et al. could be extended to handle the observer-feedback case (LQG) without any restriction.\\n\\nDespite this limitation, I think the paper's idea is OK and the result is worth publishing, but not in the current form. The paper is not clearly written and there are several areas that need to be improved.\\n\\n1. System identification.\\nSubspace identification (N4SID) won't take exponential time. I recommend the authors either perform a proper literature review or cite one or two papers on the time complexity and their weaknesses. Also note that subspace identification can estimate the (A, B, C, D) matrices, which is great for control purposes, especially for the infinite-horizon LQR.\\n\\n2. Clarification on the unit ball constraints.\\nOptimal control inputs are restricted to be inside the unit ball and the overall norm is bounded by L. Where is this restriction coming from? The standard LQG setup does not have this restriction.\\n\\n3. Clarification on the assumption (3).\\nWhere is this assumption coming from? I can see that this makes the analysis go through, but is this a reasonable assumption? Do most systems satisfy this constraint? Is there any? It's OK not to provide the answer if it's hard to analyze, but if that's the case the paper should provide some numerical case studies to show this bound either holds or the gap is negligible in a toy example.\\n\\n4. Proof of Theorem 3.3.\\nTheorem 3.3 is one of the key results in this paper, yet its proof is just \\\"noted\\\". The setup is slightly different from the original theorem in Hazan et al., 2017, including the noise model, so I strongly recommend including the original theorem in the appendix, along with the full proof.\\n\\n5. Proof of Lemma 3.1.\\nI found it hard to keep track of which terms are inside the expectation. I recommend following the notation E[variable] the authors have been using throughout the paper in the proof, instead of dropping these brackets.\\n \\n6. 
Minor typos\\nIn theorem 2.4, ||Q||_op is used for defining rho, but in the text ||Q||_F is used. I think ||Q||_op is right.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper studies the control of symmetric linear dynamical systems with unknown dynamics. While the reviewers agree that this is an interesting topic, there are concerns that the assumptions are not realistic. Lack of experiments also stands out. I recommend the paper to workshop track with the hope that it will foster more discussions and lead to more realistic assumptions.\", \"decision\": \"Invite to Workshop Track\"}",
"{\"title\": \"Response\", \"comment\": \"Re: innovation compared to HSZ\\u201917: The reviewer asked whether LDS control is a simple consequence of the ability to predict the next reward, as shown in HSZ17. This issue originally confused us as well. But prediction in the sense of HSZ17 is a lot easier because the guarantee is in terms of mean-squared error for a single input-output sequence, over a large number of steps. Such MSE error permits predictions to be off for long stretches of time. To do control, on the other hand, one needs to look ahead at the results of all control choices up to the horizon L and pick the best. Since the HSZ17 predictions for different lookahead paths may have arbitrary error in any time interval, the estimate for the max reward over all paths can be arbitrarily off. The bulk of the paper is showing that it is nevertheless possible with small sample complexity, and the proof is novel relative to HSZ17.\\n\\n1. The assumption that the LDS uses a *symmetric* matrix is indeed crucial for our result. However, note that solving the symmetric case is still significant progress on the problem of provably efficient control of LDS, which has been open for decades.\\n\\n2. The reviewer is correct that our proposed methods will rely on running the dynamical system several times. The need for multiple restarts is inherent to the problem of learning the system, at least under the assumptions in our setting. Notice that one cannot simply wait for the state to decay, since the transition matrix can have an eigenvalue of 1. A basic example shows this: suppose A is a tridiagonal matrix, B controls the first dimension of h, C observes the last dimension of h. Then, multiple restarts are needed to find the optimal control, since there is a delay before C can be determined. We will update the appendix with the full construction, to clarify this point. See also Main Point 1.\"}",
"{\"title\": \"Interesting approach but maybe more suited for a theory conference (no experiments).\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper presents a provable algorithm for controlling an unknown linear dynamical system (LDS). Given the recent interest in (deep) reinforcement learning (combined with the lack of theoretical guarantees in this space), this is a very timely problem to study. The authors provide a rigorous end-to-end analysis for the LDS setting, which is a mathematically clean yet highly non-trivial setup that has a long history in the controls field.\\n\\nThe proposed approach leverages recent work that gives a novel parametrization of control problems in the LDS setting. After estimating the values of this parametrization, the authors formulate the problem of finding optimal control inputs as a large convex problem. The time and sample complexities of this approach are polynomial in all relevant parameters. The authors also highlight that their sample complexity depends only logarithmically on the time horizon T. The paper focuses on the theoretical results and does not present experiments (the polynomials are also not elaborated further).\\n\\nOverall, I think it is important to study control problems from a statistical perspective, and the LDS setting is a very natural target. Moreover, I find the proposed algorithmic approach interesting. However, I am not sure if the paper is a good fit for ICLR since it is purely theoretical in nature and has no experiments. I also have the following questions regarding the theoretical contributions:\\n\\n(A) The authors emphasize the logarithmic dependence on T. However, the bounds also depend polynomially on L, and as far as I can tell, L can be polynomial in T for certain systems if we want to achieve a good overall cost. 
It would be helpful if the authors could comment on the dependence between T and L.\\n\\n(B) Why does the bound in Theorem 2.4 become worse when there are some directions that do not contribute to the cost (the lambda dependence)?\\n\\n(C) Do the authors expect that it will be straightforward to remove the assumption that A is symmetric, or is this an inherent limitation of the approach?\\n\\nMoreover, I have the following comments:\\n\\n(1) Theorem 3.3 is currently not self-contained. It would enhance readability of the paper if the results were more self-contained. (It is obviously good to cite results from prior work, but then it would be more clear if the results are invoked as is without modifications.)\\n\\n(2) In Theorem 1.1, the notation is slightly unclear because B^T is only defined later.\\n\\n(3) In Section 1.2 (Tracking a known system): \\\"given\\\" instead of \\\"give\\\"\\n\\n(4) In Section 1.2 (Optimal control): \\\"symmetric\\\" instead of \\\"symmetrics\\\"\\n\\n(5) In Section 1.2 (Optimal control): the paper says \\\"rather than solving a recursive system of equations, we provide a formulation of control as a one-shot convex program\\\". Is this meant as a contrast to the work of Dean et al. (2017)? Their abstract also claims to utilize a convex programming formulation.\\n\\n(6) Below Definition 2.3: What is capital X?\\n\\n(7) In Definition 2.3: What does the parenthesis in \\\\phi_j(1) denote?\\n\\n(8) Below Theorem 2.4: Why is Phi now nk x T instead of nk x nT as in Definition 2.3?\\n\\n(9) Lemma 3.2: Is \\\\hat{D} defined in the paper? I assume that it involves \\\\hat{M}, but it would be good to formally define this notation.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"A. For reasonable systems L is a constant. See Main Point 3.\\nB. If there are directions that do not contribute to the cost, then under the optimal control, the output may be large in that direction. Our bounds for the error depend on the size of the outputs y (Lemma 3.4) because the error in estimating the quadratic form depends linearly on the size of y.\\nC. This requires further work. We have ongoing work on extending the work of Hazan, Singh, and Zhang to the nonsymmetric case, which will then also allow control.\\n2. Fixed.\\n3. Fixed.\\n4. Fixed.\\n5. See Main Point 2.\\n6. Should be x. Fixed.\\n7. phi_j(k) denotes the kth entry of \\\\phi_j.\\n8. Typo, fixed.\\n9. \\\\hat{D} is exactly the analogue of D for the predicted dynamics.\"}",
"{\"title\": \"Response\", \"comment\": \"1. We thank the review for pointing this out. However, we did not find clear provable guarantees for N4SID (in terms of sample complexity, etc.) in our setting. If the reviewer were to give a clear reference or explanation, we would be happy to include it.\\nOur claim on exponential time is based on the fact that system identification using any kind of local search (ex. gradient descent) converges to a local optimum. It\\u2019s not clear how to ensure that the search will reach the actual parameters, beyond a method that takes exponential time such as grid search.\\n3. This condition is now rewritten to be clearer. The assumption $Q>\\\\lambda I$ is reasonable because it says that all directions of the output incur cost - a common case is just $Q=I$. Inequality (3) says that we can incur not much more loss than just the background noise. This is true as long as the system can be driven to 0 in a reasonable amount of time.\\n4. See Main Point 3.\\n5. Done.\\n6. Done.\"}"
]
} |
SysEexbRb | Critical Points of Linear Neural Networks: Analytical Forms and Landscape Properties | [
"Yi Zhou",
"Yingbin Liang"
] | Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms. In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks. One particular conclusion is that: While the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minimum that is not global minimum. | [
"neural networks",
"critical points",
"analytical form",
"landscape"
] | Accept (Poster) | https://openreview.net/pdf?id=SysEexbRb | https://openreview.net/forum?id=SysEexbRb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"rJye8vezz",
"SJ6btV9gz",
"ryOWEcdlM",
"S1aEzCJxG",
"Hyyw_tH4z",
"BydBmeLQf",
"HJJEXJaHM",
"S1BRtK8EM",
"H1oDUDefG",
"SkBt4Dgff"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1513285095179,
1511831813439,
1511724032087,
1511150133464,
1515718742872,
1514697536179,
1517249319473,
1515784652854,
1513285218872,
1513284733509
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper549/Authors"
],
[
"ICLR.cc/2018/Conference/Paper549/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper549/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper549/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper549/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper549/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper549/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper549/Authors"
],
[
"ICLR.cc/2018/Conference/Paper549/Authors"
]
],
"structured_content_str": [
"{\"title\": \"We thank the reviewer for providing valuable feedbacks. Below is a point-to-point response.\", \"comment\": \"Q1: I think in the title/abstract/intro the use of neural nets is somewhat misleading as neural nets are typically nonlinear. This paper is mostly about linear networks. I would suggest rewording title/abstract/intro.\", \"a\": \"Prop 6 is more useful in terms of the structure of the forms it characterizes for the critical points. For example, such forms in Prop 6 (and its special case of Prop 7) are exploited to construct a spurious local minimum in Example 1. Computationally, as pointed out in our response to Q3, we can compute/verify the parameters for various cases, but we cannot fully list all critical points, which are uncountable.\", \"q2\": \"From my understanding, the p_i have been introduced in Theorem 1 but given their prominent role in this proposition they merit a separate definition.\", \"q3\": \"Given X and Y, can one run an algorithm to find all the critical points or at least the parameters used in the characterization p_i, V_i etc?\", \"q4\": \"What insights do you gain by knowing Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5?\", \"q5\": \"Does Theorem 2 have any computational implications, e.g. are saddles strict with a quantifiable bound?\", \"q6\": \"Why is Proposition 6 useful, can you find the parameters of this characterization with a computationally efficient algorithm?\"}",
"{\"title\": \"An interesting work on the characterization of critical points of neural networks\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper mainly focuses on the square loss function of linear networks. It provides the sufficient and necessary characterization for the forms of critical points of one-hidden-layer linear networks. Based on this characterization, the authors are able to discuss different types of non-global-optimal critical points and show that every local minimum is a global minimum for one-hidden-layer linear networks. As an extension, the manuscript also characterizes the analytical forms for the critical points of deep linear networks and deep ReLU networks, although only a subset of non-global-optimal critical points are discussed. In general, this manuscript is well written.\", \"pros\": \"1. This manuscript provides the sufficient and necessary characterization of critical points for deep networks. \\n2. Compared to previous work, the current analysis for one-hidden-layer linear networks doesn\\u2019t require assumptions on parameter dimensions and data matrices. The novel analyses, especially the technique to characterize critical points and the proof of item 2 in Proposition 3, will probably be interesting to the community.\\n3. It provides an example when a local minimum is not global for a one-hidden-layer neural network with ReLU activation.\", \"cons\": \"1. I'm concerned that the contribution of this manuscript is a little incremental. The equivalence of global minima and local minima for linear networks is not surprising based on existing works e.g. Hardt & Ma (2017) and Kawaguchi (2016). \\n2. Unlike one-hidden-layer linear networks, the characterizations of critical points for deep linear networks and deep ReLU networks seem to be hard to be interpreted. 
This manuscript doesn't show that every local minimum of these two types of deep networks is a global minimum, which actually has been shown by existing works like Kawaguchi (2016) with some assumptions. The behaviors of linear networks and practical (deep and nonlinear) networks are very different. Under such circumstance, the results about one-hidden-layer linear networks are less interesting to the deep learning community.\", \"minors\": \"\", \"there_are_some_mixed_up_notations\": \"tilde{A_i} => A_i , and rank(A_2) => rank(A)_2 in Proposition 3.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper studies the critical points of shallow and deep linear networks. The authors give a (necessary and sufficient) characterization of the form of critical points and use this to derive necessary and sufficient conditions for which critical points are global optima. While the exposition of the paper can be improved in my view this is a neat and concise result and merits publication in ICLR.\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper studies the critical points of shallow and deep linear networks. The authors give a (necessary and sufficient) characterization of the form of critical points and use this to derive necessary and sufficient conditions for which critical points are global optima. Essentially this paper revisits a classic paper by Baldi and Hornik (1989) and relaxes a few requires assumptions on the matrices. I have not checked the proofs in detail but the general strategy seems sound. While the exposition of the paper can be improved in my view this is a neat and concise result and merits publication in ICLR. The authors also study the analytic form of critical points of a single-hidden layer ReLU network. However, given the form of the necessary and sufficient conditions the usefulness of of these results is less clear.\", \"detailed_comments\": [\"I think in the title/abstract/intro the use of Neural nets is somewhat misleading as neural nets are typically nonlinear. This paper is mostly about linear networks. While a result has been stated for single-hidden ReLU networks. In my view this particular result is an immediate corollary of the result for linear networks. As I explain further below given the combinatorial form of the result, the usefulness of this particular extension to ReLU network is not very clear. 
I would suggest rewording title/abstract/intro\", \"Theorem 1 is neat, well done!\", \"Page 4 p_i\\u2019s in proposition 1\", \"From my understanding the p_i have been introduced in Theorem 1 but given their prominent role in this proposition they merit a separate definition (and ideally in terms of the A_i directly).\", \"Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5\", \"Are these characterizations computable i.e. given X and Y can one run an algorithm to find all the critical points or at least the parameters used in the characterization p_i, V_i etc?\", \"Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5\", \"Would recommend a better exposition why these theorems are useful. What insights do you gain by knowing these theorems etc. Are less sufficient conditions that is more intuitive or useful. (an insightful sufficient condition in some cases is much more valuable than an unintuitive necessary and sufficient one).\", \"Page 5 Theorem 2\", \"Does this theorem have any computational implications? Does it imply that the global optima can be found efficiently, e.g. are saddles strict with a quantifiable bound?\", \"Page 7 proposition 6 seems like an immediate consequence of Theorem 1 however given the combinatorial nature of the K_{I,J} it is not clear why this theorem is useful. e.g . back to my earlier comment w.r.t. Linear networks given Y and X can you find the parameters of this characterization with a computationally efficient algorithm?\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Authors of this paper provided full characterization of the analytical forms of the critical points for the square loss function of three types of neural networks: shallow linear networks, deep linear networks and shallow ReLU nonlinear networks.\", \"rating\": \"7: Good paper, accept\", \"review\": \"Authors of this paper provided full characterization of the analytical forms of the critical points for the square loss function of three types of neural networks: shallow linear networks, deep linear networks and shallow ReLU nonlinear networks. The analytical forms of the critical points have direct implications on the values of the corresponding loss functions, achievement of global minimum, and various landscape properties around these critical points.\\n\\nThe paper is well organized and well written. Authors exploited the analytical forms of the critical points to provide a new proof for characterizing the landscape around the critical points. This technique generalizes existing work under full relaxation of assumptions. In the linear network with one hidden layer, it generalizes the work Baldi & Hornik (1989) with arbitrary network parameter dimensions and any data matrices; In the deep linear networks, it generalizes the result in Kawaguchi (2016) under no assumptions on the network parameters and data matrices. Moreover, it also provides new characterization for shallow ReLU nonlinear networks, which is not discussed in previous work.\\n\\nThe results obtained from the analytical forms of the critical points are interesting, but one problem is that how to obtain the proper solution of equation (3)? In the Example 1, authors gave a concrete example to demonstrate both local minimum and local maximum do exist in the shallow ReLU nonlinear networks by properly choosing these matrices satisfying (12). 
It will be interesting to see how to choose these matrices for all the studied networks with some concrete examples.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for the clarification. Most of my concerns are addressed. An anonymous reviewer raised a concern about the overlap with existing work, Li et al. 2016b. The authors' comments about this related work sound ok to me. But I would suggest the authors add more discussion about it. Overall the paper is above the acceptance threshold in my opinion and I keep my rating.\"}",
"{\"title\": \"Revision\", \"comment\": \"Based on the reviewers' comments, we uploaded a revision that made the following changes. We are happy to make further changes if the reviewers have additional comments.\\n\\n1. We fixed the mixed-up notations in Prop. 3. Note that in item 3 of Prop. 3, we only perturb A_2 to tilde{A_2}.\\n\\n2. In the title, abstract and introduction, we reworded neural networks as linear neural networks whenever applicable.\\n\\n3. We added Remark 1 above Prop. 1 to separately define the parameters p_i's.\\n\\n4. In the paragraph before Remark 1, we commented that the critical points characterized in Theorem 1 cannot be fully listed out because they are in general uncountable. We also explained how to use the form in Theorem 1 to obtain some critical points. We also note that the analytical structure of the critical points is important, which determines the landscape properties of the loss function. This comment is also applicable to the case of deep linear networks and shallow ReLU networks. \\n\\n5. Towards the end of the paragraph after Prop. 2, we added further insight of Prop. 2 in a special case, i.e., both A_2 and A_1 are full rank at global minima under the assumptions on data matrices and network dimensions in Baldi & Hornik (1989). In the paragraph after Prop. 5, we added the similar understanding, i.e., if all the parameter matrices are square and the data matrices satisfy the assumptions as in Baldi & Hornik (1989), then all global minima must correspond to full rank parameter matrices. \\n\\n6. After Theorem 2, we commented that the saddle points can be non-strict for arbitrary data matrices X and Y with an illustrative example. \\n\\n7. We added another related work Li et al. 2016b.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"I recommend acceptance based on the positive reviews. The paper analyzes critical points for linear neural networks and shallow ReLU networks. Getting characterization of critical points for shallow ReLU networks is a great first step.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Reply\", \"comment\": \"I am satisfied with the authors response and maintain my rating and acceptance recommendation.\"}",
"{\"title\": \"We thank the reviewer for providing valuable feedbacks. Below is a point-to-point response.\", \"comment\": \"Q1: How to obtain the proper solution of eq (3)?\", \"a\": \"For other studied networks (shallow linear and deep linear networks), examples can be constructed based on the corresponding characterizations. For shallow linear networks, as our response for Q1, we can set L_1 = 0 (so that eq (3) is satisfied), and then A1 = C^-1 V^t U^t YX^+, A_2 = UVC with any invertible matrix C and any matrix V with the structure specified in Theorem 1 are critical points. Furthermore, if we further set the parameters p_i according to Prop 2, we obtain examples for global minima. For deep linear networks, it is also easier to construct examples by setting L_k = 0 for all k so that eq (6) is satisfied, and we can then obtain critical points for any invertible C_k and proper V_k with the structure specified in Theorem 3. Furthermore, if we further set the parameters p_i(0) according to Prop 5, we obtain examples for global minima. We note that all local minima are also global minima for these linear networks.\", \"q2\": \"In Example 1, authors gave a concrete example to demonstrate both local minimum and local maximum do exist in the shallow ReLU nonlinear networks by property choosing these matrices satisfying (12). How to choose these matrices for all the studied networks with some concrete examples?\"}",
"{\"title\": \"We thank the reviewer for providing valuable feedbacks. Below is a point-to-point response.\", \"comment\": \"Q1: Contribution of this manuscript is a little incremental. Equivalence of global minima and local minima for linear networks is not surprising, e.g. Hardt & Ma (2017) and Kawaguchi (2016).\", \"a\": \"We agree that it is challenging to understand deep and nonlinear networks, and their behaviors can be very different from shallow linear networks. Ultimately, we agree that tools for studying shallow linear networks won\\u2019t be sufficient. However, understanding shallow linear networks can still be beneficial in various cases. For example, our characterizations of deep linear and shallow ReLU networks are further developments of the characterizations of shallow linear networks. Such understandings allow us to show the existence of spurious local minimum for ReLU networks (Example 1), which is different from the behavior of linear networks.\\n\\nWe also thank the reviewer for pointing out the mixed-up notations. We will fix these notations.\", \"q2\": \"The characterizations of critical points for deep linear and ReLU networks seem to be hard to be interpreted.\", \"q3\": \"This manuscript doesn't show that every local minimum of these two types of deep networks (i.e., deep linear and ReLu networks) is a global minimum, which actually has been shown by Kawaguchi (2016) with some assumptions.\", \"q4\": \"The behaviors of linear networks and practical deep and nonlinear networks are very different. The results about one-hidden-layer linear networks are less interesting to the deep learning community.\"}"
]
} |
H1I3M7Z0b | WSNet: Learning Compact and Efficient Networks with Weight Sampling | [
"Xiaojie Jin",
"Yingzhen Yang",
"Ning Xu",
"Jianchao Yang",
"Jiashi Feng",
"Shuicheng Yan"
] | We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks. Existing approaches conventionally learn full model parameters independently and then compress them via \emph{ad hoc} processing such as model pruning or filter factorization. Alternatively, WSNet proposes learning model parameters by sampling from a compact set of learnable parameters, which naturally enforces {parameter sharing} throughout the learning process. We demonstrate that such a novel weight sampling approach (and induced WSNet) promotes both weights and computation sharing favorably. By employing this method, we can more efficiently learn much smaller networks with competitive performance compared to baseline networks with equal numbers of convolution filters. Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification. Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet. Combined with weight quantization, the resulted models are up to \textbf{180$\times$} smaller and theoretically up to \textbf{16$\times$} faster than the well-established baselines, without noticeable performance drop. | [
"Deep learning",
"model compression"
] | Invite to Workshop Track | https://openreview.net/pdf?id=H1I3M7Z0b | https://openreview.net/forum?id=H1I3M7Z0b | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"rJRJeMoxz",
"S1vdEk6BM",
"S1xBMQtgG",
"Bkc2TkFlG"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1511886822374,
1517249647075,
1511760439541,
1511746994380
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper1147/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper1147/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents a method for reducing the number of parameters of neural networks by sharing the set of weights in a sliding window manner, and replicating the channels, and finally by quantising weights. The paper is clearly written and results seem compelling but on a pretty restricted domain which is not well known. This could have significance if it applies more generally.\\n\\nWhy does it work so well? Is this just because it acts on audio and these filters are phase shifted?\\nWhat happens with 2D convnets on more established datasets and with more established baselines?\\nWould be interesting to get wall clock speed ups for this method?\\n\\nOverall I think this paper lacks the breadth of experiments, and to really understand the significance of this work more experiments in more established domains should be performed.\", \"other_points\": [\"You are missing a related citation \\\"Speeding up Convolutional Neural Networks with Low Rank Expansions\\\" Jaderberg et al 2014\", \"Eqn 2 should be m=m* x C\", \"Use \\\\citep rather than \\\\cite\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper received generally positive reviews, but the reviewers also had some concerns about the evaluations.\", \"pros\": \"-- An improvement over HashNet, the model ties weights more systematically, and gets better accuracy.\", \"cons\": \"-- Tying weights to compress models already tried before.\\n-- Tasks are all small and/or audio related.\\n-- Unclear how well the results will generalize for 2D convolutions.\\n-- HashNet results are preliminary; comparisons with HashNet missing for audio tasks.\\n\\nGiven the expert reviews, I am recommending the paper to the workshop track.\", \"decision\": \"Invite to Workshop Track\"}",
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper presents a method to compress deep network by weight sampling and channel sharing. The method combined with weight quantization provides 180x compression with a very small accuracy drop.\\n\\nThe method is novel and tested on multiple audio classification datasets and results show a good compression ratio with a negligible accuracy drop. The organization of the paper is good. However, it is a bit difficult to understand the method. Figure 1 does not help much. Channel sharing part in Figure 1 is especially confusing; it looks like the whole filter has the same weights in each channel. Also it isn\\u2019t stated in Figure and text that the weight sharing filters are learned by training.\\n\\nIt would be a nice addition to add number of operations that are needed by baseline method and compressed method with integral image.\", \"table_5\": \"Please add network size of other networks (SoundNet and Piczak ConvNet). For setting, SoundNet has two settings, scratch init and unlabeled video, what is that setting for WSNet and baseline?\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review of WSNet\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"In this work, the authors propose a technique to compress convolutional and fully-connected layers in a network by tying various weights in the convolutional filters: specifically within a single channel (weight sampling) and across channels (channel sampling). When combined with quantization, the proposed approach allows for large compression ratios with minimal loss in performance on various audio classification tasks. Although the results are interesting, I have a number of concerns about this work, which are listed below:\\n\\n1. The idea of tying weights in the neural network in order to compress the model is not entirely new. This has been proposed previously in the context of feed-forward networks [1], and convolutional networks [2] where the choice of parameter tying is based on hash functions which ensure a random (but deterministic) mapping from a small set of \\u201ctrue\\u201d weights to a larger set of \\u201cvirtual\\u201d weights. I think it would be more fair to compare against the HashedNet technique.\", \"references\": [\"[1] C. Bucilua, R. Caruana, and A. Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535\\u2013541. ACM, 2006\", \"[2] J. Ba and R. Caruana. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654\\u20132662, 2014.\", \"[3] G. Hinton, O. Vinyals, J. Dean. Distilling the Knowledge in a Neural Network, NIPS 2014 Deep Learning Workshop. 2014.\", \"[4] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148\\u20132156, 2013.\", \"5. 
Section 3, where the authors describe the proposed techniques is somewhat confusing to read, because of a lack of detailed mathematical explanations of the proposed techniques. This makes the paper harder to understand, in my view. Please re-write these sections in order to clearly express the parameter tying mechanism. In particular, I had the following questions:\", \"Are weights tied across layers i.e., are the \\u201cweight sharing\\u201d matrices shared across layers?\", \"There appears to be a typo in Equation 3: I believe it should be m = m* C.\", \"Filter augmentation/Weight quantization are applicable to all methods, including the baseline. It would therefore be interesting to examine how they affect the baseline, not just the proposed system.\", \"Section 3.5, on using the \\u201cIntegral Image\\u201d to speed up computation was not clear to me. In particular, could the authors re-write to explain how the computation is computed efficiently with \\u201ctwo subtraction operations\\u201d. Could the authors also clarify the savings achieved by this technique?\", \"6. Results are reported on the various test sets without any discussion of statistical significance. Could the authors describe whether the differences in performance on the various test sets are statistically significant?\", \"7. On the ESC-50, UrbanSound8K, and DCASE tasks, it is a bit odd to compare against previous baselines which use different input features, use different model configurations, etc. It would be much better to use one of the previously published configurations as the baseline, and apply the proposed techniques to that configuration to examine performance. In particular, could the authors also use log-Mel filterbank energies as input features similar to (Piczak, 2015) and (Salomon and Bello, 2015), and apply the proposed techniques starting from those input features? 
Also, it would be useful when comparing against previously published baselines to indicate total number of independent parameters in the system in addition to accuracy numbers.\", \"8. Minor Typographical Errors: There are a number of minor typographical/grammatical errors in the paper, some of which are listed below:\", \"Abstract: \\u201cCombining weight quantization ...\\u201d \\u2192 \\u201cCombining with weight quantization ...\\u201d\", \"Sec 1: \\u201c... without sacrificing the loss of accuracy\\u201d \\u2192 \\u201c... without sacrificing accuracy\\u201d\", \"Sec 1: \\u201cAbove experimental results strongly evident the capability of WSNet \\u2026\\u201d \\u2192 \\u201cAbove experimental results strongly evidence the capability of WSNet \\u2026\\u201d\", \"Sec 2: \\u201c... deep learning based approaches has been recently proven ...\\u201d \\u2192 \\u201c... deep learning based approaches have been recently proven ...\\u201d\", \"The work by Aytar et al., 2016 is repeated twice in the references.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
HyjC5yWCW | Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm | [
"Chelsea Finn",
"Sergey Levine"
] | Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models. | [
"meta-learning",
"learning to learn",
"universal function approximation"
] | Accept (Poster) | https://openreview.net/pdf?id=HyjC5yWCW | https://openreview.net/forum?id=HyjC5yWCW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"ByJP4Htez",
"Hy-TUXPzM",
"SyTFKLYgf",
"ryN8PmPGG",
"S1qm_Qvzz",
"Sy-YsJ27f",
"H1Qg7kaBM",
"HyUeuXDfM",
"S1CSbaKez",
"SJ5ZD7DGM"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1511769175254,
1513727672731,
1511774596751,
1513727819988,
1513728034384,
1515088761124,
1517249259162,
1513727982391,
1511801157982,
1513727746204
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper513/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper513/Authors"
],
[
"ICLR.cc/2018/Conference/Paper513/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper513/Authors"
],
[
"ICLR.cc/2018/Conference/Paper513/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper513/Authors"
],
[
"ICLR.cc/2018/Conference/Paper513/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper513/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Technically interesting work but practical significance seems highly questionable\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper studies the capacity of the model-agnostic meta-learning (MAML) framework as a universal learning algorithm approximator. Since a (supervised) learning algorithm can be interpreted as a map from a dataset and an input to an output, the authors define a universal learning algorithm approximator to be a universal function approximator over the set of functions that map a set of data points and an input to an output. The authors show constructively that there exists a neural network architecture for which the model learned through MAML can approximate any learning algorithm.\\n\\nThe paper is for the most part clear, and the main result seems original and technically interesting. At the same time, it is not clear to me that this result is also practically significant. This is because the universal approximation result relies on a particular architecture that is not necessarily the design one would always use in MAML. This implies that MAML as typically used (including in the original paper by Finn et al, 2017a) is not necessarily a universal learning algorithm approximator, and this paper does not actually justify its empirical efficacy theoretically. For instance, the authors do not even use the architecture proposed in their proof in their experiments. This is in contrast to the classical universal function approximator results for feedforward neural networks, as a single hidden layer feedforward network is often among the family of architectures considered in the course of hyperparameter tuning. This distinction should be explicitly discussed in the paper. 
Moreover, the questions posed in the experimental results do not seem related to the theoretical result, which seems odd.\", \"specific_comments_and_questions\": \"\", \"page_4\": \"\\\"\\\\prod_{i=1}^N (W_i - \\\\alpha \\\\nabla_{W_i})\\\". There seems to be a typo here: \\\\nabla_{W_i} should be \\\\nabla_{W_i} L.\", \"page_7\": \"\\\"(1) can a learner trained with MAML further improve from additional gradient steps when learning new tasks at test time...? (2) does the inductive bias of gradient descent enable better few-shot learning performance on tasks outside of the training distribution...?\\\". These questions seem unrelated to the universal learning algorithm approximator result that constitutes the main part of the paper. If you're going to study these question empirically, why didn't you also try to investigate them theoretically (e.g. sample complexity and convergence of MAML)? A systematic and comprehensive analysis of these questions from both a theoretical and empirical perspective would have constituted a compelling paper on its own.\", \"pages_7_8\": \"Experiments. What are the architectures and hyperparameters used in the experiments, and how sensitive are the meta-learning algorithms to their choice?\", \"page_8\": \"\\\"our experiments show that learning strategies acquired with MAML are more successful when faced with out-of-domain tasks compared to recurrent learners....we show that the representations acquired with MAML are highly resilient to overfitting\\\". I'm not sure that such general claims are justified based on the experimental results in this paper. Generalizing to out-of-domain tasks is heavily dependent on the specific level and type of drift between the old and new distributions. These properties aren't studied at all in this work.\", \"post_author_rebuttal\": \"After reading the response from the authors and seeing the updated draft, I have decided to upgrade my rating of the manuscript to a 6. 
The universal learning algorithm approximator result is a nice result, although I do not agree with the other reviewer that it is a \\\"significant contribution to the theoretical understanding of meta-learning,\\\" which the authors have reinforced (although it can probably be considered a significant contribution to the theoretical understanding of MAML in particular). Expressivity of the model or algorithm is far from the main or most significant consideration in a machine learning problem, even in the standard supervised learning scenario. Questions pertaining to issues such as optimization and model selection are just as, if not more, important. These sorts of ideas are explored in the empirical part of the paper, but I did not find the actual experiments in this section to be very compelling. Still, I think the universal learning algorithm approximator result is sufficient on its own for the paper to be accepted.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Main response to reviewers\", \"comment\": \"We thank the reviewers for their constructive feedback!\\n\\nWe would first like to clarify that the main theoretical result holds for a generic deep network with ReLU nonlinearities, an architecture which is standard in practice. We have revised Section 4 and Appendix D in the paper to clarify and explicitly show this. As mentioned by R1, this theoretical result is a \\u201csignificant contribution to the theoretical understanding of meta-learning\\u201d.\\n\\nSecond, to address the reviewers concerns about a disconnect between the theory and experiments, we did two things:\\n1) We added a new experiment in Section 7.2 that directly follows up on the theoretical result, empirically comparing the depth required for meta-learning to the depth required for representing the individual tasks being meta-learned. The empirical results in this section support the theoretical result.\\n2) We clarified in Section 7 the importance of the existing experiments, which is as follows: the theory shows that MAML is just as expressive as black-box (e.g. RNN-based) meta-learners, but this does not, by itself, indicate why we might prefer one method over the other and in which cases we should prefer one over the other. The experiments illustrate how MAML can improve over black-box meta-learners when extrapolating to out-of-distribution tasks.\\n\\nWe respond to individual comments in direct replies to the reviewers comments. Given the low confidence scores, we hope that the reviewers will follow up on our response and adjust their reviews based on our response if things have become more clear.\"}",
"{\"title\": \"Result looks interesting. Presentation could be further improved.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper tries to address an interesting question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm. The authors provide answers, both theoretically and empirically.\\n\\nThe presentation could be further improved. For example, \\n\\n-the notation $\\\\mathcal{L}$ is inconsistent. It has different inputs at each location.\\n-the bottom of page 5, \\\"we then define\\\"?\\n-I couldn't understand the sentence \\\"can approximate any continuous function of (x,y,x^*) on compact subsets of R^{dim(y)}\\\" in Lemma 4.1\\\". \\n-before Equation (1), \\\"where we will disregard the last term..\\\" should be further clarified.\\n-the paragraph before Section 4. \\\"The first goal of this paper is to show that f_{MAML} is a universal function approximation of (D_{\\\\mathcal{T}},x^*)\\\"? A function can only approximate the same type function.\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Revised paper addressing comments\", \"comment\": \"Please see our main response in a comment above that addresses the primary concerns among all reviewers. We reply to your specific comments here.\\n\\n>\\\"the notation $\\\\mathcal{L}$ is inconsistent. It has different inputs at each location\\\"\\nThank you for pointing this out. We have modified the paper in Sections 2.2, 3, and 4 to use two different symbols and use each of these symbols in a consistent manner.\\n\\n>\\\"-the bottom of page 5, \\\"we then define\\\"?\\\"\\nThe lemma previously appeared on the following page, after \\u201cwe then define\\u201d. Now, it appears on the same page.\\n\\n> \\\"I couldn't understand the sentence \\\"can approximate any continuous function of (x,y,x^*) on compact subsets of R^{dim(y)}\\\" in Lemma 4.1\\\". \\\"\\nWe added a footnote to clarify that this assumption is inherited from the UFA theorem.\\n\\n> the paragraph before Section 4. \\\"The first goal of this paper is to show that f_{MAML} is a universal function approximation of (D_{\\\\mathcal{T}},x^*)\\\"? A function can only approximate the same type function.\\nWe modified to text at the end of Section 3 to make it clear that f_{MAML} is the same type of function.\"}",
"{\"title\": \"Revised paper, addressing concerns. [part 2/2]\", \"comment\": \"> \\u201cI'm not sure that such general claims are justified based on the experimental results in this paper. Generalizing to out-of-domain tasks is heavily dependent on the specific level and type of drift between the old and new distributions. These properties aren't studied at all in this work.\\u201d\\nWe modified the first-mentioned claim to be more precise. We agree that out-of-domain generalization is heavily dependent on both the task and the form of drift. Thus, we aimed to study many different levels and types of drift, studying four different types of drift (shear, scale, amplitude, phase) and several levels/amounts of each of these types of drift, within two different problem domains (Omniglot, sinusoid regression). In every single type and level of drift that we experimented with, we observed the same result -- that gradient-descent generalized better than recurrent networks. \\nWith regard to the second claim on resilience to overfitting, this claim is in the context of the experiments with additional gradient steps and is not referring to out-of-domain tasks. The claim is supported by the results in our experiments.\"}",
"{\"title\": \"Important paper\", \"comment\": \"I want to thank the authors for preparing the paper.\\nThe paper clearly shows that model-agnostic meta-learning (MAML) can approximate any learning algorithm.\\nThis was not obvious to me before.\\n\\nI have now more confidence to apply MAML on many new tasks.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"R3 summarizes the reasons for the decision on this paper: \\\"The universal learning algorithm approximator result is a nice result, although I do not agree with the other reviewer that it is a \\\"significant contribution to the theoretical understanding of meta-learning,\\\" which the authors have reinforced (although it can probably be considered a significant contribution to the theoretical understanding of MAML in particular). Expressivity of the model or algorithm is far from the main or most significant consideration in a machine learning problem, even in the standard supervised learning scenario. Questions pertaining to issues such as optimization and model selection are just as, if not more, important. These sorts of ideas are explored in the empirical part of the paper, but I did not find the actual experiments in this section to be very compelling. Still, I think the universal learning algorithm approximator result is sufficient on its own for the paper to be accepted.\\\"\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Revised paper, addressing concerns. [part 1/2]\", \"comment\": \"Thank you for the constructive feedback. All of the concerns raised in the review have been addressed in the revised version of the paper.\\n\\nPlease see our main response in a comment above that addresses the primary concerns among all reviewers. We reply to your specific comments here.\\n\\n> \\u201c...This is because the universal approximation result relies on a particular architecture that is not necessarily the design one would always use in MAML. ... For instance, the authors do not even use the architecture proposed in their proof in their experiments...\\u201d\\nAs mentioned above, we would like to clarify that the result holds for a generic deep network with ReLU nonlinearities that is used in prior papers that use MAML [Finn et al. \\u201817ab, Reed et al. \\u201817] and in the experiments in Section 7 of this paper. We revised Section 4 and Appendix D of the paper to make this more clear and explicitly show how this is the case.\\n\\n> \\u201cPage 4: \\\"\\\\hat{f}(\\\\cdot; \\\\theta') approximates f_{\\\\text{target}}(x, y, x^*) up to arbitrary position\\\". There seems to be an abuse of notation here as the first expression is a function and the second expression is a value.\\u201d\\n> \\u201cPage 4: \\\"\\\\prod_{i=1}^N (W_i - \\\\alpha \\\\nabla_{W_i})\\\". There seems to be a typo here: \\\\nabla_{W_i} should be \\\\nabla_{W_i} L.\\u201d\\nThank you for catching these two typos. We fixed both.\\n\\n> Page 4: \\\"to show universality, we will construct a setting of the weight matrices that enables independent control of the information flow...\\\". How does this differ from the classical UFA proofs? 
The relative technical merit of this paper would be more clear if this is properly discussed.\", \"we_added_text_in_the_latter_part_of_section_3_to_clarify_the_relationship_to_the_ufa_theorem\": \"\\u201cIt is clear how $f_\\\\text{MAML}$ can approximate any function on $x^\\\\star$, as per the UFA theorem; however, it is not obvious if $f_\\\\text{MAML}$ can represent any function of the set of input, output pairs in $\\\\dataset_\\\\task$, since the UFA theorem does not consider the gradient operator.\\u201d\\nOur proof uses the UFA proof as a subroutine, and is otherwise completely distinct.\\n\\n> \\u201cThese questions seem unrelated to the universal learning algorithm approximator result that constitutes the main part of the paper. If you're going to study these question empirically, why didn't you also try to investigate them theoretically (e.g. sample complexity and convergence of MAML)? A systematic and comprehensive analysis of these questions from both a theoretical and empirical perspective would have constituted a compelling paper on its own.\\u201d\\nYes, these two questions would be very interesting to analyze theoretically. We leave such theoretical questions to future work. With regard to the connection between these experiments and the theory, please see our comment above to all of the reviewers -- we added another experiment in Section 7.2 which directly follows up on the theory, studying the depth necessary to meta-learn a distribution of tasks compared to the depth needed for standard learning. We also added more discussion connecting the theory and the existing experiments.\\n\\n> \\u201cWhat are the architectures and hyperparameters used in the experiments, and how sensitive are the meta-learning algorithms to their choice?\\u201d\\nWe outlined most of the experimental details in the main text and in the Appendix. 
We added some additional details that we had missed, in Sections 7.1 and Appendix G.\", \"omniglot\": \"We use a standardized convolutional encoder architecture in the Omniglot domain (4 conv layers each with 64 3x3 filters, stride 2, ReLUs, and batch norm, followed by a linear layer). All methods used the Adam optimizer with default hyperparameters. Other hyperparameter choices were specific to the algorithm and can be found in the respective papers.\", \"sinusoid\": \"With MAML, we used a simple fully-connected network with 2 hidden layers of width 100 and ReLU nonlinearities, and the suggested hyperparameters in the MAML codebase (Adam optimizer, alpha=0.001, 5 gradient steps). On the sinusoid task with TCML, we used an architecture of 2x{ 4 dilated convolution layers with 16 channels, 2x1 kernels, and dilation size of 1,2,4,8 respectively; then an attention block with key/value dimensionality of 8} followed by a 1x1 conv. TCML used the Adam optimizer with default hyperparameters.\\nWe have not found any of the algorithms to be particularly sensitive to the architecture or hyperparameters. The hyperparameters provided in each paper\\u2019s codebases worked well.\"}",
"{\"title\": \"Review (educated guess)\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper provides proof that gradient-based meta-learners (e.g. MAML) are \\\"universal leaning algorithm approximators\\\".\", \"pro\": [\"Generally well-written with a clear (theoretical) goal\", \"If the K-shot proof is correct*, the paper constitutes a significant contribution to the theoretical understanding of meta-learning.\", \"Timely and relevant to a large portion of the ICLR community (assuming the proofs are correct)\"], \"con\": [\"The theoretical and empirical parts seem quite disconnected. The theoretical results are not applied nor demonstrated in the empirical section and only functions as an underlying premise. I wonder if a purely theoretical contribution would be preferable (or with even fewer empirical results).\", \"It has not yet been possible for me to check all the technical details and proofs.\"], \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"New experiment & more discussion added\", \"comment\": \"Please see our main response in a comment above that addresses the primary concerns among all reviewers. We reply to your specific comments here.\\n\\n> \\u201cThe theoretical and empirical parts seem quite disconnected.\\u201d\\nAs mentioned in our main response above, we added a new experiment in Section 7.2 that connects to the theory. The theory suggests that depth is important for an expressive meta-learner compared to standard neural network learner, for which a single hidden layer should theoretically suffice. The results in our new experimental analysis support our theoretical finding that more depth is needed for MAML than for representing individual tasks. We also added additional discussion to clarify and motivate the existing experiments of inductive bias.\"}"
]
} |
S1ANxQW0b | Maximum a Posteriori Policy Optimisation | [
"Abbas Abdolmaleki",
"Jost Tobias Springenberg",
"Yuval Tassa",
"Remi Munos",
"Nicolas Heess",
"Martin Riedmiller"
] | We introduce a new algorithm for reinforcement learning called Maximum a-posteriori Policy Optimisation (MPO) based on coordinate ascent on a relative-entropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings. | [
"Reinforcement Learning",
"Variational Inference",
"Control"
] | Accept (Poster) | https://openreview.net/pdf?id=S1ANxQW0b | https://openreview.net/forum?id=S1ANxQW0b | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"r1x7J3PkgN",
"S1DYhi9gz",
"HkHiimLEG",
"H1y3N2alf",
"HJEuS8amM",
"Hy4_ANE-f",
"By6E3iqxz",
"S1ymUU6Xz",
"Sy4xXkaBG",
"HyixP8aQz",
"rypb6tngM",
"Hkk3DISC-",
"HkX4eXIC-",
"HkJ6aF2xf",
"H18NG4zPM"
],
"note_type": [
"comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544678362835,
1511861375520,
1515760541420,
1512060070902,
1515181419987,
1512488555764,
1511861301074,
1515181591211,
1517249260208,
1515181811477,
1511984388978,
1509414823044,
1509466155110,
1511984566837,
1518645805719
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1110/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1110/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1110/AnonReviewer2"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper1110/AnonReviewer2"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Mathematical mistakes in the derivation of the \\\"generalized M-step\\\"\", \"comment\": \"There are several mathematical issues with the derivation for the generalized M-step, including in the current arxiv version of the paper.\\n\\n1. If you are doing a Laplace approximation and do a Gaussian prior around the current policy, it is not the covariance that is equal to the Fisher (even if with temperature), but rather the precision. This surprisingly is not really a typo as it carries out to the derivation in D.3 where the authors claim that the quadratic with the **inverse** Fisher is somehow a second order Taylor expansion to the KL, which is clearly not. \\n\\n2. The authors in the main text are talking about the **empirical** Fisher. Suddenly, we go to D.3 for the actual derivation and ignoring the mistake discussed in 1. they are motivating the new KL term by being a second order Taylor expansion to the KL. This is flawed, as in fact only the true Fisher has such property and the empirical Fisher has nothing to do with the KL divergence. \\n\\n3. In terms of second order Taylor expansions of the KL it is well known that the Fisher is the second-order derivative tensor of both the forward and the backward KL. Hence, it remains significantly unjustified why one is chosen above the other without any either theoretical or empirical evidence for the choice.\\n\\nFinally, the fact that the more stable version decouples the KL term for the term of the mean and the covariance bring in to significant questioning where the whole method has any relevance to \\\"maximum a-posterior\\\" optimization, rather than a Trust Region method. An interesting paper indeed.\"}",
"{\"title\": \"Re: Clarifications\", \"comment\": \"Thank you for carefully reading of the paper and uncovering a few minor mistakes.\\n\\n> Firstly, I think it would be helpfull to formally define what $$q(\\\\rho)$$ is. My current assumption is: $$q(\\\\rho) = p(s_0) \\\\prod_1^\\\\infty p(s_{t+1}|a_t, s_t) q(a_t|s_t)$$.\\nYour assumption is correct. q(\\\\rho) is analogous to p(\\\\rho) (as described in the background section on MDPs). We will add this definition. \\n\\n>1. I think at the end of the line you should have $$+ \\\\log p(\\\\theta)$$ rather than $$+ p(\\\\theta)$$ (I believe this is a typo)\\nCorrect, this is indeed a typo and will be fixed in the next revision of the paper.\\n\\n> 2. In the definition of the log-probabilities, the $$\\\\alpha$$ parameter appears only in the definition of 'p(O=1|\\\\rho)'. The way it appears is as a denominator in the log-probability. In line 4 of equation (1) it has suddenly appeared as a multiplier in front of the log-densities of $$\\\\pi(a|s_t)$$ and $$q(a|s_t)$$. This is possible if we factor out the $$\\\\alpha^{-1}$$ from the sum of the rewards, but then on that line, there should be a prefactor of $$\\\\alpha^{-1}$$ in front of the expectation over 'q' which seems missing. (I believe this is a typo as well).\\n\\nIn this step we indeed just multiplied with the (non-zero) \\\\alpha. We presume you meant that alpha is then, however, missing in front of the prior p(\\\\theta) here. You are correct and this will be also fixed in the next revision.\\n\\n> 3. In the resulting expectation, it is a bit unclear how did the discount factors $$\\\\gamma^t$$ have appeared as well as in front of the rewards also in front of the KL divergences? 
From the context provided I really failed to be able to account for this, and given that for the rest of the paper this form has been used more than once I was wondering if you could provide some clarification on the derivation of the equation as it is not obvious to at least some of the common readers of the paper.\\n\\nThank you for pointing out this inconsistency which has arisen due to some last minute changes in notation that we introduced when we unified the notation in the paper - switching from presenting the finite-horizon, undiscounted, setting to using the infinite-horizon formulation. As pointed out by previous work (e.g. Rawlik et al.) there is a direct correspondence between learning / inference in an appropriately constructed graphical model (as suggested by the first line of Eq. 1) and the regularized control objective in the finite horizon, undiscounted case. The regularized RL objective still exists in the discounted, infinite horizon case (e.g. Rawlik et al. or see [1] for another construction), but an equivalent graphical model is harder to construct (and is not of the form currently presented in the paper; e.g. see [1]). We will fix this and clarify the relation in the revision\\n\\n[1] Probabilistic Inference for Solving Discrete and Continuous State Markov Decision Processes, Marc Toussaint, Amos Storkey, ICML 2004\"}",
"{\"title\": \"Paper updated\", \"comment\": \"We have updated the paper to address the concerns raised by the reviewers.\", \"in_particular_we_have_included\": [\"A detailed theoretical analysis of the MPO framework\", \"An updated methods section that has a simpler derivation of the algorithm\"]}",
"{\"title\": \"Interesting off-policy algorithms with nice results\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This is an interesting policy-as-inference approach, presented in a reasonably clear and well-motivated way. I have a couple questions which somewhat echo questions of other commenters here. Unfortunately, I am not sufficiently familiar with the relevant recent policy learning literature to judge novelty. However, as best I am aware the empirical results presented here seem quite impressive for off-policy learning.\\n\\n- When is it possible to normalize the non-parametric q(a|s) in equation (6)? It seems to me this will be challenging in most any situation where the action space is continuous. Is this guaranteed to be Gaussian? If so, I don\\u2019t understand why.\\n\\n\\u2013 In equations (5) and (10), a KL divergence regularizer is replaced by a \\u201chard\\u201d constraint. However, for optimization purposes, in C.3 the hard constraint is then replaced by a soft constraint (with Lagrange multipliers), which depend on values of epsilon. Are these values of epsilon easy to pick in practice? If so, why are they easier to pick than e.g. the lambda value in eq (10)?\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Thank you for your review, we have prepared an additional theoretical analysis and will update the paper\", \"comment\": \"We appreciate the detailed comments and questions regarding the connection between our method and EM methods. We have addressed your main concern with an additional theoretical analysis of the algorithm, strengthening the paper.\\n\\n> 1. For parametric EM case, there is asymptotic convergence guarantee to local optima case; However, for nonparametric \\n> EM case, there is no guarantee for that. This is the biggest concern I have for the theoretical justification of the paper.\\n\\nWe have derived a proof that gives a monotonic improvement guarantee for the nonparametric variant of the algorithm under certain circumstances. We will include this proof in the paper. To summarize: Assuming Q can be represented and estimated, the \\\"partial\\\" E-step in combination with an appropriate gradient-based M-step leads to an improvement of the KL regularized objective and guarantees monotonic improvement of the overall procedure under certain circumstances. See also our response to the Anonymous question below.\\n\\n> 2. In section 4, it is said that Retrace algorithm from Munos et al. (2016) is used for policy evaluation. This is not true. \\n> The Retrace algorithm, is per se, a value iteration algorithm. I think the author could say using the policy evaluation version of Retrace, \\n> or use the truncated importance weights technique as used in Retrace algorithm, which is more accurate.\\n\\nWe will clarify that we are using the Retrace operator for policy evaluation only (This use case was indeed also analyzed in Munos et al. (2016)).\\n\\n> Besides, a minor point: Retrace algorithm is not off-policy stable with function approximation, as shown in several recent papers, such as \\n> \\u201cConvergent Tree-Backup and Retrace with Function Approximation\\u201d. 
But this is a minor point if the author doesn\\u2019t emphasize too much about off-policy stability.\\n\\nWe agree that off-policy stability with function approximation is an important open problem that deserves additional attention but not one specific to this method (i.e. any existing DeepRL algorithm shares these concerns). We will add a short note.\\n\\n> 3. The shifting between the unconstrained multiplier formulation in Eq.9 to the constrained optimization formulation in Eq.10 should be clarified. \\n> Usually, an in-depth analysis between the choice of \\\\lambda in multiplier formulation and the \\\\epsilon in the constraint should be discussed, which is necessary for further theoretical analysis. \\n\\nWe now have a detailed analysis of the unconstrained multiplier formulation (see comment above) of our algorithm. In practice we found that implementing updates according to both hard constraints and a fixed regularizer worked well for individual domains. Both \\\\lambda and \\\\epsilon can be found via a small hyperparameter search in this case. When applying the algorithm to many different domains (with widely different reward scales) with the same set of hyperparameters we found it easier to use the hard-constrained version, which is why we placed a focus on it. We will include these experimental results in an updated version of the paper. We believe these observations are in line with research on hard-constrained/KL-regularized on-policy learning algorithms such as PPO/TRPO (for which explicit connections between the two settings also exist). \\n\\n> 4. The experimental conclusions are conducted without sound evidence. For example, the author claims the method to be 'highly data efficient' compared with existing approaches, however, there is no strong evidence supporting this claim. \\n\\nWe believe that the large set of experiments we conducted in the experimental section gives evidence for this. Figure 4 e.g. 
clearly shows the improved data-efficiency MPO gives over our implementations of state-of-the-art RL algorithms for both on-policy (PPO) and off-policy learning (DDPG, policy gradient + Retrace). Further, when looking at the results for the parkour domain we observe an order of magnitude improvement over the reference experiment. We have started additional experiments for parkour with a full humanoid body - leading to similar speedups over PPO - which will be included in the final version and further solidify the claim on a more difficult benchmark.\"}",
"{\"title\": \"some details to discuss\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper studies new off-policy policy optimization algorithm using relative entropy objective and use EM algorithm to solve it. The general idea is not new, aka, formulating the MDP problem as a probabilistic inference problem.\", \"there_are_some_technical_questions\": \"1. For parametric EM case, there is asymptotic convergence guarantee to local optima case; However, for nonparametric EM case, there is no guarantee for that. This is the biggest concern I have for the theoretical justification of the paper.\\n\\n2. In section 4, it is said that Retrace algorithm from Munos et al. (2016) is used for policy evaluation. This is not true. The Retrace algorithm, is per se, a value iteration algorithm. I think the author could say using the policy evaluation version of Retrace, or use the truncated importance weights technique as used in Retrace algorithm, which is more accurate.\\n\\nBesides, a minor point: Retrace algorithm is not off-policy stable with function approximation, as shown in several recent papers, such as \\n\\u201cConvergent Tree-Backup and Retrace with Function Approximation\\u201d. But this is a minor point if the author doesn\\u2019t emphasize too much about off-policy stability.\\n\\n3. The shifting between the unconstrained multiplier formulation in Eq.9 to the constrained optimization formulation in Eq.10 should be clarified. Usually, an in-depth analysis between the choice of \\\\lambda in multiplier formulation and the \\\\epsilon in the constraint should be discussed, which is necessary for further theoretical analysis. \\n\\n4. The experimental conclusions are conducted without sound evidence. For example, the author claims the method to be 'highly data efficient' compared with existing approaches, however, there is no strong evidence supporting this claim. 
\\n\\n\\nOverall, although the motivation of this paper is interesting, I think there is still a lot of details to improve.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for spotting some minor inconsistencies\", \"comment\": \"Thank you for your thorough read of the paper.\\n\\n> The derivation of \\\"one-step KL regularised objective\\\" is unclear to me and this seems to be related to a partial E-step. \\n\\nWe will clarify the relationship between the one-step objective and Eq. 1 in more detail in a revised version of the paper. We will also include a proof that the the specific \\\"partial\\\" update we use in the E-step leads to an improvement in Eq. (1) and guarantees monotonic improvement of the overall procedure.\\n\\nIn short, the relation between objective (1) and formula (4) is as follows:\\ninstead of optimizing objective (1) directly in the E-step (which would entail running soft-Q-learning to convergence - e.g. Q-learning with additional KL terms of subsequent time-steps in a trajectory added to the rewards) we start from the \\\"unregularized\\\" Q-function (Eq. (3)) and expand it via the \\\"regularized\\\" Bellman operator T Q(s,a) = E_a[Q(s,a)] + \\\\alpha KL(q || \\\\pi). We thus only consider the KL at a given state s in the E-step and not the \\\"full\\\" objective from (1). Nonetheless, as mentioned above we have now prepared a proof that this still leads to an improvement in (1).\\n\\n> (2) As far as I know, the previous works on variational RL maximize the marginal log-likelihood p(O=1|\\\\theta) (Toussaint (2009) and Rawlik (2012)), whereas you maximizes the unnormalized posterior p(O=1, \\\\theta) with the prior assumption on $\\\\theta$. I wonder if the prior assumption enhances the performance. \\n\\nCorrect. The prior p(\\\\theta) allows us to add regularization to the M-step of our procedure (enforcing a trust-region on the policy). We found this to be important when dealing with hihg-dimensional systems like the humanoid where the M-step could otherwise overfit (as the integral over action is only evaluated using 30 samples in our experiments).\"}",
"{\"title\": \"We thank the reviewer for comments and thoughtful questions.\", \"comment\": \"We thank the reviewer for comments and thoughtful questions. We reply to your main concerns in turn below.\\n\\n> When is it possible to normalize the non-parametric q(a|s) in equation (6)? It seems to me this will be challenging in most any situation where the action space is continuous. \\n> Is this guaranteed to be Gaussian? If so, I don\\u2019t understand why.\\n\\nPlease see appendix, section C.2. In the parametric case the solution for q(a|s) is trivially normalized when we impose a parametric form that allows analytic evaluation of the normalization function (such as a Gaussian distribution). . \\nFor the non-parametric case note that the normalizer is given by \\nZ(s) = \\\\int \\\\pi_old(a|s) exp( Q(s,a)/eta) da,\\ni.e. it is an expectation with respect to our old policy for which we can obtain a MC estimate: \\\\hat{Z}(s) = 1/N \\\\sum_i exp(Q(s,a_i)/eta) with a_i \\\\sim \\\\pi_old( \\\\cdot | s).\\nThus we can empirically normalize the density for those state-action samples that we use to estimate pi_new in the M-step.\\n\\n> In equations (5) and (10), a KL divergence regularizer is replaced by a \\u201chard\\u201d constraint. \\n> However, for optimization purposes, in C.3 the hard constraint is then replaced by a soft constraint (with Lagrange multipliers), which depend on values of epsilon. \\n> Are these values of epsilon easy to pick in practice? If so, why are they easier to pick than e.g. the lambda value in eq (10)?\\n\\nThank you for pointing out that the reasoning behind this was not entirely easy to follow. We will improve the presentation in the paper. Indeed we found that choosing epsilon can be easier than choosing a multiplier for the KL regularizer. 
This is due to the fact that the scale of the rewards is unknown a-priori and hence the multiplier that trades of maximizing expected reward and minimizing KL can be expected to change for different RL environments. In contrast to this, when we put a hard constraint on the KL we can explicitly force the policy to stay \\\"epsilon-close\\\" to the last solution - independent of the reward scale. This allows for an easier transfer of hyperparameters across tasks.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The main idea of policy-as-inference is not new, but it seems to be the first application of this idea to deep RL, and is somewhat well motivated. The computational details get a bit hairy, but the good experimental results and the inclusion of ablation studies pushes this above the bar.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Thank you for your questions and insightful comments.\", \"comment\": \"We thank you for your questions and insightful comments.\\n\\n> There are several approaches that already use the a combination of the KL-constraint with reverse KL on a non-\\nparametric distribution and subsequently an M-projection to obtain again a parametric distribution, see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. \\n> These algorithms do not use the inference-based view but the trust region justification. As in the non-parametric case, the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region of the expected reward?\\n\\nThank you for pointing out the additional related work. We will include it in the paper. Regarding the EM vs. trust-region question: The benefit of deriving the algorithm from the perspective of an EM-like coordinate ascent is that it motivates and provides a convenient means for theoretical analysis of the two-step procedure used in our approach. See the added a theoretical analysis that was added to the appendix of the paper.\\n\\n> It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. \\n> First, alpha changes every iteration of the algorithm while the objective assumes that alpha is constant. \\n> This means that we change the objective all the time which is theoretically a bit weird. \\n> Moreover, the presented algorithm also changes the prior all the time (in order to introduce the 2nd trust region) in the M-step. \\n> Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. \\n> Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions\\n> just out of the motivation that we need trust regions in policy search? 
Then the objective is clearly defined. \\n\\nThe reviewers point is well taken. While we think the unconstrained (soft-regularized) is instructive and useful for theoretical analysis the hard-constrained version can indeed be understood as proposed by the reviewer and equally provides important insights. We will clarify this in the paper and also include an experimental comparison between the soft and hard-regularized cases.\", \"regarding_your_two_concerns\": \"For our theoretical guarantee (that we have now derived in the appendix) to hold we have to fix alpha. However, in practice it changes slowly during optimization and converges to a stable value. One can indeed think of the second trust-region as a simple regularizer that prevents overfitting/too large changes in the (sample-based) M-step (similar small changes in the policy are also required by our proof).\\n\\n- Regarding the additional experiments you asked for:\\n\\nWe agree and have carried out additional experiments that will be included in the final version, preliminary results are as follows:\\n\\n1) MPO without trust region in M-step:\\nAlso works well for low-dimensional problems but is less robust for high-dimensional problems such as the humanoid.\\n\\n2) MPO without retrace algorithm for getting the Q-value\\nIs significantly slower to reach the same level of performance in the majority of the control suite tasks (retrace + MPO is never worse in any of the control suite tasks).\\n\\n3) test different epsilons for E and M step\\nThe algorithm seems to be robust to settings of epsilon - as long as it is set roughly to the right order of magnitude (10^-3 to 10^-2 for the E-step, 10^-4 to 10^-1 for the M-step). A very small epsilon will, of course, slow down convergence.\"}",
"{\"title\": \"The paper presents an interesting new algorithm for deep reinforcement learning which outperforms state of the art methods.\", \"rating\": \"7: Good paper, accept\", \"review\": [\"The paper presents a new algorithm for inference-based reinforcement learning for deep RL. The algorithm decomposes the policy update in two steps, an E and an M-step. In the E-step, the algorithm estimates a variational distribution q which is subsequentially used for the M-step to obtain a new policy. Two versions of the algorithm are presented, using a parametric or a non-parametric (sample-based) distribution for q. The algorithm is used in combination with the retrace algorithm to estimate the q-function, which is also needed in the policy update.\", \"This is a well written paper presenting an interesting algorithm. The algorithm is similar to other inference-based RL algorithm, but is the first application of inference based RL to deep reinforcement learning. The results look very promising and define a new state of the art or deep reinforcement learning in continuous control, which is a very active topic right now. Hence, I think the paper should be accepted.\", \"I do have a few comments / corrections / questions about the paper:\", \"There are several approaches that already use the a combination of the KL-constraint with reverse KL on a non-parametric distribution and subsequently an M-projection to obtain again a parametric distribution, see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. These algorithms do not use the inference-based view but the trust region justification. As in the non-parametric case, the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region of the expected reward?\", \"It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. 
First, alpha changes every iteration of the algorithm while the objective assumes that alpha is constant. This means that we change the objective all the time which is theoretically a bit weird. Moreover, the presented algorithm also changes the prior all the time (in order to introduce the 2nd trust region) in the M-step. Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions just out of the motivation that we need trust regions in policy search? Then the objective is clearly defined.\", \"I did not get whether the additional \\\"one-step KL regularisation\\\" is obtained from the lower bound or just added as additional regularisation? Could you explain?\", \"The algorithm has now 2 KL constraints, for E and M step. Is the epsilon for both the same or can we achieve better performance by using different epsilons?\", \"I think the following experiments would be very informative:\", \"MPO without trust region in M-step\", \"MPO without retrace algorithm for getting the Q-value\", \"test different epsilons for E and M step\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Clarification of Equation 1\", \"comment\": \"These might be very obvious questions, but I failed to derive the last line (line 4) in equation (1) in the paper.\\n\\nFirstly, I think it would be helpfull to formally define what $$q(\\\\rho)$$ is. My current assumption is:\\n$$q(\\\\rho) = p(s_0) \\\\prod_1^\\\\infty p(s_{t+1}|a_t, s_t) q(a_t|s_t)$$\\nwhere the 'p' distributions are taken to be equal to the real environmental state transitions.\\n\\nNow, there are a few problems that I encountered when trying to derive equation (1):\\n\\n1. I think at the end of the line you should have $$+ \\\\log p(\\\\theta)$$ rather than $$+ p(\\\\theta)$$ (I believe this is a typo)\\n\\n2. In the definition of the log-probabilities, the $$\\\\alpha$$ parameter appears only in the definition of 'p(O=1|\\\\rho)'. The way it appears is as a denominator in the log-probability. In line 4 of equation (1) it has suddenly appeared as a multiplier in front of the log-densities of $$\\\\pi(a|s_t)$$ and $$q(a|s_t)$$. This is possible if we factor out the $$\\\\alpha^{-1}$$ from the sum of the rewards, but then on that line, there should be a prefactor of $$\\\\alpha^{-1}$$ in front of the expectation over 'q' which seems missing. (I believe this is a typo as well).\\n\\n3. In the resulting expectation, it is a bit unclear how did the discount factors $$\\\\gamma^t$$ have appeared as well as in front of the rewards also in front of the KL divergences? From the context provided I really failed to be able to account for this, and given that for the rest of the paper this form has been used more than once I was wondering if you could provide some clarification on the derivation of the equation as it is not obvious to at least some of the common readers of the paper.\"}",
"{\"title\": \"Comments\", \"comment\": \"(1) Clarification of Equation 4\\n\\nThe derivation of \\\"one-step KL regularised objective\\\" is unclear to me and this seems to be related to a partial E-step. \\n\\nWould you explain this part in more detail?\\n\\n(2) As far as I know, the previous works on variational RL maximize the marginal log-likelihood p(O=1|\\\\theta) (Toussaint (2009) and Rawlik (2012)), whereas you maximizes the unnormalized posterior p(O=1, \\\\theta) with the prior assumption on $\\\\theta$. \\nI wonder if the prior assumption enhances the performance.\"}",
"{\"title\": \"Few comments...\", \"comment\": [\"I do have a few comments / corrections / questions about the paper:\", \"There are several approaches that already use the a combination of the KL-constraint with reverse KL on a non-parametric distribution and subsequently an M-projection to obtain again a parametric distribution, see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. These algorithms do not use the inference-based view but the trust region justification. As in the non-parametric case, the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region of the expected reward?\", \"It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. First, alpha changes every iteration of the algorithm while the objective assumes that alpha is constant. This means that we change the objective all the time which is theoretically a bit weird. Moreover, the presented algorithm also changes the prior all the time (in order to introduce the 2nd trust region) in the M-step. Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions just out of the motivation that we need trust regions in policy search? Then the objective is clearly defined.\", \"I did not get whether the additional \\\"one-step KL regularisation\\\" is obtained from the lower bound or just added as additional regularisation? Could you explain?\", \"The algorithm has now 2 KL constraints, for E and M step. 
Is the epsilon for both the same or can we achieve better performance by using different epsilons?\", \"I think the following experiments would be very informative:\", \"MPO without trust region in M-step\", \"MPO without retrace algorithm for getting the Q-value\", \"test different epsilons for E and M step\"]}",
"{\"title\": \"Code\", \"comment\": \"Hi,\\n\\nReally impressive work. Do you have any plan to release the code?\"}"
]
} |
B1tC-LT6W | Trace norm regularization and faster inference for embedded speech recognition RNNs | [
"Markus Kliegl",
"Siddharth Goyal",
"Kexin Zhao",
"Kavya Srinet",
"Mohammad Shoeybi"
] | We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications. Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameter trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, resulting in 3x to 7x speed ups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers. | [
"LVCSR",
"speech recognition",
"embedded",
"low rank factorization",
"RNN",
"GRU",
"trace norm"
] | Reject | https://openreview.net/pdf?id=B1tC-LT6W | https://openreview.net/forum?id=B1tC-LT6W | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HJcmDCFgz",
"Hk1q_liEG",
"BJ_PnTW7M",
"Bk-k0Ctgz",
"BJgpEMDEM",
"HksCB16HG",
"By29r_AeG",
"S1UpspbmG",
"HkhMCaWQM"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1511806754319,
1516075142739,
1514425439854,
1511808473461,
1515820215848,
1517250002603,
1512109459584,
1514425278448,
1514425875924
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper74/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper74/Authors"
],
[
"ICLR.cc/2018/Conference/Paper74/Authors"
],
[
"ICLR.cc/2018/Conference/Paper74/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper74/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper74/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper74/Authors"
],
[
"ICLR.cc/2018/Conference/Paper74/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The authors propose a strategy for compressing RNN acoustic models in order to deploy them for embedded applications. The technique consists of first training a model by constraining its trace norm, which allows it to be well-approximated by a truncated SVD in a second fine-tuning stage. Overall, I think this is interesting work, but I have a few concerns which I\\u2019ve listed below:\\n\\n1. Section 4, which describes the experiments of compressing server sized acoustic models for embedded recognition seems a bit \\u201cdisjoint\\u201d from the rest of the paper. I had a number of clarification questions spefically on this section:\\n- Am I correct that the results in this section do not use the trace-norm regularization at all? It would strengthen the paper significantly if the experiments presented on WSJ in the first section were also conducted on the \\u201cinternal\\u201d task with more data.\\n- How large are the training/test sets used in these experiments (for test sets, number of words, for training sets, amount of data in hours (is this ~10,000hrs), whether any data augmentation such as multi-style training was done, etc.)\\n- What are the \\u201ctier-1\\u201d and \\u201ctier-2\\u201d models in this section? It would also aid readability if the various models were described more clearly in this section, with an emphasis on structure, output targets, what LMs are used, how are the LMs pruned for the embedded-size models, etc. Also, particularly given that the focus is on embedded speech recognition, of which the acoustic model is one part, I would like a few more details on how decoding was done, etc.\\n- The details in appendix B are interesting, and I think they should really be a part of the main paper. 
That being said, the results in Section B.5, as the authors mention, are somewhat preliminary, and I think the paper would be much stronger if the authors can re-run these experiments were models are trained to convergence.\\n- The paper focuses fairly heavily on speech recognition tasks, and I wonder if it would be more suited to a conference on speech recognition. \\n\\n2. Could the authors comment on the relative training time of the models with the trace-norm regularizer, L2-regularizer and the unconstrained model in terms of convergence time.\\n\\n3. Clarification question: For the WSJ experiments was the model decoded without an LM? If no LM was used, then the choice of reporting results in terms of only CER is reasonable, but I think it would be good to also report WERs on the WSJ set in either case.\\n\\n4. Could the authors indicate the range of values of \\\\lambda_{rec} and \\\\lambda_{nonrec} that were examined in the work? Also, on a related note, in Figure 2, does each point correspond to a specific choice of these regularization parameters?\\n\\n5. Figure 4: For the models in Figure 4, it would be useful to indicate the starting CER of the stage-1 model before stage-2 training to get a sense of how stage-2 training impacts performance.\\n\\n6. Although the results on the WSJ set are interesting, I would be curious if the same trends and conclusions can be drawn from a larger dataset -- e.g., the internal dataset that results are reported on later in the paper, or on a set like Switchboard. I think these experiments would strengthen the paper.\\n\\n7. The experiments in Section 3.2.3 were interesting, since they demonstrate that the model can be warm-started from a model that hasn\\u2019t fully converged. Could the authors also indicate the CER of the model used for initialization in addition to the final CER after stage-2 training in Figure 5.\\n\\n8. 
In Section 4, the authors mention that quantization could be used to compress models further although this is usually degrades WER by 2--4% relative. I think the authors should consider citing previous works which have examined quantization for embedded speech recognition [1], [2]. In particular, note that [2] describes a technique for training with quantized forward passes which results in models that have smaller performance degradation relative to quantization after training.\", \"references\": \"[1] Vincent Vanhoucke, Andrew Senior, and Mark Mao, \\u201cImproving the speed of neural networks on cpus,\\u201d in Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.\\n[2] Raziel Alvarez, Rohit Prabhavalkar, Anton Bakhtin, \\u201cOn the efficient representation and execution of deep acoustic models,\\u201d Proc. of Interspeech, pp. 2746 -- 2750, 2016.\\n\\n9. Minor comment: The authors use the term \\u201cwarmstarting\\u201d to refer to the process of training NNs by initializing from a previous model. It would be good to clarify this in the text.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Clarification of contributions\", \"comment\": \"Indeed, at this point, it seems hard to escape the conclusion that trace norm regularization is not substantially or at all superior to L2 regularization with respect to the number of parameters versus CER trade-off. In retrospect, we should have written the paper quite differently to highlight the comparison of regularization techniques as well as some of our other main contributions.\\n\\nFor the record and for the benefit of possible future viewers of this page, we would like to list one more time some of these contributions:\\n\\n1. The initial baseline we faced, based on the literature, is actually the unregularized green points in Figure 4. Producing the strong L2 regularized baseline is itself a contribution of this paper. We found that for both trace norm and L2 regularization, getting such strong results requires separate regularization strengths for the hidden-to-hidden and input-to-hidden weights of recurrent layers. See Figure 1.\\n\\n2. We showed that, whether using L2 or trace norm regularization, it is not necessary to train \\\"stage 1\\\" models fully. A few epochs should suffice. This could substantially speed up training of large models. (See Figure 5.)\\n\\n3. We created and made publicly available efficient GEMM kernels for small batch sizes on the ARM platform.\\n\\nFinally, as a pointer for possible future readers, we would like to mention that we do not think the approximate doubling of parameters in stage 1 using trace norm regularization is a serious obstacle. We suspect using rank r = min(m,n)/2 instead of r = min(m, n) in stage 1 would not impact results much for most problems. However, to be clear, we have not tested this and do not report this in the paper.\"}",
"{\"title\": \"Fixed mistake in Figure 4\", \"comment\": \"Thank you for your thorough review. Thanks to your comment 3 in particular, we found an error in our preparation of Figure 4. We have remedied this situation and the figure looks more reasonable now. Unfortunately, our claim of more \\u201cconsistent\\u201d good results appears weakened through this finding. However, the rest of the paper is not affected by this mistake.\", \"to_respond_to_your_points_in_detail\": \"1.Yes, we did the hyperparameter comparisons on a validation set that is separate from the train set. We have clarified this in the text.\\n\\n2. Figure 3 shows fully converged models that are uncompressed (as we only do the compression for stage 2). The green points correspond to the baseline model mentioned in B.1, trained without any regularization. Without regularization, this baseline model is seen to perform quite poorly in terms of final CER. For WSJ, we found that models benefit greatly from regularization. Therefore, to have fair baselines to compare trace norm regularization against, we tuned L2 regularized models just as extensively as we tuned trace norm regularized models. The L2 regularized models are the orange points. We have clarified this in the text.\\n\\n3. Thank you for drawing our attention again to the orange points in Figure 4. It turns out we made an error: the stage 1 models used for Figure 4 (for both trace norm and L2 regularization) were actually selected to another criterion regarding CER vs. rank at 90% trade-off we considered earlier on, rather than just best CER as we indicated in the text. After fixing the criterion to \\u201cbest CER\\u201d as we had intended, there is no longer such drastically different behavior between the orange points. 
We have corrected the figure and updated the claims about more consistent training.\", \"the_corrected_lambda_values_and_the_cer_values_of_the_models_that_were_used_as_starting_points_for_the_stage_2_experiments_are_as_follows\": \"L2 models, CER, \\ud835\\udf06nonrec,\\ud835\\udf06rec \\n\\t1, 6.6963, 0.05, 0.01\\n\\t2, 6.7536, 0.05, 0.005\\n\\t3, 6.7577, 0.05, 0.0025\\n\\tTrnorm models, CER, \\ud835\\udf06nonrec,\\ud835\\udf06rec \\n\\t1, 6.6471, 0.02, 0.001\\n\\t2, 6.7475, 0.02, 0.005\\n\\t3, 6.7823, 0.02, 0.0005\", \"writing\": \"We have clarified the meaning of \\u201crec\\u201d and \\u201cnonrec\\u201d in the main body of the text. We did not want to go into the full details of the Deep Speech 2 architecture in the main text, as we feel the details are not very pertinent to our present study and may distract the reader from the generality of the ideas. However, we have tried to provide more detail on those parts that are relevant. We hope the balance we struck now improves the exposition.\"}",
"{\"title\": \"Model compression with trace norm regularization - pertinent details on experiments missing\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The problem considered in the paper is of compressing large networks (GRUs) for faster inference at test time.\", \"the_proposed_algorithm_uses_a_two_step_approach\": \"1) use trace norm regularization (expressed in variational form) on dense parameter matrices at training time without constraining the number of parameters, b) initializing from the SVD of parameters trained in stage 1, learn a new network with reduced number of parameters.\\n\\nThe experiments on WSJ dataset are promising towards achieving a trade-off between number of parameters and accuracy.\", \"i_have_the_following_questions_regarding_the_experiments\": \"1. Could the authors confirm that the reported CERS are on validation/test dataset and not on train/dev data? It is not explicitly stated. I hope it is indeed the former, else I have a major concern with the efficacy of the algorithm as ultimately, we care about the test performance of the compressed models in comparison to uncompressed model. \\n\\n2. In B.1 the authors use an increasing number units in the hidden layers of the GRUs as opposed to a fixed size like in Deep Speech 2, an obvious baseline that is missing from the experiments is the comparison with *exact* same GRU (with 768, 1024, 1280, 1536 hidden units) *without any compression*. \\n\\n3. What do different points in Fig 3 and 4 represent. What are the values of lamdas that were used to train (the l2 and trace norm regularization) the Stage 1 of models shown in Fig 4. I want to understand what is the difference in the two types of behavior of orange points (some of them seem to have good compression while other do not - it the difference arising from initialization or different choice of lambdas in stage 1. 
\\n\\nIt is interesting that although L2 regularization does not lead to low \\\\nu parameters in Stage 1, the compression stage does have comparable performance to that of trace norm minimization. The authors point it out, but a further investigation might be interesting.\", \"writing\": \"1. The GRU model for which the algorithm is proposed is not introduced until the appendix. While it is a standard network, I think the details should still be included in the main text to understand some of the notation referenced in the text like \\u201c\\\\lambda_rec\\u201d and \\u201c\\\\lambda_norec\\u201d\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to authors\", \"comment\": \"After the correction in Figure 4, for final compression performance, the trace norm regularization proposed by this paper is comparable to more standard L2 regularization. In light of this new experiment, there is not enough evidence to prefer using trace norm regularization and factorized weights in stage 1. In fact, the factorized representation doubles the number of parameters to be learned in stage 1.\\n\\nThe experiments do not seem to validate the significance of the main contribution of the paper - namely, using trace norm regularization in stage 1 for better performance after compression with low-rank factorization. Am I missing something here?\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Pros\\n-- Shows alternative strategies to train low-rank factored weight matrices for recurrent nets.\\n\\nCons\\n-- Minor modifications (and gains) over other forms of regularization like L2.\\n-- Results are only on an ASR task, so it\\u2019s not entirely clear how they\\u2019ll work on other tasks.\\n\\nAs pointed out by the reviewers, unless the authors show that the techniques generalize well to other tasks and larger datasets, it is hard to accept the paper to the main conference. The AC, therefore, recommends that the paper be rejected.\"}",
"{\"title\": \"This paper presents a trace norm regularization technique for factorized matrix multiplication with the purpose of overcoming the computational complexity in DNN and RNN\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper is well written and clearly explained. It is an experimental paper, as it has more content on the experimentation and less on problem definition and formulation. The experimental section is strong and evaluates across different datasets and various scenarios. However, I feel the contribution of the paper to the topic is incremental and not significant enough to be accepted in this venue. It only considers a slight modification of the loss function by adding a trace norm regularization term.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"We contend the paper is of more general interest.\", \"comment\": \"We would like to thank you for taking the time to review our paper. Although these particular results for speech may appear incremental, we believe the methodologies and insights go far beyond speech recognition and should be of interest to researchers working on low rank methods and compressing large non-convolutional neural networks. In addition to the systematic modification to the loss function that we propose, we also present a methodology for training models with this modified loss function and also a methodology for studying the effectiveness and making fair comparisons of such techniques. There are also other critical insights that were necessary to make this work, such as the need to treat the recurrent (hidden-to-hidden) and non-recurrent (input-to-hidden) weights of recurrent layers separately, and regularize them with different strengths.\\n\\nOur goal in this paper is to lay the groundwork to attract more attention from the community and invite further study of the technique we present as well as the techniques of Prabhavalkar et al. and others that we build upon. The combination of these techniques, as we have shown, could also be potentially useful for speeding up the training of large networks. We hope that our work will promote the use of factorized matrices in the research community, resulting in a more compact representation of neural networks.\"}",
"{\"title\": \"Explanation of content prioritization and organization\", \"comment\": \"Thank you for your very thorough review and detailed feedback.\\n\\nBefore addressing specific questions and comments, we would like to elaborate more on why we chose this venue to present our work and on the principles behind our paper\\u2019s organization. We agree that a lot of attention is being paid to the speech-specific aspects in our paper. This is natural given the title and since all of the experimental results we report are for speech recognition. However, it is our firm conviction that the technique we present as well as the techniques of Prabhavalkar et al. are much more broadly applicable.\\n\\nConsequently, we had originally somewhat broader ambitions. We hoped to compare sparsity and low-rank factorization in more detail, on both speech recognition and other tasks like language modelling (on, say, Penn Treebank or the billion words corpus). Due to resource constraints, we could not run all experiments we hoped to and had to prioritize. Our organizing principle was to put what we thought was of more general interest in the main text, and shift the more preliminary work and the work that we felt was very speech-specific to appendices. We think that material is nonetheless valuable and we hope our inclusion of it can invite further research from the community to expand upon the issues raised and the solutions offered.\\n\\nWhat is in the main text of the paper, we believe could in principle be applied just as well to any other deep models involving dense or recurrent layers.\", \"regarding_your_specific_points\": \"1. As described above, it was a conscious decision to not focus too much on the speech-specific aspects in the main text of the paper. We plan to add some more details to the appendix later on, but detailed experiments we will need to relegate, as you suggested, to a possible future speech conference submission. 
Regarding section 4: We do wish we could have reported trace-norm-regularized results here, but in the end that would have cannibalized too many resources from other higher-priority experiments. (A single training run on our large 10,000+ hour speech datasets may occupy 16 GPU\\u2019s and take, with interruptions, around 3 to 4 weeks to complete.) As a result we introduced the newly developed kernels with only a few models we had already started training before the techniques in Section 3 were developed. \\n\\n2. Unfortunately, due to training on different types of hardware with various interruptions, we could not compile meaningful wall-clock training time comparisons.\\n\\n3. Correct, for WSJ we did not use a language model. As we were interested in relative performance of different compression techniques for the acoustic model only, we decided to keep the WSJ experiments as simple as possible.\\n\\n4. Figure 1 shows the lambda values examined for stage 1. Yes, for Figure 2 we show the variation with respect to one of the lambda's when the other lambda is fixed at 0.\\n\\n5. Thank you for the suggestion. We have fixed Figure 4 (please see our response to Reviewer 2 for further details) and the behavior for all points is more consistent now. All the stage 1 models used for warm-starting the points in this figure had below 6.8 final CER. We look at stage 1 CER vs. stage 2 CER in response to your point 7 below, where the effect is more interesting.\\n\\n6. This is a great suggestion for follow-up work. Unfortunately, due to resource constraints, we could not pursue this for the present paper.\\n\\n7. Great suggestion. We have updated Figure 5 to include this information. As is clearer now, the stage 1 models trained for only a few epochs are really very far from being fully converged and yet are still good enough to be used for warm-starting successful stage 2 models.\\n\\n8. Good point. 
As the relative WER losses we saw from compressing the language and acoustic models were much larger than the relative loss from quantization, we chose not to pursue quantization further for this particular study. However, as you suggest, we should at least point to the relevant literature. We have added these citations and clarified this in the text.\\n\\n9. Thank you for pointing this out. We have clarified this in the text.\"}"
]
} |
BJB7fkWR- | Domain Adaptation for Deep Reinforcement Learning in Visually Distinct Games | [
"Dino S. Ratcliffe",
"Luca Citi",
"Sam Devlin",
"Udo Kruschwitz"
] | Many deep reinforcement learning approaches use graphical state representations;
this means visually distinct games that share the same underlying structure cannot
effectively share knowledge. This paper outlines a new approach for learning
underlying game state embeddings irrespective of the visual rendering of the game
state. We utilise approaches from multi-task learning and domain adaptation in
order to place visually distinct game states on a shared embedding manifold. We
present our results in the context of deep reinforcement learning agents. | [
"Deep Reinforcement Learning",
"Domain Adaptation",
"Adversarial Networks"
] | Reject | https://openreview.net/pdf?id=BJB7fkWR- | https://openreview.net/forum?id=BJB7fkWR- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BJLq8yaSz",
"H1xFygmyz",
"SyZl4CKeM",
"BJ5RWXilM"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1517250190442,
1510305655712,
1511805928742,
1511891410443
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper477/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper477/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper477/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The reviewers have found that while the task of visual domain adaptation is meaningful to explore and improve, the proposed method is not sufficiently well-motivated, explained or empirically tested.\"}",
"{\"title\": \"This paper contains interesting ideas, but it is not ready for publication.\", \"rating\": \"3: Clear rejection\", \"review\": \"In this paper, the authors propose a new approach for learning the underlying structure of visually distinct games.\\n\\nThe proposed approach combines convolutional layers for processing input images, Asynchronous Advantage Actor Critic for the deep reinforcement learning task, and an adversarial approach to force the embedding representation to be independent of the visual representation of the games.\\n\\nThe network architecture is suitably described and seems reasonable for simultaneously learning similar games that are visually distinct. However, the authors do not explain how this architecture can be used for domain adaptation.\\nIndeed, if some games have been learnt by the proposed algorithm, the authors do not specify which modules have to be retrained to learn a new game. This is a critical issue, because the experiments show that there is no gain in performance from learning a shared embedding manifold (see DA-DRL versus baseline in figure 5).\\nIf there is a gain from learning a shared embedding manifold, which is plausible, this gain should be evaluated between a baseline that learns the games separately and an algorithm that learns the games incrementally.\\nMoreover, in the experimental setting, the games are not similar but simply the same.\\n\\nMy opinion is that this paper is not ready for publication. The interesting issues are deferred to future work.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea maybe? Very poor experimental section.\", \"rating\": \"2: Strong rejection\", \"review\": \"This paper introduces a method to learn a policy on visually different but otherwise identical games. While the idea would be interesting in general, unfortunately the experiment section is very much a toy example, so it is hard to know the applicability of the proposed approach to any more reasonable scenario. Any sort of remotely convincing experiment is left to 'future work'.\\n\\nThe experimental setup is a 4x4 grid world with different basic shapes or grey-level renderings. I am quite convinced that any somewhat correctly set up vanilla deep RL algorithm would solve these sorts of tasks/ensembles of tasks almost instantly out of the box.\", \"figure_5\": \"Looks to me like the baseline is actually doing much better than the proposed methods?\", \"figure_6\": \"Looking at those 2D PCAs, I am not sure any of those methods really abstracts the rendering away. Anyway, it would be good to have a quantified metric on this, which is not just eyeballing PCA scatter plots.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": [\"This paper discusses an agent architecture which uses a shared representation to train multiple tasks with different sprite-level visual statistics. The key idea is that the agent learns a shared representation for tasks with different visual statistics.\", \"A lot of important references touching on very similar ideas are missing, e.g. \\\"Unsupervised Pixel-level Domain Adaptation with Generative Adversarial Networks\\\", \\\"Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping\\\", \\\"Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics\\\".\", \"This paper has a lot of orthogonal details. For instance, sec 2.1 reviews the history of games and AI, which is beside the point and does not provide any context from the literature.\", \"Only single runs for the results are shown in plots. How statistically valid are the results?\", \"In the last section the authors mention the intent to do future work on Atari and other environments. Given that this general idea has been discussed in the literature several times, it seems imperative to at least scale up the experiments before the paper is ready for publication.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
r1q7n9gAb | The Implicit Bias of Gradient Descent on Separable Data | [
"Daniel Soudry",
"Elad Hoffer",
"Mor Shpigel Nacson",
"Nathan Srebro"
] | We show that gradient descent on an unregularized logistic regression
problem, for almost all separable datasets, converges to the same direction as the max-margin solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalization to the cross-entropy loss. Furthermore,
we show this convergence is very slow, and only logarithmic in the
convergence of the loss itself. This can help explain the benefit
of continuing to optimize the logistic or cross-entropy loss even
after the training error is zero and the training loss is extremely
small, and, as we show, even if the validation loss increases. Our
methodology can also aid in understanding implicit regularization
in more complex models and with other optimization methods. | [
"gradient descent",
"implicit regularization",
"generalization",
"margin",
"logistic regression",
"loss functions",
"optimization",
"exponential tail",
"cross-entropy"
] | Accept (Poster) | https://openreview.net/pdf?id=r1q7n9gAb | https://openreview.net/forum?id=r1q7n9gAb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"ByMhM0wGz",
"HyBrwGweG",
"S1jezarxG",
"HJxIf0wGz",
"SymFQAPfz",
"B1ToQy6SG",
"HkS9oWtef"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_review"
],
"note_created": [
1513771690412,
1511626557375,
1511539187063,
1513771591932,
1513771899210,
1517249445263,
1511754637358
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper358/Authors"
],
[
"ICLR.cc/2018/Conference/Paper358/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper358/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper358/Authors"
],
[
"ICLR.cc/2018/Conference/Paper358/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper358/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Comment addressed in revision\", \"comment\": \"We thank the reviewer for the positive review and for the helpful comment. We uploaded a revised version in which clarified in the abstract that the weights converge \\u201cin direction\\u201d to the L2 max margin solution.\"}",
"{\"title\": \"Very interesting characterisation of limiting behaviour of the log-loss minimisation\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper focuses on characterising the behaviour of log-loss minimisation on linearly separable data. As we know, optimisation like this does not converge in a strict mathematical sense, as the norm of the model will grow to infinity. However, one can still hope for convergence of the normalised solution (or equivalently - convergence in terms of the separator angle, rather than the parametrisation). This paper shows that indeed, the log-loss (and some other similar losses), minimised with gradient descent, leads to convergence (in the above sense) to the max-margin solution. On one hand it is an interesting property of the models we train in practice, and on the other - it provides a nice link between two separate learning theories.\", \"pros\": [\"easy to follow line of argument\", \"very interesting result of mapping the \\\"solution\\\" of unregularised logistic regression (under gradient descent optimisation) onto the hard max-margin one\"], \"cons\": [\"it is not clear in the abstract, and beginning of the paper, what \\\"convergence\\\" means, as in the strict sense logistic regression optimisation never converges on separable data. It would be beneficial for clarity if the authors define what they mean by convergence (normalised weight vector, angle, whichever path seems most natural) as early in the paper as possible.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting paper, but issues with correctness and presentation\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper offers a formal proof that gradient descent on the logistic\\nloss converges very slowly to the hard SVM solution in the case where\\nthe data are linearly separable. This result should be viewed in the\\ncontext of recent attempts at trying to understand the generalization\\nability of neural networks, which have turned to trying to understand\\nthe implicit regularization bias that comes from the choice of\\noptimizer. Since we do not even understand the regularization bias of\\noptimizers for the simpler case of linear models, I consider the paper's\\ntopic very interesting and timely.\\n\\nThe overall discussion of the paper is well written, but on a more\\ndetailed level the paper gives an unpolished impression, and has many\\ntechnical issues. Although I suspect that most (or even all) of these\\nissues can be resolved, they interfere with checking the correctness of\\nthe results. Unfortunately, in its current state I therefore do not\\nconsider the paper ready for publication.\", \"technical_issues\": \"The statement of Lemma 5 has a trivial part and for the other part the\", \"proof_is_incorrect\": \"Let x_u = ||nabla L(w(u))||^2.\\n - Then the statement sum_{u=0}^t x_u < infinity is trivial, because\\n it follows directly from ||nabla L(w(u))||^2 < infinity for all u. I\\n would expect the intended statement to be sum_{u=0}^infinity x_u <\\n infinity, which actually follows from the proof of the lemma.\\n - The proof of the claim that t*x_t -> 0 is incorrect: sum_{u=0}^t x_u\\n < infinity does not in itself imply that t*x_t -> 0, as claimed. For\\n instance, we might have x_t = 1/i^2 when t=2^i for i = 1,2,... and\\n x_t = 0 for all other t.\\n\\nDefinition of tilde{w} in Theorem 4:\\n - Why would tilde{w} be unique? 
In particular, if the support vectors\\n do not span the space, because all data lie in the same\\n lower-dimensional hyperplane, then this is not the case.\\n - The KKT conditions do not rule out the case that \\\\hat{w}^top x_n =\\n 1, but alpha_n = 0 (i.e. a support vector that touches the margin,\\n but does not exert force against it). Such n are then included in\\n cal{S}, but lead to problems in (2.7), because they would require\\n tilde{w}^top x_n = infinity, which is not possible.\\n\\nIn the proof of Lemma 6, case 2. at the bottom of p.14:\\n - After the first inequality, C_0^2 t^{-1.5 epsilon_+} should be \\n C_0^2 t^{-epsilon_+}\\n - After the second inequality the part between brackets is missing an\\n additional term C_0^2 t^{-\\\\epsilon_+}.\\n - In addition, the label (1) should be on the previous inequality and\\n it should be mentioned that e^{-x} <= 1-x+x^2 is applied for x >= 0\\n (otherwise it might be false).\\nIn the proof of Lemma 6, case 2 in the middle of p.15:\\n - In the line of inequality (1) there is a t^{-epsilon_-} missing. In\\n the next line there is a factor t^{-epsilon_-} too much.\\n - In addition, the inequality e^x >= 1 + x holds for all x, so no need\\n to mention that x > 0.\", \"in_lemma_1\": [\"claim (3) should be lim_{t \\\\to \\\\infty} w(t)^\\\\top x_n = infinity\", \"In the proof: w(t)^top x_n > 0 only holds for large enough t.\"], \"remarks\": \"p.4 The claim that \\\"we can expect the population (or test)\\nmisclassification error of w(t) to improve\\\" because \\\"the margin of w(t)\\nkeeps improving\\\" is worded a little too strongly, because it presumes\\nthat the maximum margin solution will always have the best\\ngeneralization error.\\n\\nIn the proof sketch (p.3):\\n - Why does the fact that the limit is dominated by gradients that are\\n a linear combination of support vectors imply that w_infinity will\\n also be a non-negative linear combination of support vectors?\\n - \\\"converges to some limit\\\". 
Mention that you call this limit\\n w_infinity\", \"minor_issues\": \"In (2.4): add \\\"for all n\\\".\\n\\np.10, footnote: Shouldn't \\\"P_1 = X_s X_s^+\\\" be something like \\\"P_1 =\\n(X_s^top X_s)^+\\\"?\\n\\nA.9: ell should be ell'\\n\\nThe paper needs a round of copy editing. For instance:\\n - top of p.4: \\\"where tilde{w} A is the unique\\\"\\n - p.10: \\\"the solution tilde{w} to TO eq. A.2\\\"\\n - p.10: \\\"might BOT be unique\\\"\\n - p.10: \\\"penrose-moorse pseudo inverse\\\" -> \\\"Moore-Penrose\\n pseudoinverse\\\"\\n \\nIn the bibliography, Kingma and Ba is cited twice, with different years.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Comments addressed in revision\", \"comment\": \"We thank the reviewer for the positive review and for the helpful comments. We uploaded a revised version in which all the reviewer comments were addressed.\\n\\n[\\u201cI\\u2019m curious whether the result in this paper can be applied to other loss functions, such as hinge loss.\\u201d]\\n\\nWe believe our results could be extended to many other types of loss functions (in fact, we are currently working on such extensions). However, for the hinge loss (without regularization), gradient descent on separable data can converge to a finite solution which is not to the max margin vector. For example, if there is a single data point x=(1,0), and we start with a weight vector w=(2,2), the hinge loss and its gradient are both equal to zero. Therefore, no weight updates are performed, and we do not converge to the direction of the L2 max margin classifier: w=(1,0).\\n\\n[\\u201cIt is better for the authors to use another section to illustrate experimental settings instead of writing them in the caption of Figure 3.1. \\u201c]\\n\\nWe felt it is easier to read if all details are summarized in the figure, and wanted to save space to fit the main paper into 8 pages. However, we can change this if required.\"}",
"{\"title\": \"Comments addressed in revision\", \"comment\": \"We thank the reviewer for acknowledging the significance of our results, and for investing significant effort in improving the quality of this manuscript. We uploaded a revised version in which all the reviewer comments were addressed, and the appendix was further polished. Notably,\\n\\n[Lemma 5 in appendix]\\n\\n- Indeed, the upper limit of the sum over x_u should be 'infinity' instead of 't'.\\n\\n- It should be 'x_t -> 0', not 't*x_t -> 0'.\\n\\n[Definition of tilde{w} in Theorem 4]\\n\\n- tilde{w} is indeed unique, given the initial conditions. We clarified this in Theorem 4 and its proof.\\n\\n- alpha_n=0 for the support vectors is only true for a measure zero of all datasets (we added a proof of this in appendix F). Thus, we clarified in the revision that our results hold for almost every dataset (and so, they are true with probability 1 for any data drawn from a continuous-valued distribution).\\n\\n[Why does the fact that the limit is dominated by gradients that are a linear combination of support vectors imply that w_infinity will also be a non-negative linear combination of support vectors?]\", \"we_clarified_in_the_revision\": \"\\u201c...The negative gradient would then asymptotically become a non-negative linear combination of support vectors. The limit w_{\\\\infinity} will then be dominated by these gradients, since any initial conditions become negligible as ||w(t)||->infinity (from Lemma 1)\\u201d.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper is tackling an important open problem.\\n\\nAnonReviewer3 identified some technical issues that led them to rate the manuscript 5 (i.e., just below the acceptance threshold). Many of these issues are resolved by the reviewer in their review, and the author response makes it clear that these fixes are indeed correct. However, other issues that the reviewer raises are not provided with solutions. The authors address these points, but in one case at least (regarding w_infinity), I find the new text somewhat hand-waivy. Regardless, I'm inclined to accept the paper because the issues seem to be straightforward. Ultimately, the authors are responsible for the correctness of the results.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"This paper analyzes the implicit regularization introduced by gradient descent for optimizing the smooth monotone exponential tailed loss function with separable data. The proposed result is very interesting since it illustrates that using gradient descent to minimize such a loss function can lead to the L_2 maximum margin separator.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"(a) Significance\\nThe main contribution of this paper is to characterize the implicit bias introduced by gradient descent on separable data. The authors show the exact form of this bias (L_2 maximum margin separator), which is independent of the initialization and step size. The corresponding slow convergence rate explains the phenomenon that the predictor can continue to improve even when the training loss is already small. The result of this paper can inspire the study of the implicit bias introduced by gradient descent variants or other optimization methods, such as coordinate descent. In addition, the proposed analytic framework seems promising since it may be extended to analyze other models, like neural networks.\\n\\n(b) Originality\\nThis is the first work to give a detailed characterization of the implicit bias of gradient descent on separable data. The proposed assumptions are reasonable, but the result seems limited to loss functions with an exponential tail. I\\u2019m curious whether the result in this paper can be applied to other loss functions, such as hinge loss.\\n\\n(c) Clarity & Quality \\nThe presentation of this paper is OK. However, there are some places that can be improved in this paper. For example, in Lemma 1, results (3) and (4) can be combined together. It is better for the authors to use another section to illustrate experimental settings instead of writing them in the caption of Figure 3.1.\", \"minor_comments\": \"1. In Lemma 1 (4), w^T(t)->w(t)^T\\n2. 
In the proof of Lemma 1, it\\u2019s better to use vector 0 for the gradient L(w)\\n3. In Theorem 4, the authors should specify eta\\n4. In appendix A, page 11, beta is double used\\n5. In appendix D, equation (D.5) has an extra period\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
BkrsAzWAb | Online Learning Rate Adaptation with Hypergradient Descent | [
"Atilim Gunes Baydin",
"Robert Cornish",
"David Martinez Rubio",
"Mark Schmidt",
"Frank Wood"
] | We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation. | [
"rate adaptation",
"stochastic gradient descent",
"online",
"hypergradient descent online",
"hypergradient descent",
"general",
"convergence rate",
"optimizers",
"easy",
"practice"
] | Accept (Poster) | https://openreview.net/pdf?id=BkrsAzWAb | https://openreview.net/forum?id=BkrsAzWAb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"r1jLC23Jf",
"HJcZnfFXM",
"S1WP2GFQz",
"Hy8WTMFmf",
"BJ6v0V9ef",
"H1PN17VXz",
"r1HU3l1kf",
"HkKfEyprG",
"rkaXMT-lz",
"BydQzcHxf",
"B1wQhWzGM",
"S1sZVWMlz",
"H1pbs28kG"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"decision",
"comment",
"official_comment",
"comment",
"official_comment",
"official_review"
],
"note_created": [
1510948435021,
1514904578083,
1514904664868,
1514904829744,
1511833189294,
1514577710890,
1510046797377,
1517249553067,
1511277093098,
1511526943610,
1513393183220,
1511293955195,
1510554372703
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper1073/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper1073/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1073/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1073/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1073/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper1073/Authors"
],
[
"~Ricardo_Pio_Monti1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"~Kai_Li2"
],
[
"ICLR.cc/2018/Conference/Paper1073/Authors"
],
[
"~Yi_Lian1"
],
[
"ICLR.cc/2018/Conference/Paper1073/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1073/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Somewhat weak novelty, but well written, complete, and potentially impactful.\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors consider a method (which they trace back to 1998, but may have a longer history) of learning the learning rate of a first-order algorithm at the same time as the underlying model is being optimized, using a stochastic multiplicative update. The basic observation (for SGD) is that if \\\\theta_{t+1} = \\\\theta_t - \\\\alpha \\\\nabla f(\\\\theta_t), then \\\\partial/\\\\partial\\\\alpha f(\\\\theta_{t+1}) = -<\\\\nabla f(\\\\theta_t), \\\\nabla f(\\\\theta_{t+1})>, i.e. that the negative inner product of two successive stochastic gradients is equal in expectation to the derivative of the tth update w.r.t. the learning rate \\\\alpha.\\n\\nI have seen this before for SGD (the authors do not claim that the basic idea is novel), but I believe that the application to other algorithms (the authors explicitly consider Nesterov momentum and ADAM) are novel, as is the use of the multiplicative and normalized update of equation 8 (particularly the normalization).\\n\\nThe experiments are well-presented, and appear to convincingly show a benefit. Figure 3, which explores the robustness of the algorithms to the choice of \\\\alpha_0 and \\\\beta, is particularly nicely-done, and addresses the most natural criticism of this approach (that it replaces one hyperparameter with two).\\n\\nThe authors highlight theoretical convergence guarantees as an important future work item, and the lack of them here (aside from Theorem 5.1, which just shows asymptotic convergence if the learning rates become sufficiently small) is a weakness, but not, I think, a critical one. This appears to be a promising approach, and bringing it back to the attention of the machine learning community is valuable.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your encouraging evaluation and for the improvements suggested.\\n\\n> 1, the derivation of the update of \\\\alpha relies on the expectation formulation. I would like to see the investigation of the effect of the size of minibatch to reveal the variance of the gradient in the algorithm combined with such trick.\\n\\nWe do not have theoretical results about the effect of the minibatch size and gradient variance on the hypergradient descent (HD) algorithm. Considering that the reviewer was potentially referring to experimental evidence, we will make sure to include experimental results with varying minibatch sizes in an appendix in the final revision of this paper.\\n\\n> 2, The derivation of the multiplicative rule of HD relies on a reference I cannot find. Please include this part for self-containing.\\n\\nThank you for pointing this out. The mentioned reference for the multiplicative HD rule is now made accessible online, and can be located with a Google search of the title.\\n\\n> 3, As the authors claimed, the Maclaurin et.al. 2015 is the most related work, however, they are not compared in the experiments. Moreover, the empirical comparisons are only conducted on MNIST. To be more convincing, it will be good to include such competitor and comparing on practical applications on CIFAR10/100 and ImageNet.\\n\\nAs you point out, Maclaurin et al. (2015) is a highly related work, which introduces the term \\u201chypergradient\\u201d and similarly performs gradient-based updates of hyperparameters through a reversible higher-order automatic differentiation setup. \\n\\nHowever, note that in the approach in Maclaurin et al. 
(2015) a regular optimization procedure is truncated to a fixed number N of \\u201celementary\\u201d iterations (such as N = 100 in the paper), at the end of which the derivative of an objective is propagated all the way through this N inner optimization iterations (the \\u201creversibility\\u201d trick introduced in the paper is for making this possible in practice), and the resulting hypergradient is used in an outer optimization of M \\u201cmeta\\u201d iterations (such as M=50 in the paper). Our technique, in contrast, is an online adaptation of a hyperparameter (in particular, the learning rate) at each iteration of optimization, and does not perform derivative propagation through an inner optimization that consists of many iterations. The techniques are thus not directly comparable as competing alternatives. For instance, it is not straightforward to replicate our learning rate trajectory through the VGGNet/CIFAR-10 experiment of 78125 iterations (Figure 2 on page 7, rightmost column) in the reversible learning algorithm due to (1) uninformative gradients beyond a few hundred iterations (see Section 4 \\u201cLimitations\\u201d in Maclaurin et al. 2015) and (2) potentially prohibitive memory requirements. Having said this, we believe that it would be interesting to compare the behavior of our algorithm for the initial 100 iterations with the 100-iteration learning-rate schedules reported in Maclaurin et al. (2015) and we intend to add such an experiment in the appendix in the final revision of the paper.\\n\\n> Moreover, the empirical comparisons are only conducted on MNIST. \\n\\nPlease note that the paper does report non-MNIST empirical comparisons, specifically CIFAR-10 (Section 4.3 on page 8 and Figure 2 on page 7).\\n\\n> Minors: In the experiments results figures, after adding the new trick, the SGD algorithms become more stable, i.e., the variance diminishes. 
Could you please explain why such phenomenon happens?\\n\\nAs far as we can observe, the variance does not diminish, and the method behaves in a similar way to how regular SGD does with a good choice of the learning rate, as for example 10e-2 in the case of logistic regression. We would be interested in looking into this more carefully if you could point us to an experiment/figure where this behavior with SGD happens.\\n\\nThank you once more for all these constructive comments and suggested additions that allow us to improve the paper.\"}",
"{\"title\": \"Thank you\", \"comment\": \"> I have seen this before for SGD (the authors do not claim that the basic idea is novel), but I believe that the application to other algorithms (the authors explicitly consider Nesterov momentum and ADAM) are novel, as is the use of the multiplicative and normalized update of equation 8 (particularly the normalization).\\n\\n> The experiments are well-presented, and appear to convincingly show a benefit. Figure 3, which explores the robustness of the algorithms to the choice of \\\\alpha_0 and \\\\beta, is particularly nicely-done, and addresses the most natural criticism of this approach (that it replaces one hyperparameter with two).\\n\\nThank you very much for your evaluation and your encouraging feedback.\\n\\nFigure 3 was produced with exactly the purpose that you described, and we are very glad that this was noticed and found useful.\\n\\n> The authors highlight theoretical convergence guarantees as an important future work item, and the lack of them here (aside from Theorem 5.1, which just shows asymptotic convergence if the learning rates become sufficiently small) is a weakness, but not, I think, a critical one. This appears to be a promising approach, and bringing it back to the attention of the machine learning community is valuable.\\n\\nWe agree that a theoretical convergence analysis is a highly desired future work and is a limitation of the current paper. We also agree with the assessment that the approach appears promising and therefore we would like to bring it to the attention of the larger community.\"}",
"{\"title\": \"Thank you\", \"comment\": \"> One central problem of the paper is missing novelty. The authors are well aware of this. They still manage to provide added value. Despite its limited novelty, this is a very interesting and potentially impactful paper. I like in particular the detailed discussion of related work, which includes some frequently overlooked precursors of modern methods.\\n\\nThank you very much for your evaluation and encouraging words.\\n\\n> The experimental evaluation is rather solid, but not perfect. It considers three different problems: logistic regression (a convex problem), and dense as well as convolutional networks. That's a solid spectrum. However, it is not clear why the method is tested only on a single data set: MNIST. Since it is entirely general, I would rather expect a test on a dozen different data sets. That would also tell us more about a possible sensitivity w.r.t. the hyperparameters \\\\alpha_0 and \\\\beta.\\n\\nPlease note that we provide experimental evaluation on a non-MNIST data set, specifically CIFAR-10 (Section 4.3 on page 8 and Figure 2 on page 7).\\n\\n> The extensions in section 5 don't seem to be very useful. In particular, I cannot get rid of the impression that section 5.1 exists for the sole purpose of introducing a convergence theorem. Analyzing the actual adaptive algorithm would be very interesting. In contrast, the present result is trivial and of no interest at all, since it requires knowing a good parameter setting, which defeats a large part of the value of the method.\\n\\nWe agree with your assessment that the analysis in Section 5.1 is significantly restricted and this is a limitation of the current paper. There remains much to be done in this respect, and a theoretical convergence analysis is a highly desired future work. 
Please note that a convergence analysis of the technique in the multidimensional quadratic case is available in a separate work, which we will highlight prominently in the de-anonymized final revision of the paper.\\n\\n> MINOR POINTS\\n\\nThank you for pointing these out, we will fix them in the final revision.\"}",
"{\"title\": \"interesting idea, but weak experiments\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper revisits an interesting and important trick to automatically adapt the stepsize. They consider the stepsize as a parameter to be optimized and apply stochastic gradient update for the stepsize. Such simple trick alleviates the effort in tuning stepsize, and can be incorporated with popular stochastic first-order optimization algorithms, including SGD, SGD with Nestrov momentum, and Adam. Surprisingly, it works well in practice.\\n\\nAlthough the theoretical analysis is weak that theorem 1 does not reveal the main reason for the benefits of such trick, considering their performance, I vote for acceptance. But before that, there are several issues need to be addressed. \\n\\n1, the derivation of the update of \\\\alpha relies on the expectation formulation. I would like to see the investigation of the effect of the size of minibatch to reveal the variance of the gradient in the algorithm combined with such trick. \\n\\n2, The derivation of the multiplicative rule of HD relies on a reference I cannot find. Please include this part for self-containing. \\n\\n3, As the authors claimed, the Maclaurin et.al. 2015 is the most related work, however, they are not compared in the experiments. Moreover, the empirical comparisons are only conducted on MNIST. To be more convincing, it will be good to include such competitor and comparing on practical applications on CIFAR10/100 and ImageNet.\", \"minors\": \"In the experiments results figures, after adding the new trick, the SGD algorithms become more stable, i.e., the variance diminishes. Could you please explain why such phenomenon happens?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you very much for your time and for reporting your results. This sort of validation is extremely valuable for us and the community.\\n\\nFollowing the decision notification, we will make a repository public with the full code in Python (including the plotting codes that we used for producing the plots in the paper). We will also add information about the hardware setup that was used for running the presented experiments.\"}",
"{\"title\": \"Nice paper!\", \"comment\": \"Dear authors,\\n\\nThank you for this paper, I really enjoyed it! :)\", \"i_have_two_small_comments\": [\"A related field which may provide additional insights in that of Adaptive filter theory [1]. A particularly relevant example would be the use of adaptive forgetting factors, where gradient information is used to tune a forgetting factor recursively.\", \"A further interesting application for the proposed method could be in the context of non-stationary data. In such a setting, it may be desirable to allow the learning to rate to increase if necessary (as would be the case if, for example, the underlying data distribution changed). Potential scenarios where this could happen are streaming data applications (where model parameters are constantly updated to take into consideration new observations/drifts in the distribution) or transfer learning applications.\", \"Best wishes and good luck!\"], \"references\": \"1. Adaptive Filter Theory, Simon Haykin, Prentice Hall, 2008\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"All reviewers agreed that, despite the lack of novelty, the proposed method is sound and correctly linked to existing work. As the topic of automatically learning the stepsize is of great practical interest, I am glad to have this paper presented as a poster at ICLR.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"why multiplicative adaptation is in general faster than the additive adaptation\\uff1f\", \"comment\": \"One of the practical advantages of this multiplicative\\nrule is that it is invariant up to rescaling and that the multiplicative adaptation is in general faster than\\nthe additive adaptation. Why?\"}",
"{\"title\": \"Thank you\", \"comment\": \"Hi, both are very interesting potential applications!\\n\\nI think an application to non-stationary data, where the learning rate varies on the fly as new data comes in, would be very interesting indeed. We will keep this in mind. \\n\\nWe're also looking at adaptive filter theory.\\n\\nThank you very much for the pointers.\"}",
"{\"title\": \"Successfully reproduced!\", \"comment\": \"This paper introduces an adaptive method to adjust the learning rate of machine learning algorithms, and aims to improve the convergence time and reduce manual tuning of learning rate. The idea is simple and straightforward: to automatically update the learning rate by performing gradient descent on the learning rate alongside the gradient descent procedure of the parameters of interest. This is achieve by introducing a new hyperparameter and specifying an initial learning rate. The idea is intuitive and the implementation is not hard.\\nWe find that the way the experiments are set\\u00adup and described facilitates reproducibility. The data sets in the experiment are all publicly available, partitioning information of training and test data sets are clearly stated except the randomization control of training set for each experiment. Authors implemented and documented the Lua code of the proposed optimization algorithms for SGD, SGDN and Adam, and made those codes available within the torch.optim package on github. The python version of AdamHD can also be found publicly online. Since we do not have programming experience using Lua, we implemented the python version of SGDHD and SGDNHD by ourselves following the paper pseudocode, but we cannot guarantee that our implementation based on our understanding is exactly the same as the authors'. However, the code that authors used to generate the exact plots and graphs to illustrate their experiment results are not available. Thus we also implemented this part of code ourselves according to paper. Most parameters (including hyperparameters) used in experiments were given. 
We would suggest authors to include more hardware-specific information used to run their experiments in the paper, including time, memory, GPU and type of machine.\\nIt is not hard to replicate the results shown in the original paper, with some effort to apply machine learning methods embedded in the Torch or PyTorch library on the given data set. Based on the results, it is great to see that most of the experiments in the study are reproducible. Specifically, the change of learning rate and training/validation loss in our replication generally follows a similar pattern to that in the paper. For example, the learning rate increases in the first few epochs in logistic regression and neural networks using SGDHD. Also, the learning rate and training/validation loss tends to oscillate starting at some point in the paper and our results shows the same pattern. However, there are also instances where the non-HD version of the optimizers perform better than the HD counterparts.\\nOverall, the paper is well-written, provides a promising algorithm that works at least as well as existing gradient-descent-based optimization algorithms that use a fixed global learning rate. The authors claim that an important future work is to investigate the theoretical convergence guarantees of their algorithm, which is indeed very insightful. I am hoping that the authors can also justify the theoretical support behind the adaptation of the learning rate in the sense that to what they are trying to adapt the learning rate.\"}",
"{\"title\": \"why multiplicative adaptation is in general faster than the additive adaptation\\uff1f\", \"comment\": \"You only need a logaritmic number of iterations to shift your current learning rate to another value, instead of a linear number of them. We have also seen in practice that with good hyperparameters for both implementations, the multiplicative rule adapts faster. There is also a theoretical reason that comes from the formal derivation of the rule that suggests that the multiplicative rule makes more sense than the additive one.\"}",
"{\"title\": \"good, but not perfect\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"SUMMARY:\\n\\nThe authors reinvent a 20 years old technique for adapting a global or component-wise learning rate for gradient descent. The technique can be derived as a gradient step for the learning rate hyperparameter, or it can be understood as a simple and efficient adaptation technique.\", \"general_impression\": \"One central problem of the paper is missing novelty. The authors are well aware of this. They still manage to provide added value.\\nDespite its limited novelty, this is a very interesting and potentially impactful paper. I like in particular the detailed discussion of related work, which includes some frequently overlooked precursors of modern methods.\", \"criticism\": \"The experimental evaluation is rather solid, but not perfect. It considers three different problems: logistic regression (a convex problem), and dense as well as convolutional networks. That's a solid spectrum. However, it is not clear why the method is tested only on a single data set: MNIST. Since it is entirely general, I would rather expect a test on a dozen different data sets. That would also tell us more about a possible sensitivity w.r.t. the hyperparameters \\\\alpha_0 and \\\\beta.\\n\\nThe extensions in section 5 don't seem to be very useful. In particular, I cannot get rid of the impression that section 5.1 exists for the sole purpose of introducing a convergence theorem. Analyzing the actual adaptive algorithm would be very interesting. In contrast, the present result is trivial and of no interest at all, since it requires knowing a good parameter setting, which defeats a large part of the value of the method.\", \"minor_points\": \"page 4, bottom: use \\\\citep for Duchi et al. (2011).\\n\\nNone of the figures is legible on a grayscale printout of the paper. 
Please do not use color as the only cue to identify a curve.\\n\\nIn figure 2, top row, please display the learning rate on a log scale.\\n\\npage 8, line 7 in section 4.3: \\\"the the\\\" (unintended repetition)\", \"end_of_section_4\": \"an increase from 0.001 to 0.001002 is hardly worth reporting - or am I missing something?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
HJMN-xWC- | Learning Parsimonious Deep Feed-forward Networks | [
"Zhourong Chen",
"Xiaopeng Li",
"Nevin L. Zhang"
] | Convolutional neural networks and recurrent neural networks are designed with network structures well suited to the nature of spacial and sequential data respectively. However, the structure of standard feed-forward neural networks (FNNs) is simply a stack of fully connected layers, regardless of the feature correlations in data. In addition, the number of layers and the number of neurons are manually tuned on validation data, which is time-consuming and may lead to suboptimal networks. In this paper, we propose an unsupervised structure learning method for learning parsimonious deep FNNs. Our method determines the number of layers, the number of neurons at each layer, and the sparse connectivity between adjacent layers automatically from data. The resulting models are called Backbone-Skippath Neural Networks (BSNNs). Experiments on 17 tasks show that, in comparison with FNNs, BSNNs can achieve better or comparable classification performance with much fewer parameters. The interpretability of BSNNs is also shown to be better than that of FNNs. | [
"Parsimonious Deep Feed-forward Networks",
"structure learning",
"classification",
"overfitting",
"fewer parameters",
"high interpretability"
] | Reject | https://openreview.net/pdf?id=HJMN-xWC- | https://openreview.net/forum?id=HJMN-xWC- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"rkDPp89xz",
"r179HyTBf",
"Bku44r3Mf",
"HkQdJFuef",
"BJ25hzHWf",
"S1BSve3mz",
"S1nXjXVbz",
"SyTaKQVZf",
"BJna8g27G"
],
"note_type": [
"official_review",
"decision",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1511841118772,
1517249930666,
1514062895701,
1511718762737,
1512545430716,
1515091773291,
1512483620418,
1512483270628,
1515091651698
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper555/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper555/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper555/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper555/AnonReviewer4"
],
[
"ICLR.cc/2018/Conference/Paper555/Authors"
],
[
"ICLR.cc/2018/Conference/Paper555/Authors"
],
[
"ICLR.cc/2018/Conference/Paper555/Authors"
],
[
"ICLR.cc/2018/Conference/Paper555/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Learning Parsimonious Deep Feed-forward Networks\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper introduces a skip-connection based design of fully connected networks, which is loosely based on learning latent variable tree structure learning via mutual information criteria. The goal is to learn sparse structures across layers of fully connected networks. Compared to prior work (hierarchical latent tree model), this work introduces skip-paths.\\nAuthors refer to prior work for methods to learn this backbone model. Liu et.al (http://www.cse.ust.hk/~lzhang/ltm/index.htm) and Chen et.al. (https://arxiv.org/abs/1508.00973) and (https://arxiv.org/pdf/1605.06650.pdf). \\n\\nAs far as I understand, the methods for learning backbone structure and the skip-path are performed independently, i.e. there is no end-to-end training of the structure and parameters of the layers. This will limit the applicability of the approach in most applications where fully connected networks are currently used. \\n\\nOriginality - The paper heavily builds upon prior work on hierarchical latent tree analysis and adds 'skip path' formulation to the architecture, however the structure learning is not performed end-to-end and in conjunction with the parameters. \\n\\nClarity - The paper is not self-contained in terms of methodology.\\n\\nQuality and Significance - There is a disconnect between premise of the paper (improving efficiency of fully connected layers by learning sparser structures) and applicability of the approach (slow EM based method to learn structure first, then learn the parameters). As is, the applicability of the method is limited. 
\\nAlso in terms of experiments, there is not enough exploration of simpler sparse learning methods such as heavy regularization of the weights.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"I am inclined to agree with R1 that there is an extensive literature on learning architectures now, and I have seen two others as part of my area chairing. This paper does not offer comparisons to existing methods for architecture learning other than very basic ones and that reduces the strength of the paper significantly. Further the broad exploration over 17 tasks is more overwhelming, than adding to an insight into the methods.\"}",
"{\"title\": \"Learning Parsimonious Deep Feed-forward Networks\", \"comment\": \"My confusion with your point #1 is a simple fact that you are proposing a method of constructing a NN using some form of a cost function. There is a lot of literature where people are trying to build NN using target evaluation metric as the cost function. For classification tasks this would be classification accuracy. Building NN here consist of adding/removing layers, changing learning rates, etc. These are so called architecture search methods. I am aware that these methods are more expensive yet they attempt to come up with a custom architecture for given problem like your method does. As such I expected to see more discussion in this directions.\\n\\nMy confusion with your point#2 stems from you claiming to provide validation on 17 different tasks, \\n12 out of these 17 tasks come from Tox21 data set. Let us look at table 3, what are NR.AhR, NR.AR, ..., SR.p53 tasks, how important is improvement on NR.AhR, is improvement on NR.AR more important than it is on NR.AhR, how significant is the difference between 0.8930 and 0.8843, what is state-of-the art on each of these sets (for feedforward, also other models), is this really correct that BSNN has 338K on all these tasks. For Table 4 similarly, what is state-of-the-art here?\\n\\nYou treat interpretability rather seriously in this paper so I do think you need to refer to other work done in that area. Second of all, given the way you treat interpretability I would expect you conducting some subjective evaluations by asking human subject to rank models based on the way they group words. I find it hard to be convinced given similarity values such as 0.1729, 0.1632, 0.1553 you compute by means of embeddings derived from word2vec model.\"}",
"{\"title\": \"Learning Parsimonious Deep Feed-forward Networks\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"There is a vast literature on structure learning for constructing neural networks (topologies, layers, learning rates, etc.) in an automatic fashion. Your work falls under a similar category. I am a bit surprised that you have not discussed it in the paper not to mention provided a baseline to compare your method to. Also, without knowing intricate details about each of 17 tasks you mentioned it is really hard to make any judgement as to how significant is improvement coming from your approach. There has been some work done on constructing interpretable neural networks, such as stimulated training in speech recognition, unfortunately these are not discussed in the paper despite interpretability being considered important in this paper.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Needs improvement\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The main strengths of the paper are the supporting experimental results in comparison to plain feed-forward networks (FNNs). The proposed method is focused on discovering sparse neural networks. The experiments show that sparsity is achieved and still the discovered sparse networks have comparable or better performance compared to dense networks.\\n\\nThe main weakness of the paper is lack of cohesion in contributions and difficulty in delineating the scope of their proposed approach.\", \"below_are_some_suggestions_for_improving_the_paper\": \"Can you enumerate the paper\\u2019s contributions and specify the scope of this work? Where is this method most applicable and where is it not applicable?\\n\\nWhy is the paper focused on these specific contributions? What problem does this particular set of contributions solve that is not solvable by the baselines? There needs to be a cohesive story that puts the elements together. For example, you explain how the algorithm for creating the backbone can use unsupervised data. On the other hand, to distinguish this work from the baselines you mention that this work is the first to apply the method to supervised learning problems.\\n\\nThe motivation section in the beginning of the paper motivates using the backbone structure to get a sparse network. However, it does not adequately motivate the skip-path connections or applications of the method to supervised tasks.\\n\\nIs this work extending the applicability of baselines to new types of problems? Or is this work focused on improving the performance of existing methods? Answers to these questions can automatically determine suitable experiments to run as well. It's not clear if Pruned FNNs are the most suitable baseline for evaluating the results. Can your work be compared experimentally with any of the constructive methods from the related work section? 
If not, why?\\n\\nWhen contrasting this work with existing approaches, can you explain how existing work builds toward the same solution that you are focusing on? It would be more informative to explain how the baselines contribute to the solution instead of just citing them and highlighting their differences.\\n\\nRegarding the experimental results, is there any insight on why the dense networks are falling short? For example, if it is due to overfitting, is there a correlation between performance and size of FNNs? Do you observe a similar performance vs FNNs in existing methods? Whether this good performance is due to your contributions or due to effectiveness of the baseline algorithm, proper analysis and discussion is required and counts as useful research contribution.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"RE: AnonReviewer4's review\", \"comment\": \"Thank you for your suggestions.\"}",
"{\"title\": \"RE: AnonReviewer1's review\", \"comment\": \"Thank you for your reviews.\\n\\n#Discussion of literature on structure learning for neural networks is missing#\\nNo. We Do have the discussion covering most of the important methods, e.g. constructive algorithm (Ash, 1989; Bello, 1992; Kwok & Yeung, 1997), RL algorithm, Genetic algorithm, pruning and so on in our Related Works section. Please take a look at it. And we also compare with a baseline method (pruning) in our experiments.\\n\\n#Unclear tasks and unclear improvement#\\nNo. Firstly, text classification is well studied in the literature and is not at all a mysterious task. In addition, the five large-scale text datasets we included are among the most important text classification datasets nowadays. The Tox21 dataset is also studied in a famous NIPS2017 paper, Self-Normalizing Neural Networks (SELU), in a similar setting. Secondly, we want to emphasize that the goal of this paper is NOT to propose state-of-the-art solutions to the 17 classification tasks, but to propose a structure learning method and compare it with baselines on the 17 tasks. Last but not the least, even when all the baseline FNN structures are fully tuned over the validation data, our method still achieves better/comparable classification performances in all the 17 tasks. This is a clear validation of the effectiveness of our structure learning method, considering the Backbone path in our model contains only 5% of the connections.\\n\\n#Paper on interpretable neural networks are not discussed#\\nThe goal of this paper is to propose a structure learning method for *Parsimonious* neural networks such that the models contain fewer parameters than standard FNNs but still achieve better performance in different tasks. The method is not directly optimizing the structures for interpretability. 
Better interpretability (than baselines) is just one resulting advantage of our method and hence we think it is not necessary to include a heavy discussion on papers about interpretable neural networks. If the reviewer thinks that it is necessary, we will add it in our revision.\"}",
"{\"title\": \"RE: AnonReviewer3's review\", \"comment\": \"Thank you for your reviews.\\n\\n#End-to-end training#\\nWe wish to remind the reviewer that we are proposing an *Unsupervised* structure learning method. One key advantage of unsupervised structure learning is that it can make use of both unlabelled and labelled data, and the learned structure can be transferred to any tasks on the same type of data. Think about the structure of convolutional layer which is used across all kinds of CV tasks. Why don't we train the connectivities of convolutional layer with the parameters in an end-to-end fashion? The reason is that we humans have seen many unlabelled scenes, we know a strong pattern in vision data and hence we design a specific structure suited to that pattern without further learning. Similarly, our method is trying to find such strong patterns in general data other than images and build structures correspondingly, followed by parameter learning in specific tasks. If you train the structure and parameters in an end-to-end manner, then it is supervised learning and task-specific, which is not what we want.\\n\\nIn addition, compared with an end-to-end method (pruning), our method has achieved higher classification AUC scores in 10 out of 12 tasks and significantly higher interpretability scores in 3 out of 4 tasks. It is clear that the end-to-end method shows no superiority to our method.\\n\\n#Originality#\\nWe want to emphasize the contributions of our paper. Note that prior works on hierarchical latent tree analysis are proposing structure learning methods for Bayesian network, while in this paper we aim at structure learning of deep feed-forward neural networks.\\n1. It is the first time that the latent tree-based structure learning method is applied to multi-layer neural network and supervised learning task (classification). Previous works on such topic are for unsupervised tasks only.\\n2. 
This paper proposes a method for learning multi-layer deep sparse feed-forward neural network. This is different from previous works in that previous works on latent tree model learn either multi-layer tree model (Chen et al. 2017a) or two-layer sparse model Chen et al. (2017b).\\n\\n#Inefficient due to slow EM algorithm.#\\nNo. Firstly, we use *Progressive EM* (Chen et al., 2016) and *Stepwise EM* (similar to SGD) (Sato and Ishii 2000; Cappe and Moulines 2009) in our method. They have been shown to be efficient and can easily scale up for hundreds of thousands of training samples in previous works. Secondly, structure learning is only needed during offline training, and the learned sparse connections can speed up online testing. Besides, our method is proposed not only for efficiency, but also for model fit and model storage.\\n\\n#Regularization of the weights as baseline are missing#\\nNo. The pruning method we compare with is usually regarded as a strong regularization over weights in the literature. The regularization is even stronger than l1 norm as it is producing many weights being exactly 0.\"}",
"{\"title\": \"RE: AnonReviewer1's review\", \"comment\": \"Thank you for your explanations.\\n\\n#1\\nNo. We are NOT \\u201cusing some form of a cost function\\u201d, but proposing an unsupervised learning method. If our understanding is correct, the reviewer is talking about methods of manually validating network structure over validation data. Note that all the baseline FNNs in our experiments are validated over validation data. If the reviewer thinks it necessary to have more discussion in that direction, we will include it.\\n\\n#2\\nWe agree that more introduction and references for the Tox21 dataset would help readers better understand the experiment results. Thank you for your suggestions.\\n\\n#3\\nThank you for your suggestions.\"}"
]
} |
H1LAqMbRW | Latent forward model for Real-time Strategy game planning with incomplete information | [
"Yuandong Tian",
"Qucheng Gong"
] | Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in miniature Real-time Strategy game with incomplete information (MiniRTS). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design training procedure to learn forward models. We also show that our learned forward model can predict meaningful future state and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents. | [
"Real time strategy",
"latent space",
"forward model",
"monte carlo tree search",
"reinforcement learning",
"planning"
] | Reject | https://openreview.net/pdf?id=H1LAqMbRW | https://openreview.net/forum?id=H1LAqMbRW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"rko8LBpXG",
"HJh2yfcgz",
"BJ-32VOxf",
"B1qenWKxM",
"SJzMLypSf"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1515177554934,
1511821235728,
1511701672798,
1511754737742,
1517250058199
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper912/Authors"
],
[
"ICLR.cc/2018/Conference/Paper912/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper912/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper912/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"We thank the reviewers for their comments.\", \"comment\": \"We thank the reviewers for their insightful comments.\\n\\nOur paper points to an interesting direction that uses learned latent space from model-free approaches as the latent space for dynamics models. In the MiniRTS game, we verified that the latent space that leads to strong performance of model-free methods is both compact and contains crucial information of the game situation, which could be interesting. We agree that the analysis can be done more thoroughly and the final performance (e.g., MCTS with learned dynamics model) is not that satisfactory, compared to model-free approaches. We will continue working on it in the future.\", \"details\": \"What is MiniRTS?\\n\\nMiniRTS was recently proposed as part of the ELF platform [Tian et al (NIPS 2017)]. It is a miniature 2-player real-time strategy game with basic functionality (e.g., resource gathering, troop/facilities building, incomplete information (fog of war), multiple unit types, continuous motion of units, etc). \\n\\nThe symbols \\\"MatchPi\\\", \\u201cMatchA\\\", etc, are now defined properly in the text (paper is updated). \\n\\nWe have fixed the broken captions of Fig. 8.\", \"r2\": \"3. In Fig. 3b, red curves are the value average on won games, while blue curves are on lost games.\\n4. \\\"\\\\hat{h_t} = h_t\\\" would be cheating since the baseline would have access to the most recent observation, which the forward modeling does not. Note that the forward model can only access information in the previous frames, say, 2 frames ago. Performance wise, \\\"\\\\hat{h_t} = h_t\\\" would yield higher performance than the learned forward model.\", \"r3\": \"1. We have updated the paper to explain different training paradigms (MatchPi etc). \\n\\n2. How the forward (or dynamics) model is used in MCTS: \\nThe forward model is used to predict the future states given the current state. 
The predicted future state is thus used as the latent representation of child nodes, and so on. This is useful when the game has imperfect information and the game dynamics is unknown (like what MiniRTS is). In comparison, systems like AlphaGo know the complete information and perfect game dynamics. Other than this difference, the MCTS algorithm is like what AlphaGo does: in each rollout it expands a leaf node to get its value and policy distribution, and uses the value to backpropagate the winrate estimation at each intermediate node. \\n\\n3. In Figure 6, the PrevSeen agent is used.\\n\\n4. We haven't tried scheduled sampling / Dagger (Ross et al.) yet. We acknowledge that this is an interesting direction to explore.\", \"r1\": \"1. Fig. 1 is an illustrative fig about different ways of training forward models. Fig. 2 is the training curves for model-free agents and no MCTS is involved. \\n\\n2. MiniRTS is indeed a deterministic environment. This means that if all the initial states are fixed (including random seeds), then the game simulator will give exactly the same consequence. However, in the presence of Fog of War (each player cannot see the opponent's behavior if his troops are not nearby), the environment from one player's point of view may not be deterministic. We acknowledge that modeling uncertainty could be a good direction to work on.\"}",
"{\"title\": \"Interesting approach to learning a model, but underperforms model-free methods\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Summary: This paper proposes to use the latent representations learned by a model-free RL agent to learn a transition model for use in model-based RL (specifically MCTS). The paper introduces a strong model-free baseline (win rate ~80% in the MiniRTS environment) and shows that the latent space learned by this baseline does include relevant game information. They use the latent state representation to learn a model for planning, which performs slightly better than a random baseline (win rate ~25%).\", \"pros\": [\"Improvement of the model-free method from previous work by incorporating information about previously observed states, demonstrating the importance of memory.\", \"Interesting evaluation of which input features are important for the model-free algorithm, such as base HP ratio and the amount of resources available.\"], \"cons\": [\"The model-based approach is disappointing compared to the model-free approach.\"], \"quality_and_clarity\": [\"The paper in general is well-written and easy to follow and seems technically correct, though I found some of the figures and definitions confusing, specifically:\", \"The terms for different forward models are not defined (e.g. MatchPi, MatchA, etc.). I can infer what they mean based on Figure 1 but it would be helpful to readers to define them explicitly.\", \"In Figure 3b, it is not clear to me what the difference between the red and blue curves is.\", \"In Figure 4, it would be helpful to label which color corresponds to the agent and which to the rule-based AI.\", \"The caption in Figure 8 is malformatted.\", \"In Figure 7, the baseline of \\\\hat{h_t}=h_{t-2} seems strange---I would find it more useful for Figure 7 to compare to the performance if the model were not used (i.e. 
if \\\\hat{h_t}=h_t) to see how much performance suffers as a result of model error.\"], \"originality\": \"I am unfamiliar with the MiniRTS environment, but given that it is only published in this year's NIPS (and that I couldn't find any other papers about it on Google Scholar) it seems that this is the first paper to compare model-free and model-based approaches in this domain. However, the model-free approach does not seem particularly novel in that it is just an extension of that from Tian et al. (2017) plus some additional features. The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling (see below).\", \"significance\": \"I feel the paper overstates the results in saying that the learned forward model is usable in MCTS. The implication in the abstract and introduction (at least as I interpreted it) is that the learned model would outperform a model-free method, but upon reading the rest of the paper I was disappointed to learn that in reality it drastically underperforms. The baseline used in the paper is a random baseline, which seems a bit unfair---a good baseline is usually an algorithm that is an obvious first choice, such as the model-free approach.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting way of re-using pre-trained agents with a lot of room for improvement\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes to use a pretrained model-free RL agent to extract the developed state representation and further re-use it for learning a forward model of the environment and planning.\\nThe idea of re-using a pretrained agent has both pros and cons. On one hand, it can be simpler than learning a model from scratch because that would also require a decent exploration policy to sample representative trajectories from the environment. On the other hand, the usefulness of the learned representation for planning is unclear. A model-free agent can (especially if trained with certain regularization) exclude a lot of information which is potentially useful for planning, but is it necessary for reactively taking actions?\\nA reasonable experiment/baseline thus would be to train a model-free agent with a small reconstruction loss on top of the learned representation.\\nIn addition to that, one could fine-tune the representation during forward model training. \\nIt would be interesting to see if this can improve the results.\\n\\nI personally miss a more technical and detailed exposition of the ideas. For example, it is not described anywhere what loss is used for learning the model. MCTS is not described and a reader has to follow references and infer how exactly it is used in this particular application, which makes the paper not self-contained. \\nAgain, due to lack of equations, I don\\u2019t completely understand the last paragraph of 3.2, I suggest re-writing it (as well as some other parts) in a more explicit way.\\nI also could not find the details on how figure 1 was produced. As I understand, MCTS was not used in this experiment. 
If so, how would one play with just a forward model?\\n\\nIt is a bit disappointing that the authors seem to consider only deterministic models which clearly have very limited applicability. Is mini-RTS a deterministic environment? \\nWould it be possible to include a non-deterministic baseline in the experimental comparison?\\n\\nExperimentally, the results are rather weak compared to pure model-free agents. Somewhat unsatisfying, longer-term prediction results in weaker game play. Doesn\\u2019t this support the argument about the need for stochastic prediction? \\n\\nTo me, the paper in its current form is not written well and does not contain strong enough empirical results, so that I can\\u2019t recommend acceptance.\", \"minor_comments\": [\"MatchA and PredictPi models are not introduced under such names\", \"Figure 1 that introduces them contains typos.\", \"Formatting of figure 8 needs to be fixed. This figure does not seem to be referred to anywhere in the text and the broken caption makes it hard to understand what is happening there.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting direction of research, but analysis is not complete and exposition is unclear.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Summary:\\n\\nThis paper studies learning forward models on latent representations of the environment, and uses these for model-based planning (e.g. via MCTS) in partial-information real-time-strategy games. The testbed used is MiniRTS, a simulation environment for 1v1 RTS.\\n\\nForecasting the future suffers from buildup / propagation of prediction errors, hence the paper uses multi-step errors to stabilize learning.\", \"the_paper\": \"1. describes how to train strong agents that might have learned an informative latent representation of the observed state-space.\\n2. Evaluates how informative the latent states are via state reconstruction.\\n3. trains variants of a forward model f on the hidden states of the various learned agents.\\n4. evaluates different f within MCTS for MiniRTS.\", \"pro\": [\"This is a neat idea and addresses the important question of how to learn accurate models of the environment from data, and how to integrate them with model-free methods.\", \"The experimental setting is very non-trivial and novel.\"], \"con\": [\"The manuscript is unclear in many parts -- this should be greatly improved.\", \"1. The different forward models are not explained well (what is MatchPi, MatchA, PredN?). Which forward model is trained from which model-free agent?\", \"2. How is the forward model / value function used in MCTS? I assume it's similar to what AlphaGo does, but right now it's not clear at all how everything is put together.\", \"The paper devotes a lot of space (sect 4.1) on details of learning and behavior of the model-free agents X. Yet it is unclear how this informs us about the quality of the learned forward models f. 
It would be more informative to focus in the main text on the aspects that inform us about f, and put the training details in an appendix.\", \"As there are many details on how the model-free agents are trained and the system has many moving parts, it is not clear what is important and what is not wrt the eventual winrate comparisons of the MCTS models. Right now, it is not clear to me why MatchA / PredN differ so much in Fig 8.\", \"The conclusion seems quite negative: the model-based methods fare *much* worse than the model-free agent. Is this because of the MCTS approach? Because f is not good? Because the latent h is not informative enough? This requires a much more thorough evaluation.\"], \"overall\": \"I think this is an interesting direction of research, but the current manuscript does not provide a complete and clear analysis.\", \"detailed\": [\"What are the right prediction tasks that ensure the latent space captures enough of the forward model?\", \"What is the error of the raw h-predictions? Only the state-reconstruction error is shown now.\", \"Figure 6 / sect 4.2: which model-free agent is used? Also fig 6 misses captions.\", \"Figure 8: scrambled caption.\", \"Does scheduled sampling / Dagger (Ross et al.) improve the long-term stability in this case?\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"There was certainly some interest in this paper which investigates learning latent models of the environment for model-based planning, particularly articulated by Reviewer3. However, the bulk of reviewer remarks focused on negatives, such as:\\n\\n--The model-based approach is disappointing compared to the model-free approach.\\n--The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling.\\n--I feel the paper overstates the results in saying that the learned forward model is usable in MCTS.\\n-- the paper in it\\u2019s current form is not written well and does not contain strong enough empirical results\"}"
]
} |
B1X4DWWRb | Learning Weighted Representations for Generalization Across Designs | [
"Fredrik D. Johansson",
"Nathan Kallus",
"Uri Shalit",
"David Sontag"
] | Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify its effectiveness in causal effect estimation. | [
"Distributional shift",
"causal effects",
"domain adaptation"
] | Reject | https://openreview.net/pdf?id=B1X4DWWRb | https://openreview.net/forum?id=B1X4DWWRb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"ryOA0TKgG",
"r1HF7_pQM",
"H1HywYblM",
"rkRtL1pHG",
"BkGC4OpQM",
"r1pGQ_aQM",
"ByozI_rlG",
"Skwvf_pmM"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1511804624315,
1515189116890,
1511261917342,
1517250181816,
1515189450355,
1515189013093,
1511519763052,
1515188830747
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper685/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper685/Authors"
],
[
"ICLR.cc/2018/Conference/Paper685/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper685/Authors"
],
[
"ICLR.cc/2018/Conference/Paper685/Authors"
],
[
"ICLR.cc/2018/Conference/Paper685/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper685/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Good theoretical results, more empirical evaluations can improve the paper\", \"rating\": \"7: Good paper, accept\", \"review\": \"Summary:\\nThis paper proposes a new approach to tackle the problem of prediction under\\nthe shift in design, which consists of the shift in policy (conditional\\ndistribution of treatment given features) and the shift in domain (marginal \\ndistribution of features).\\n\\nGiven labeled samples from a source domain and unlabeled samples from a target\\ndomain, this paper proposes to minimize the risk on the target domain by \\njointly learning the shift-invariant representation and the re-weighting \\nfunction for the induced representations. According to Lemma 1 and its finite\\nsample version in Theorem 1, the risk on the target domain can be upper bounded\\nby the combination of 1) the re-weighted empirical risk on the source domain; \\nand 2) the distributional discrepancy between the re-weighted source domain and\\nthe target domain. These theoretical results justify the objective function\\nshown in Equation 8. \\n\\nExperiments on the IHDP dataset demonstrate the advantage of the proposed\\napproach compared to its competing alternatives.\", \"comments\": \"1) This paper is well motivated. For the task of prediction under the shift in\\ndesign, shift-invariant representation learning (Shalit 2017) is biased even in\\nthe infinite data limit. On the other hand, although re-weighting methods are\\nunbiased, they suffer from the drawbacks of high variance and unknown optimal\\nweights. The proposed approach aims to overcome these drawbacks.\\n\\n2) The theoretical results justify the optimization procedures presented in\\nsection 5. Experimental results on the IHDP dataset confirm the advantage of\\nthe proposed approach.\\n\\n3) I have some questions on the details. 
In order to make sure the second \\nequality in Equation 2 holds, p_mu (y|x,t) = p_pi (y|x,t) should hold as well.\\nIs this a standard assumption in the literature?\\n\\n4) Two drawbacks of previous methods motivate this work, including the bias of\\nrepresentation learning and the high variance of re-weighting. According to\\nLemma 1, the proposed method is unbiased for the optimal weights in the large\\ndata limit. However, is there any theoretical guarantee or empirical evidence\\nto show the proposed method does not suffer from the drawback of high variance?\\n\\n5) Experiments on synthetic datasets, where both the shift in policy and the\\nshift in domain are simulated and therefore can be controlled, would better \\ndemonstrate how the performance of the proposed approach (and those baseline \\nmethods) changes as the degree of design shift varies. \\n\\n6) Besides IHDP, did the authors run experiments on other real-world datasets, \\nsuch as Jobs, Twins, etc?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"We respond to the specific concerns of Reviewer 3, both here and in our updated draft.\", \"comment\": \"Q: The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail. Especially the second half of page 5 is at times very hard to understand as it is so dense.\", \"a\": \"The description of IHDP has been improved. We have also added a more targeted synthetic experiment (see above), that confirms our expectation that the usefulness of our method is largest when sample sizes are small. When sample sizes are large, more complex models can be fit and model misspecification can be reduced, thus reducing the usefulness of weighting methods in general. We have added a synthetic experiment in Section 6.1 to demonstrate this further.\", \"q\": \"I appreciate that it is difficult to find good test datasets for evaluating causal estimator. The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?). The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally.\"}",
"{\"title\": \"Reweighting for causal inference in absence of confounding\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes a novel way of causal inference in situations where in causal SEM notation the outcome Y = f(T,X) is a function of a treatment T and covariates X. The goal is to infer the treatment effect E(Y|T=1,X=x) - E(Y|T=0,X=x) for binary treatments at every location x. If the treatment effect can be learned, then forecasts of Y under new policies that assign treatment conditional on X will still \\\"work\\\" and the distribution of X can also change without affecting the accuracy of the predictions.\", \"what_is_proposed_seems_to_be_twofold\": [\"instead of using a standard inverse probability weighting, the authors construct a bound for the prediction performance under new distributions of X and new policies and learn the weights by optimizing this bound. The goal is to avoid issues that arise if the ratio between source and target densities become very large or small and the weights in a standard approach would become very sparse, thus leading to a small effective sample size.\", \"as an additional ingredient the authors also propose \\\"representation learning\\\" by mapping x to some representation Phi(x).\", \"The goal is to learn the mapping Phi (and its inverse) and the weighting function simultaneously by optimizing the derived bound on the prediction performance.\"], \"pros\": [\"The problem is relevant and also appears in similar form in domain adaptation and transfer learning.\", \"The derived bounds and procedures are interesting and nontrivial, even if there is some overlap with earlier work of Shalit et al.\"], \"cons\": [\"I am not sure if ICLR is the optimal venue for this manuscript but will leave this decision to others.\", \"The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail. 
Especially the second half of page 5 is at times very hard to understand as it is so dense.\", \"The implications of the assumptions in Theorem 1 are not easy to understand, especially relating to the quantities B_\\\\Phi, C^\\\\mathcal{F}_{n,\\\\delta} and D^{\\\\Phi,\\\\mathcal{H}}_\\\\delta. Why would we expect these quantities to be small or bounded? How does that compare to the assumptions needed for standard inverse probability weighting?\", \"I appreciate that it is difficult to find good test datasets for evaluating causal estimator. The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?). The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The submission provides an interesting way to tackle the so-called distributional shift problem in machine learning. One familiar example is unsupervised domain adaptation. The main contribution of this work is deriving a bound on the generalization error/risk for a target domain as a combo of re-weighted empirical risk on the source domain and some discrepancy between the re-weighted source domain and the target domain. The authors then use this to formulate an objective function.\\n\\nThe reviewers generally liked the paper for its theoretical results, but found the empirical evaluation somewhat lacking, as do I. Especially the unsupervised domain adaptation results are very toy-ish in nature (synthetic data), whereas the literature in this field, cited by the authors, does significantly larger scale experiments. I am unsure as to how much value I can place in the IHDP results since I am not familiar with the benchmark (and hence my lower confidence in the recommendation).\\n\\nFinally, I am not very convinced that this is the appropriate venue for this work, despite containing some interesting results.\"}",
"{\"title\": \"We thank Reviewer 1 for their comments.\", \"comment\": \"We thank Reviewer 1 for their comments.\"}",
"{\"title\": \"We respond to the specific concerns of Reviewer 2, both here and in our updated draft.\", \"comment\": \"Q: In order to make sure the second equality in Equation 2 holds, p_mu (y|x,t) = p_pi (y|x,t) should hold as well. Is this a standard assumption in the literature?\", \"a\": \"The Twins experiment, as used by Louizos et al. 2017, was primarily created to evaluate methods for dealing with hidden confounding. This is not the focus of our method as we assume ignorability. We found that in the setting of weak hidden confounding (small proxy noise), the imbalance between \\u201ctreatment groups\\u201d was relatively small, and additional balancing neither hurt nor helped. We did not run experiments on Jobs.\", \"q\": \"Besides IHDP, did the authors run experiments on other real-world datasets, such as Jobs, Twins, etc?\"}",
"{\"title\": \"Deep architecture for shift invariance in predictive modeling\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper proposes a deep learning architecture for joint learning of feature representation, a target-task mapping function, and a sample re-weighting function. Specifically, the method tries to discover feature representations, which are invariant across different domains, by minimizing the re-weighted empirical risk and distributional shift between designs.\\nOverall, the paper is well written and organized with a good description of the related work, research background, and theoretic proofs. \\n\\nThe main contribution can be the idea of learning a sample re-weighting function, which is highly important in domain shift. However, as stated in the paper, since the causal effect of an intervention T on Y conditioned on X is one of the main interests, it is expected to add the related analysis in the experiment section.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"We have taken the comments of the reviewers into account and updated our paper.\", \"comment\": \"We thank all of the reviewers for their helpful comments and suggestions. Addressing these issues has increased the length of the manuscript, but we are confident that this is justified by the improved quality of the paper. We have responded to the concerns of the reviewers individually below.\"}"
]
} |
Hkbd5xZRb | Spherical CNNs | [
"Taco S. Cohen",
"Mario Geiger",
"Jonas Köhler",
"Max Welling"
] | Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images. However, a number of problems of recent interest have created a demand for models that can analyze spherical images. Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling. A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective.
In this paper we introduce the building blocks for constructing spherical CNNs. We propose a definition for the spherical cross-correlation that is both expressive and rotation-equivariant. The spherical correlation satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm. We demonstrate the computational efficiency, numerical accuracy, and effectiveness of spherical CNNs applied to 3D model recognition and atomization energy regression. | [
"deep learning",
"equivariance",
"convolution",
"group convolution",
"3D",
"vision",
"omnidirectional",
"shape recognition",
"molecular energy regression"
] | Accept (Oral) | https://openreview.net/pdf?id=Hkbd5xZRb | https://openreview.net/forum?id=Hkbd5xZRb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S1rz4yvGf",
"Sk9sGkTSM",
"B1gQIy9gM",
"Sy5OgaB2G",
"r1VD9T_SM",
"r1rikDLVG",
"HkZy7TdXM",
"Sy9FmTuQM",
"r1CVE6O7f",
"ryi-Q6_Xf",
"rySumKdhz",
"Skrq4BAHM",
"Bkv4qd3bG",
"SJ3LYkFez"
],
"note_type": [
"comment",
"decision",
"official_review",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1513710604793,
1517249185682,
1511810584515,
1524121714467,
1516980828321,
1515773853524,
1514881753000,
1514881921784,
1514882101606,
1514881794645,
1524302700718,
1517339788592,
1513028145126,
1511745876309
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper615/AnonReviewer3"
],
[
"(anonymous)"
],
[
"~Tao_Sun1"
],
[
"ICLR.cc/2018/Conference/Paper615/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper615/Authors"
],
[
"ICLR.cc/2018/Conference/Paper615/Authors"
],
[
"ICLR.cc/2018/Conference/Paper615/Authors"
],
[
"ICLR.cc/2018/Conference/Paper615/Authors"
],
[
"ICLR.cc/2018/Conference/Paper615/Authors"
],
[
"ICLR.cc/2018/Conference/Paper615/Authors"
],
[
"ICLR.cc/2018/Conference/Paper615/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper615/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Spherical correlation, not convolution\", \"comment\": \"In page 5: \\\"This says that the SO(3)-FT of the S2 convolution (as we have defined it) of two spherical signals can be computed by taking the outer product of the S2-FTs of the signals. This is shown in figure 2. We were unable to find a reference for the latter version of the S2 Fourier theorem\\\"\", \"the_result_is_presented_at_least_in\": [\"Makadia et al. (2007), eq (21),\", \"Kostelec and Rockmore (2008), eq (6.6),\", \"Gutman et al. (2008), eq (9),\", \"Rafaely (2015), eq (1.88).\", \"All mentioned references define \\\"spherical correlation\\\" as what you define as \\\"spherical convolution\\\". I believe it makes more sense to call it correlation, since it can be seen as a measure of similarity between two functions (given two functions on S2 and transformations on SO(3), the correlation function measures the similarity as a function of the transformation).\"], \"references\": \"Makadia, A., Geyer, C., & Daniilidis, K., Correspondence-free structure from motion, International Journal of Computer Vision, 75(3), 311\\u2013327 (2007).\\n Kostelec, P. J., & Rockmore, D. N., Ffts on the rotation group, Journal of Fourier analysis and applications, 14(2), 145\\u2013179 (2008).\\n Gutman, B., Wang, Y., Chan, T., Thompson, P. M., & Toga, A. W., Shape registration with spherical cross correlation, 2nd MICCAI workshop on mathematical foundations of computational anatomy (pp. 56\\u201367) (2008).\\n Rafaely B. Fundamentals of spherical array processing. Berlin: Springer; (2015).\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This work introduces a trainable signal representation for spherical signals (functions defined in the sphere) which are rotationally equivariant by design, by extending CNNs to the corresponding group SO(3). The method is implemented efficiently using fast Fourier transforms on the sphere and illustrated with compelling tasks such as 3d shape recognition and molecular energy prediction.\\n\\nReviewers agreed this is a solid, well-written paper, which demonstrates the usefulness of group invariance/equivariance beyond the standard Euclidean translation group in real-world scenarios. It will be a great addition to the conference.\", \"decision\": \"Accept (Oral)\"}",
"{\"title\": \"Non-Abelian Harmonic Analysis to Get Spherical Invariance in CNNs\", \"rating\": \"7: Good paper, accept\", \"review\": \"The focus of the paper is how to extend convolutional neural networks to have built-in spherical invariance. Such a requirement naturally emerges when working with omnidirectional vision (autonomous cars, drones, ...).\\n\\nTo get invariance on the sphere (S^2), the idea is to consider the group of rotations on S^2 [SO(3)] and spherical convolution [Eq. (4)]. To be able to compute this convolution efficiently, a generalized Fourier theorem is useful. In order to achieve this goal, the authors adapt tools from non-Abelian [SO(3)] harmonic analysis. The validity of the idea is illustrated on 3D shape recognition and atomization energy prediction. \\n\\nThe paper is nicely organized and clearly written; it fits to the focus of ICLR and can be applicable on many other domains as well.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Equivariance under non-linearity\", \"comment\": \"The paper nicely and theoretically propose an equivariant spherical cross-correlation for the rotation group. But it is not clear how the equivariance maintains in multiple layers with ReLU and BN inserted in between as the authors did in the experiments?\\n\\nSec 5.1 also shows that adding ReLU increase the difference by a large magnitude.\"}",
"{\"title\": \"Relationship with \\\"Convolutional Networks for Spherical Signals\\\"\", \"comment\": \"How to describe the relationships between these two papers?\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you for the feedback; I maintain my opinion.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you for these references, they are indeed very relevant and interesting*. We will add them and change the text.\\n\\nWe agree that the cross-correlation is the right term, and have fixed it in the paper. We have added further discussion of this issue in reply to reviewer 2, who raised a similar concern.\\n\\n* We do not have access to Rafaely's book through our university library, so we cannot comment on it.\"}",
"{\"title\": \"Spherical CNNs\", \"comment\": \"Thank you for the detailed and balanced review.\", \"re_related_work\": \"we have expanded the related work section a little bit in order to contrast with previous work. (Unfortunately there is no space for a very long discussion)\", \"re_convolution_vs_correlation\": \"thank you for pointing this out. Our reasoning had been that:\\n1) Everybody in deep learning uses the word \\\"convolution\\\" to mean \\\"cross-correlation\\\".\\n2) In the non-commutative case, there are several different but essentially equivalent convolution-like integrals that one can define, with no really good reason to prefer one over the other.\\n\\nBut we did not explain this properly. We think a reasonable approach is to call something group convolution if, for the translation group it specializes to the standard convolution, and similarly for group correlations. This seems to be what several others before us have done as well, so we will follow this convention. Specifically, we will define the (group) cross-correlation as:\\n psi \\\\star f(g) = int psi(g^{-1} h) f(h) dh.\\n\\nRE The S^2CNN name: we have now defined this term in the introduction, but not changed it, because the paper is called \\\"Spherical CNN\\\" and S^2-CNN is just a shorthand for that name.\", \"re_timings\": \"we have added timings, memory usage numbers, and number of parameters to the paper. It is not always possible to compare the number of parameters to related work because those numbers are not always available. However, we can reasonably assume that the competing methods did their own cross-validation to arrive at an optimal model complexity for their architecture. (Also, in deep networks, the absolute number of parameters can often vary widely between architectures that have a similar generalization performance, making this a rather poor measure of model complexity.)\", \"re_references_and_other_minor_points\": \"we have fixed all of these issues. 
Thanks for pointing them out.\"}",
"{\"title\": \"Spherical CNNs\", \"comment\": \"Thank you for the kind words, we're glad you like our work!\\n\\nOur models for SHREC17 and QM7 both use only about 1.4M parameters. On a machine with 1 Titan X GPU, training the SHREC17 model takes about 50 hours, while the QM7 model takes only about 3 hours. Memory usage is 8GB for SHREC (batchsize 16) and 7GB for QM7 (batchsize 20).\\n\\nWe have studied the SHREC17 paper [1], but unfortunately it does not state the number of parameters or training time for the various methods. It does seem likely that each of the competition participants did their own cross validation, and arrived at an appropriate model complexity for their method. It is thus unlikely that the strong performance of our model relative to others can be explained by its size (especially since 1.4M parameters is not considered very large anymore).\\n\\nFor QM7, it looks like Montavon et al. used about 760k parameters (we have deduced this from the description of their network architecture). Since the model is a simple multi-layer perceptron applied to a hand-designed feature representation, we expect that it is substantially faster to train than our model (though indeed comparing a spherical CNN to an engineered features+MLP approach is a bit of an apples-to-oranges comparison). Raj et al. use a non-parametric method, so there is no parameter count or training time to compare to.\\n\\n[1] M. Savva et al. SHREC\\u201917 Track Large-Scale 3D Shape Retrieval from ShapeNet Core55, Eurographics Workshop on 3D Object Retreival (2017).\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you very much for taking the time to review our work.\"}",
"{\"title\": \"Good point\", \"comment\": \"This is a good point. The network is equivariant if all the layers are equivariant, so that is what we must show. It was shown in the paper \\\"Group Equivariant Networks\\\" (section 6.2) that arbitrary pointwise nonlinearities are equivariant to the action of the group. This is true for the so-called regular representations, which act by permuting the neurons, whereas other (steerable / induced) representations may require special equivariant nonlinearities.\", \"the_regular_representation_is_what_we_denote_by_l_r_in_this_paper\": \"L_R f = f R^{-1},\\nwhere juxtaposition means composition. Applying a pointwise nonlinearity s to a feature map f can be written mathematically as:\\nC_s f = s f\\n\\nSince L_R acts by composing on the right and C_s acts by composing from the left, we have:\\nL_R C_s f = L_R (s f) = (s f) R^{-1} = s (f R^{-1}) = C_s L_R f.\\nThat is, the regular representation L_R and the nonlinear operator C_s commute.\\n\\nThis is the continuous theory. In practice, the numerical implementation results in a tiny loss of equivariance per linear layer (Fig. 3, top right). When ReLUs are used between each layer, we see in Fig 3. bottom right that the error is substantially larger, but does not increase meaningfully with depth. The reason for this is as follows: in order to measure the equivariance error Delta, we have to rotate the feature maps. Rotation of feature maps is exact (up to floating point error) only for band-limited signals, but the ReLU will introduce many high-frequency signals that cannot be exactly rotated with sub-pixel precision. So as soon as we use one layer of ReLU's, the error jumps. However, this appears to be an artefact of the measurement procedure (the numerical rotation step, to be precise) and does not seem to get worse with depth. 
This is mentioned in the last section before section 5.2: \\\"This indicates that the error is not due to the network layers, but due to the feature map rotation, which is exact only for bandlimited functions\\\".\\n\\nBatch normalization is exactly equivariant, as long as one uses one mean and std per feature map on SO(3). This is because both the mean and std are \\\"scalars\\\" in the geometrical sense that they are invariant under rotation. So we can multiply by them without affecting the equivariance.\\n\\nBeyond the equivariance error (Delta) experiments, the generalization results for spherical MNIST provide further support for the numerical accuracy of our implementation. If the numerical problems were severe, we would not expect to see such good generalization from a non-rotated training set to a rotated test set.\\n\\nIn my ICLR talk, I will show a figure showing the feature maps for a rotated and non-rotated input. This allows you to easily see that the network is properly equivariant.\"}",
"{\"title\": \"Workshop paper\", \"comment\": \"The paper [1] is a preliminary 4-page paper reporting on the same project, published in the ICML workshop on principled approaches to deep learning. The existence of this workshop paper was mentioned in our original submission under footnote 0. Please note that the ICLR dual submission policy explicitly allows publishing articles that have previously appeared in workshops (https://iclr.cc/Conferences/2018/CallForPapers).\\n\\n[1] T.S. Cohen, M. Geiger, J. Koehler, M. Welling, Convolutional Networks for Spherical Signals. In Principled Approaches to Deep Learning Workshop ICML 2017.\"}",
"{\"title\": \"Added Late Reviewer\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"First off, this paper was a delight to read. The authors develop an (actually) novel scheme for representing spherical data from the ground up, and test it on three wildly different empirical tasks: Spherical MNIST, 3D-object recognition, and atomization energies from molecular geometries. They achieve near state-of-the-art performance against other special-purpose networks that aren't nearly as general as their new framework. The paper was also exceptionally clear and well written.\\n\\nThe only con (which is more a suggestion than anything)--it would be nice if the authors compared the training time/# of parameters of their model versus the closest competitors for the latter two empirical examples. This can sometimes be an apples-to-oranges comparison, but it's nice to fully contextualize the comparative advantage of this new scheme over others. That is, does it perform as well and train just as fast? Does it need fewer parameters? etc.\\n\\nI strongly endorse acceptance.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Spherical CNNs\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"Summary:\\n\\nThe paper proposes a framework for constructing spherical convolutional networks (ConvNets) based on a novel synthesis of several existing concepts. The goal is to detect patterns in spherical signals irrespective of how they are rotated on the sphere. The key is to make the convolutional architecture rotation equivariant.\", \"pros\": [\"novel/original proposal justified both theoretically and empirically\", \"well written, easy to follow\", \"limited evaluation on a classification and regression task is suggestive of the proposed approach's potential\", \"efficient implementation\"], \"cons\": [\"related work, in particular the first paragraph, should compare and contrast with the closest extant work rather than merely list them\", \"evaluation is limited; granted this is the nature of the target domain\"], \"presentation\": \"While the paper is generally written well, the paper appears to conflate the definition of the convolutional and correlation operators? This point should be clarified in a revised manuscript. \\n\\nIn Section 5 (Experiments), there are several references to S^2CNN. This naming of the proposed approach should be made clear earlier in the manuscript. As an aside, this appears a little confusing since convolution is performed first on S^2 and then SO(3).\", \"evaluation\": \"What are the timings of the forward/backward pass and space considerations for the Spherical ConvNets presented in the evaluation section? Please provide specific numbers for the various tasks presented.\\n\\nHow many layers (parameters) are used in the baselines in Table 2? If indeed there are much less parameters used in the proposed approach, this would strengthen the argument for the approach. On the other hand, was there an attempt to add additional layers to the proposed approach for the shape recognition experiment in Sec. 
5.3 to improve performance?\", \"minor_points\": [\"some references are missing their source, e.g., Maslen 1998 and Kostolec, Rockmore, 2007, and Ravanbakhsh, et al. 2016.\", \"some sources for the references are presented inconsistency, e.g., Cohen and Welling, 2017 and Dieleman, et al. 2017\", \"some references include the first name of the authors, others use the initial\", \"in references to et al. or not, appears inconsistent\", \"Eqns 4, 5, 6, and 8 require punctuation\", \"Section 4 line 2, period missing before \\\"Since the FFT\\\"\", \"\\\"coulomb matrix\\\" --> \\\"Coulomb matrix\\\"\", \"Figure 5, caption: \\\"The red dot correcpond to\\\" --> \\\"The red dot corresponds to\\\"\"], \"final_remarks\": \"Based on the novelty of the approach, and the sufficient evaluation, I recommend the paper be accepted.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
SyqShMZRb | Syntax-Directed Variational Autoencoder for Structured Data | [
"Hanjun Dai",
"Yingtao Tian",
"Bo Dai",
"Steven Skiena",
"Le Song"
] | Deep generative models have been enjoying success in modeling continuous data. However it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data still remains largely an open problem. Inspired by the theory of compiler where syntax and semantics check is done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Comparing to the state-of-the-art methods, our approach enforces constraints on the output space so that the output will be not only syntactically valid, but also semantically reasonable. We evaluate the proposed model with applications in programming language and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness in incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches. | [
"generative model for structured data",
"syntax-directed generation",
"molecule and program optimization",
"variational autoencoder"
] | Accept (Poster) | https://openreview.net/pdf?id=SyqShMZRb | https://openreview.net/forum?id=SyqShMZRb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BJVJ4_6XG",
"S1EYCMIbz",
"SJy4ZMU4G",
"Sk58kFogf",
"HJMcxIJWf",
"rJ7ZTaYxf",
"H1KTirvZf",
"BkQglHagz",
"HJ-PkBpez",
"ByHD_eqxf",
"ryFoAfUWz",
"SkUs6e5lG",
"HyPg7k6Bz",
"B1396zI-f",
"SkWtkrTlG"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment"
],
"note_created": [
1515189212483,
1512611452225,
1515753767083,
1511915345603,
1512165513658,
1511804154635,
1512688576634,
1512030186804,
1512030040866,
1511815261391,
1512611488885,
1511816606483,
1517249263381,
1512611219594,
1512030073534
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
],
[
"ICLR.cc/2018/Conference/Paper958/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Paper revision 2\", \"comment\": \"In addition to our revision 1, in which we extensively revised all experiments involving ZINC dataset, we have made an updated revision 2 which mostly addresses the writing and presentation issues. Besides the refinement of wording and typos, this version includes the following modification:\\n\\n1) We added Figure 2, where we explicitly show how the modern compiler works through the example of two-stage check (i.e., CFG parsing and Attribute Grammar check). Section 2 is now augmented with more detailed explanations of background knowledge.\\n\\n2) We added Figure 3, which shows the proposed syntax-directed decoder step by step through an example. Through the examples we put more effort in explaining key concepts in our method, such as \\u2018inherited constraints\\u2019 and \\u2018lazy linking\\u2019. \\n\\n3) Experiment section is revised with more details included. \\n\\n4) We added a conclusion section as suggested by the reviewer.\"}",
"{\"title\": \"Reply to \\\"Review\\\"\", \"comment\": \"Thanks for your effort in providing this detailed and useful review!\", \"we_present_our_clarification_in_the_following\": \">> Use of data and comparison with baselines:\\n\\nWe would first note that the anonymous accusation was set to \\u201c17 Nov 2017 (modified: 28 Nov 2017), readers: ICLR 2018 Conference Reviewers and Higher\\u201d. That\\u2019s why it was not visible to us until Nov 28, i.e., the original review release date. This gives us no chance to clarify anything before the review deadline. We have replied to it actively since Nov 28. \\n**Note the thread is invisible to us again since Dec 2. **\\n\\n1) We have experimented both kekulization and non-kekulization for baselines, and have reported the best they can get in all experiments. For example, in Table 2 the GVAE baseline results are improved compared to what was reported in GVAE paper.\\n\\n2) The anonymous commenter is using different kekulization (RDKIT, rather than our used Marvin), different baseline implementation (custom implementation, rather than the public one in GVAE\\u2019s paper) and possibly different evaluation code (since there is no corresponding evaluation online). For a reproducible comparision, we released our implementation, data, pretrained model and evaluation code at: https://github.com/anonymous-author-80ee48b2f87/cvae-baseline\\n\\n3) To make further clarification, we ran our method on the vanilla (non-kekulised) data. Our performance is actually boosted (76.2% vs 72.8% reported in the paper).\\nThe details of results from these experiments above can be seen in our public reply titled \\u201cWe released baseline CVAE code, data and evaluation code for clarification\\u201d and \\u201cOur reconstruction performance without kekulization on Zinc dataset\\u201d. \\n\\nIn either setting still, our method outperforms all baselines on reconstruction. We are sorry that this may have led to some confusions. 
To avoid further possible misunderstandings, we have extensively rerun all experiments involving ZINC dataset. Though differences are observed, the conclusion in each experiment remains the same. For example, our reconstruction performance is boosted (76.2% vs 72.8%). Since we didn\\u2019t address aromaticity semantics by the paper submission deadline, the valid prior fraction drops to 43.5%, but it is still much higher than baselines (7.2% GVAE, 0.7% CVAE). Please find the updated paper for more details. \\n\\n>> prior knowledge and limitations \\n\\nWe are targeting on domains where strict syntax and semantics are required. For example, the syntax and semantics are needed to compile a program, or to parse a molecule structure. So such prior knowledge comes naturally with the application. Our contribution is to incorporate such existing syntax and semantics in those compilers, into an on-the-fly generation process of structures. \\n\\nIn general, when numerous amount of data is available, a general seq2seq model would be enough. However, obtaining the useful drug molecules is expensive, and thus data is quite limited. Using knowledges like syntax (e.g., in GVAE paper), or semantics (like in our paper) will greatly reduce the amount of data needed to obtain a good model.\\n\\nIn our paper, we only addressed 2-3 semantic constraints, where the improvement is significant. Similarly, in \\u201cHarnessing Deep Neural Networks with Logic Rules (Hu et.al, ACL 16)\\u201d, incorporating several intuitive rules can greatly improve the performance of sentiment analysis, NER, etc. So we believe that, incorporating the knowledge with powerful deep learning achieves a good trade-off between human efforts and model performance. \\n\\n>> Typos and other writing issue:\\n\\nWe thank you very much for your careful reading and pointing out the typos and writing issues in our manuscript! 
We have incorporated your suggested changes in the current revision, and are keeping conducting further detailed proofreading to fix as much as possible the writing issues in the future revisions.\"}",
"{\"title\": \"Presentation improved but still lacking\", \"comment\": \"The presentation of the paper has definately improved, but I find the language used in the paper still below the quality needed for publication. There are still way too many grammatical and syntactical errors.\"}",
"{\"title\": \"Fighting for our honestness\", \"comment\": \"We will not accept the accusation of \\u201cdeliberately misleading and dishonest\\u201d. Please read the paper carefully before writing down the public comments of groundless moral judgement, which will lead to misunderstanding to public.\\n\\nFirst of all, we need to mention that, for the two baselines (CVAE and GVAE), **we also tried the kekule form of the data** along with **many other possible settings**, as we mentioned in the paper in page 15, right above Appendix C. We report whatever best result these baselines can achieve in our paper (page 8) from these settings. That\\u2019s why for GVAE, the result reported by us in Table 2 is **even better** than what reported in GVAE paper! Before going further, we would like to emphasize that these are our efforts to be honest with baselines.\\n\\nSecondly, we are not reporting \\u201cvalid reconstruction\\u201d as you mentioned. We are reporting two completely separate metrics namely (1) **exact** reconstruction and (2) valid **prior**, faithfully following the protocol in GVAE\\u2019s paper (in its Appendix C). From the number you are mentioned, we highly doubt that you are referring to accuracy in single character-level (not reconstructing entire SMILES), since we also get ~75% character accuracy for CVAE with kekule data. \\n\\nThirdly, we don\\u2019t think the kekule form makes the problem simpler. (1) Take the benzene ring for example. Non-kekule form is \\u201cc1ccccc1\\u201d but the kekule form is \\u201cC1=CC=CC=C1\\u201d. The generator should output the alternating single and double bonds in the exact same way. This actually makes the problem harder for some baselines! 
(2) Also the SMILES space you mentioned \\u201c26^85 with kekulisation vs 36^120 without\\u201d is puzzling without further clarifying how it is calculated, as the magic numbers differ from statistics of the ZINC data, where in kekule data, the maximum length is similar (114 vs 120), while average length is longer but still close (48 vs 44). We used \\u201cMarvin (https://chemaxon.com/products)\\u201d to get the kekule form. (3) We don\\u2019t think kekulisation is \\u201clossy\\u201d as you mentioned, at least for Zinc dataset. What we did is to get the canonical form (using RDKIT) of both kekule and normal SMILES, and we verified that for all SMILES the canonical form matched. We would argue that at least for our experiment that covers our method and previous baselines, the two forms have the same theoretical representation power in this dataset.\\n\\nGiven the above reasons, it is reasonable that the result we got for CVAE is much worse with kekule form under our metrics. Again, we emphasize the metric used in the code in CVAE is totally different. If you also use the CVAE code from https://github.com/mkusner/grammarVAE, then you will see it outputs something like [\\u201closs\\u201d, \\u201cacc\\u201d]. We run the same code for kekule data, and it outputs \\u201cacc\\u201d roughly 75%, which is similar to what you mentioned. **However**, this is the single character-level accuracy, not the accuracy of recovering entire SMILES. Accuracy for successfully reconstructing the entire SMILES as a whole is almost zero as we tried (e.g., 0.75^100).\", \"so_we_suggest_to_please_make_several_things_clear_first_for_a_constructive_discussion\": \"1) Is the same Kekule form used? Our data has 114 max length but you reported 85.\\n2) Is the same CVAE code from grammarVAE used? \\n3) Are we reporting the same metric? Since our single character-level accuracy matches your number which is roughly 75%, this may indicate a confusion in choice of accuracy. 
**Note that reconstructing the entire sequence as a whole exactly is significantly harder for sequence model**.\\n\\nWe reiterate that what you mentioned under the clarification above is already addressed in our paper. If you think your experiments are consistent with above (code, data, metric, etc), could you please kindly share your code and kekule data through anonymous link, so that we can calibrate our possible disagreement and make clear the possible issue? \\n\\nFinally we expect the anonymity can post the comments to public for timely discussion, otherwise, we cannot receive the update email from the OpenReview system in time, which may delay our reply and hurt the constructiveness of the discussion.\"}",
"{\"title\": \"Our reconstruction performance without kekulization on Zinc dataset\", \"comment\": \"To further clarify the reconstruction accuracy, we here report performance (our model and baselines) without using the kekulization transformation on Zinc dataset, in supplement to numbers using kekulization already reported in our manuscript. We include baseline results from GVAE paper for direct comparison.\\n\\nSD-VAE (ours): 76.2%; GVAE: 53.7%; CVAE: 44.6%\\n\\nCompare to what reported for SD-VAE with kekulization in current revision (72.8%), our performance is slightly boosted without kekulization. This shows that kekulization itself doesn\\u2019t have positive impact for reconstruction in our method. Our conclusion that the reconstruction accuracy of our SD-VAE is much better than all baselines still holds. \\n\\nNevertheless, to avoid possible misunderstanding, we\\u2019ll refine the experiment section by including more experiments, once the open review system allows.\"}",
"{\"title\": \"Interesting idea but poor presentation\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper presents an approach for improving variational autoencoders for structured data that provide an output that is both syntactically valid and semantically reasonable. The idea presented seems to have merit , however, I found the presentation lacking. Many sentences are poorly written making the paper hard to read, especially when not familiar with the presented methods. The experimental section could be organized better. I didn't like that two types of experiment are now presented in parallel. Finally, the paper stops abruptly without any final discussion and/or conclusion.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Paper revision 1\", \"comment\": \"To avoid further possible misunderstandings we have update our paper, in which we have extensively revised all experiments involving ZINC dataset. This addresses concerns on use of ZINC data and comparison with previous methods.\\n\\nThe conclusion in each experiment **remains the same** though some differences are observed. Examples of differences are as following: Our reconstruction performance is boosted (76.2% vs 72.8%); And since we didn\\u2019t address semantics specific to aromaticity by the paper submission deadline, the valid prior fraction drops to 43.5%, but it is still much higher than baselines (7.2% GVAE, 0.7% CVAE).\\n\\nPlease find the updated paper for more details.\"}",
"{\"title\": \"We released baseline CVAE code, data and evaluation code for clarification\", \"comment\": \"To address the anonymous commenter\\u2019s concerns on the CVAE baseline, the initial release of CVAE\\u2019s code (training code based on GVAE\\u2019s authors\\u2019code), with two versions of kekule data and vanilla data and the reconstruction evaluation script, are available at\", \"https\": \"//github.com/anonymous-author-80ee48b2f87/cvae-baseline\\n\\nwhere we also uploaded our trained CVAE, together with pretrained model obtained from GVAE\\u2019s authors.\", \"here_we_briefly_summarize_the_current_results\": \"(1) - CVAE, vanilla setting, pretrained model : 44.854%\\n(2) - CVAE, vanilla setting, our retraining: 43.218%\\n(3) - CVAE, Marvin Suite kekulised **tried for all methods in our paper**: 11.6%\\n(4) - CVAE, rdkit kekulised (provided by anonymous commenter, never been tried in our paper): 38.17% \\n\\nWe reported the best form of SMILES for CVAE in our paper. If you believe there\\u2019s any issue, please let us know asap and we are happy to investigate.\\n\\nFinally, we thank all the anonymous comments about the paper. If you have any concerns about the paper, please make the comments public while you specifying readers. Making such comments to reviewers only will not allow us to address the possible misunderstandings, or improve the paper timely when we make possible mistakes.\"}",
"{\"title\": \"Discrepancy in data and evaluation code on where your concern is -- part 1\", \"comment\": \"We need to reiterate that for CVAE and GVAE, **we also tried the kekule form of the data** along with **many other possible settings**. In Table 1, GVAE got similar results as before, but CVAE got worse results. So we report whatever best result for the baselines, using the code from GVAE implementation. This is supported by the improvement of GVAE in Table 2.\\n\\nWe never try to hide any details. Instead we elaborate all details and only due to the space limit, we put some of the stuffs in appendix (but still referred in main paper). That\\u2019s why you know details about datasets, settings, etc. So we suggest to figure out the discrepancy in scientific facts first, before making moral judgement. \\n\\nPlease note that we mostly take issue with your argument is on the sequence reconstruct accuracy you get for baseline CVAE (75%) with your data, on top of which as a premise you argue that the performance we report may not surpass the baseline. However, its validity looks questionable to us, and we investigated into the details about your evidence by running the experiments on every possible settings including the one you use, and our finding **does not** support the soundness of your premise. We provided the codes and models that support our claim, and would like to calibrate this discrepancy first to make further discussion meaningful.\\n\\n== Kekulisation (data):\\n\\nWe first introduce a basic common knowledge in Biochemistry that ``kekulisation form\\u2019\\u2019 is *NOT* unique. For the same underlying molecule, there may be many variants of kekulisation form due to different rules in rewriting SMILES, and it is pretty normal that two toolkits shows different kekulisations. \\n\\nWe appreciate your providing the scripts. The script looks good, but it generates different data than ours. 
Your script uses kekulisation from **rdkit**, which leads to a data form different from our kekulisation using **Marvin Suite**. This leads to the following discrepancy:\\n```\", \"non_kekulised\": \"max 120 min 9 mean 44.31076 std 9.32734\", \"our_kekulised\": \"max 114 min 10 mean 48.73391 std 10.03919\\nyour kekulised (as your script does):\\n max 85 min 9 mean 44.85288 std 9.61040\\n```\\nThis indicates that, even just from these numbers, the non-kekulised and kekulised (both ours and yours) forms of ZINC are quite similar, in contrast to your argument that the latter is much simpler. As we mentioned, alternating single and double bonds makes the kekule form much harder for some baselines.\", \"we_would_like_to_emphasize_two_points\": \"First, both the non-kekulised & kekulised forms are representations corresponding to the same underlying molecules. Second, not only for our method but also for a large family of sequence models in applications such as translation, summarization, etc., the distribution of sequence lengths matters more than the maximum length in analysis of the data space, so we do not agree with your estimation of difficulty with a loose bound inferred solely from the maximum length.\"}",
"{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Let me first note that I am not very familiar with the literature on program generation,\\nmolecule design or compiler theory, which this paper draws heavily from, so my review is an educated guess. \\n\\nThis paper proposes to include additional constraints into a VAE which generates discrete sequences, \\nnamely constraints enforcing both semantic and syntactic validity. \\nThis is an extension to the Grammar VAE of Kusner et. al, which includes syntactic constraints but not semantic ones.\\nThese semantic constraints are formalized in the form of an attribute grammar, which is provided in addition to the context-free grammar.\\nThe authors evaluate their methods on two tasks, program generation and molecule generation. \\n\\nTheir method makes use of additional prior knowledge of semantics, which seems task-specific and limits the generality of their model. \\nThey report that their method outperforms the Character VAE (CVAE) and Grammar VAE (GVAE) of Kusner et. al. \\nHowever, it isn't clear whether the comparison is appropriate: the authors report in the appendix that they use the kekulised version of the Zinc dataset of Kusner et. al, whereas Kusner et. al do not make any mention of this. \\nThe baselines they compare against for CVAE and GVAE in Table 1 are taken directly from Kusner et. al though. 
\\nCan the authors clarify whether the different methods they compare in Table 1 are all run on the same dataset format?\", \"typos\": [\"Page 5: \\\"while in sampling procedure\\\" -> \\\"while in the sampling procedure\\\"\", \"Page 6: \\\"a deep convolution neural networks\\\" -> \\\"a deep convolutional neural network\\\"\", \"Page 6: \\\"KL-divergence that proposed in\\\" -> \\\"KL-divergence that was proposed in\\\"\", \"Page 6: \\\"since in training time\\\" -> \\\"since at training time\\\"\", \"Page 6: \\\"can effectively computed\\\" -> \\\"can effectively be computed\\\"\", \"Page 7: \\\"reset for training\\\" -> \\\"rest for training\\\"\"], \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Reply to \\\"Interesting idea but poor presentation\\\"\", \"comment\": \"We thank you for providing reviews.\\n\\nWe\\u2019ll refine the paper to include more introduction about background, and more detailed explanations about our method. \\n\\nWe\\u2019ll include final discussion/conclusion section.\"}",
"{\"title\": \"Strong paper presents state-of-the-art results\", \"rating\": \"7: Good paper, accept\", \"review\": \"NOTE:\\n\\nWould the authors kindly respond to the comment below regarding Kekulisation of the Zinc dataset? Fair comparison of the data is a serious concern. I have listed this review as a good for publication due to the novelty of ideas presented, but the accusation of misrepresentation below is a serious one and I would like to know the author's response.\\n\\n*Overview*\\n\\nThis paper presents a method of generating both syntactically and semantically valid data from a variational autoencoder model using ideas inspired by compiler semantic checking. Instead of verifying the semantic correctness offline of a particular discrete structure, the authors propose \\u201cstochastic lazy attributes\\u201d, which amounts to loading semantic constraints into a CFG and using a tailored latent-space decoder algorithm that guarantees both syntactic semantic valid. Using Bayesian Optimization, search over this space can yield decodings with targeted properties.\\n\\nMany of the ideas presented are novel. The results presented are state-of-the art. As noted in the paper, the generation of syntactically and semantically valid data is still an open problem. This paper presents an interesting and valuable solution, and as such constitutes a large advance in this nascent area of machine learning.\\n\\n*Remarks on methodology*\\n\\nBy initializing a decoding by \\u201cguessing\\u201d a value, the decoder will focus on high-probability starting regions of the space of possible structures. It is not clear to me immediately how this will affect the output distribution. Since this process on average begins at high-probability region and makes further decoding decisions from that starting point, the output distribution may be biased since it is the output of cuts through high-probability regions of the possible outputs space. 
Does this sacrifice exploration for exploitation in some quantifiable way? Some exploration of this issue or commentary would be valuable. \\n\\n*Nitpicks*\\n\\nI found the notion of stochastic predetermination somewhat opaque, and section 3 in general introduces much terminology, like lazy linking, that was new to me coming from a machine learning background. In my opinion, this section could benefit from a little more expansion and conceptual definition.\\n\\nThe first 3 sections of the paper are very clearly written, but the remainder has many typos and grammatical errors (often word omission). The draft could use a few more passes before publication.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper presents a more complex version of the grammar-VAE, which can be used to generate structured discrete objects for which a grammar is known, by adding a second 'attribute grammar', inspired by Knuth.\\n\\nOverall, the idea is a bit incremental, but the space is wide open and I think that structured encoder/decoders is an important direction. The experiments seem to have been done carefully (with some help from the reviewers) and the results are convincing.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Reply to \\\"Strong paper presents state-of-the-art results\\\"\", \"comment\": \"Thanks for your effort in providing this detailed and constructive review!\", \"we_present_our_clarification_in_the_following\": \">>NOTE:\\n\\nWe would first note that the anonymous accusation was set to \\u201c17 Nov 2017 (modified: 28 Nov 2017), readers: ICLR 2018 Conference Reviewers and Higher\\u201d. That\\u2019s why it was not visible to us until Nov 28, i.e., the original review release date. This gives us no chance to clarify anything before the review deadline. We have replied to it actively since Nov 28. \\n**Note the thread is invisible to us again since Dec 2. **\", \"to_summarize_our_clarification\": \">> Use of data\\n\\n1) We have experimented both kekulization and non-kekulization for baselines, and have reported the best they can get in all experiments. For example, in Table 2 the GVAE baseline results are improved compared to what was reported in GVAE paper.\\n\\n2) The anonymous commenter is using different kekulization (RDKIT, rather than our used Marvin), different baseline implementation (custom implementation, rather than the public one in GVAE\\u2019s paper) and possibly different evaluation code (since there is no corresponding evaluation online). For a reproducible comparision, we released our implementation, data, pretrained model and evaluation code at: https://github.com/anonymous-author-80ee48b2f87/cvae-baseline\\n\\n3) To make further clarification, we ran our method on the vanilla (non-kekulised) data. Our performance is actually boosted (76.2% vs 72.8% reported in the paper).\\nThe details of results from these experiments above can be seen in our public reply titled \\u201cWe released baseline CVAE code, data and evaluation code for clarification\\u201d and \\u201cOur reconstruction performance without kekulization on Zinc dataset\\u201d. \\n\\nIn either setting still, our method outperforms all baselines on reconstruction. 
We are sorry that this may have led to some confusion. To avoid further possible misunderstandings, we have extensively rerun all experiments involving the ZINC dataset. Though differences are observed, the conclusion in each experiment remains the same. For example, our reconstruction performance is boosted (76.2% vs 72.8%). Since we didn\\u2019t address aromaticity semantics by the paper submission deadline, the valid prior fraction drops to 43.5%, but it is still much higher than the baselines (7.2% GVAE, 0.7% CVAE). Please see the updated paper for more details. \\n\\n>>sacrifice of exploration\\n\\nCVAE, GVAE and our SD-VAE all factorize the joint probability of the entire program / SMILES text in some way. CVAE factorizes at the char level, GVAE over the Context Free Grammar (CFG) tree, while ours factorizes both the CFG and non-context-free semantics. Since every method factorizes the entire space, each structure in this space should have the possibility (regardless of its magnitude) of being sampled. \\n\\nBias is not always a bad thing. Some bias will help the model quickly concentrate on the correct mode. Definitely, different methods will bias the distribution in different ways. For example, CVAE is biased towards the beginning of the sequence. GVAE is biased by several initial non-terminals. \\n\\nOur experiments on the diversity of generated molecules (Table 3) demonstrate that both GVAE and our method can generate quite diverse molecules. So we think neither method has a noticeable mode collapse problem on this dataset.\\n\\n>> writings:\\n\\nThanks for the suggestions. We are putting more effort into explaining our algorithm and improving the writing in revisions. 
We have revised our experiments section to clarify the most important issue, and will keep improving the writing.\\n\\nTo briefly answer the \\u201clazy linking\\u201d question: we don\\u2019t sample the actual value of the attribute at the first encounter; instead, later, when the actual content is generated, we use a bottom-up calculation to fill in the value. For example, when generating the ringbond attribute, we only sample its existence. The ringbond information (bond index and bond type) is filled in later. \\n\\nAs a side note, this idea comes from \\u201clazy evaluation\\u201d in compiler theory, where a value is not calculated until it is needed.\"}",
"{\"title\": \"Discrepancy in data and evaluation code on where your concern is -- part 2\", \"comment\": \"(Due to the space limit, see comment below for part 1)\\n\\n== Code:\\n\\nAs mentioned before, for training CVAE baseline we use the code from GVAE (https://github.com/mkusner/grammarVAE) instead of making an implementation in other framework. **However, the reconstruction evaluation code is not available in this repo. Thus we made our evaluation code following GVAE paper\\u2019s protocol , and made it public.**\", \"what_we_achieved_with_cvae_and_our_evaluation_code_for_exact_sequence_reconstruction_is\": \"```\\n(1) - CVAE, vanilla data, pretrained model : 44.854% (As we showed in the paper)\\n(2) - CVAE, vanilla data, our retraining: 43.218%\\n(3) - CVAE, our kekulised data: 11.6%\\n(4) - CVAE, your kekulised data: 38.17% \\n```\\nWith the following scripts (If you want to evaluate on your own, please put them on the GVAE\\u2019s repo and run the first one after modifying paths to model weight / data files):\", \"https\": \"//gist.github.com/anonymous-author-80ee48b2f87/aa0ff838eabc372d537c57199ebc31f4\\n\\nNote that (1) is consistent with what GVAE paper reports, and (2) is consistent with (1), which gives us confidence on the correctness of both our evaluation script and our running of code from GVAE\\u2019s repo.\\n\\nSee NOTE 1 below for everything needed to reproduce our evaluation.\", \"focus\": \"The observation is that CVAE performs worse on kekulised data ( (1, 2) vs. (3, 4) in evaluation above), as we have mentioned before, **in contrast to** receiving huge performance gain to become a strong baseline ( (1, 2) vs. 75% as you have claimed). Again, in our paper, we ran the baselines on both settings and report the best results!\\n\\nSo new we observe discrepancy in the data and the metric evaluation between our findings and your argument. 
Since the data and metric are keystones of your argument, we would like to verify it by asking how, **in detail**, you got your number (75%). Could you please point to the evaluation code, the CVAE implementation for training and the data you used, for a fair comparison?\", \"note_1\": \"the initial release of the baseline\\u2019s code (with two versions of kekule data and vanilla data and the reconstruction accuracy evaluation script; training code based on the GVAE authors\\u2019 code) is available at https://github.com/anonymous-author-80ee48b2f87/cvae-baseline , where we also uploaded our trained CVAE, together with the pretrained model obtained from the GVAE authors.\", \"note_2\": \"Before getting numbers for the experiment above, we fixed a problem in our script that collects statistics for the previous reply, so now the reported sequence-level accuracy for CVAE on kekulised data is ~11.6%, and the character-level accuracy is >97%. Since this is a bug in collecting numbers rather than in the model, our conclusion, which says kekulisation makes it harder for CVAE instead of making it perform better as you argued, remains **the same**.\\n\\n== Final words:\\n\\nFinally, thank you for the follow-up. But more details about what you\\u2019ve tried are required, which is necessary for the scientific discussion among us. We look forward to resolving the mismatch of numbers in the metric, which is fundamental to further discussion.\\n\\nNevertheless, we will also report the results of our method in the vanilla setting in the revision of our manuscript, once allowed by OpenReview.\"}",
]
} |
rJoXrxZAZ | HybridNet: A Hybrid Neural Architecture to Speed-up Autoregressive Models | [
"Yanqi Zhou",
"Wei Ping",
"Sercan Arik",
"Kainan Peng",
"Greg Diamos"
] | This paper introduces HybridNet, a hybrid neural network to speed up autoregressive
models for raw audio waveform generation. As an example, we propose
a hybrid model that combines an autoregressive network named WaveNet and a
conventional LSTM model to address speech synthesis. Instead of generating
one sample per time-step, the proposed HybridNet generates multiple samples per
time-step by exploiting the long-term memory utilization property of LSTMs. In
the evaluation, when applied to text-to-speech, HybridNet yields state-of-the-art performance.
HybridNet achieves a 3.83 subjective 5-scale mean opinion score on
US English, largely outperforming the same-size WaveNet in terms of naturalness
and providing a 2x speed-up at inference. | [
"neural architecture",
"inference time reduction",
"hybrid model"
] | Reject | https://openreview.net/pdf?id=rJoXrxZAZ | https://openreview.net/forum?id=rJoXrxZAZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"ryOLIn5lf",
"Byp0z9TQz",
"rJ43n56XM",
"r16uKJ5gG",
"Sk3XOcp7f",
"ByDRVIuZG",
"SyYnU16rz"
],
"note_type": [
"official_review",
"comment",
"comment",
"official_review",
"comment",
"official_review",
"decision"
],
"note_created": [
1511863887998,
1515197140678,
1515199659723,
1511811445029,
1515198500317,
1512756431359,
1517250224814
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper580/AnonReviewer1"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper580/AnonReviewer2"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper580/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Right name. Low innovation. Samples please!\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper presents HybridNet, a neural speech (and other audio) synthesis system (vocoder) that combines the popular and effective WaveNet model with an LSTM with the goal of offering a model with faster inference-time audio generation.\", \"summary\": \"The proposed model, HybridNet is a fairly straightforward variation of WaveNet and thus the paper offers a relatively low novelty. There is also a lack of detail regarding the human judgement experiments that make the significance of the results difficult to interpret.\\n\\nLow novelty of approach / impact assessment:\\nThe proposed model is based closely on WaveNet, an existing state-of-the-art vocoder model. The proposal here is to extend WaveNet to include an LSTM that will generate samples between WaveNet samples -- thus allowing WaveNet to sample at a lower sample frequency. WaveNet is known for being relatively slow at test-time generation time, thus allowing it to run at a lower sample frequency should decrease generation time. The introduction of a local LSTM is perhaps not a sufficiently significant innovation. \\n\\nAnother issue that lowers the assessment of the likely impact of this paper is that there are already a number of alternative mechanism to deal with the sampling speed of WaveNet. In particular, the cited method of Ramachandran et al (2017) uses caching and other tricks to achieve a speed up of 21 times over WaveNet (compared to the 2-4 times speed up of the proposed method). The authors suggest that these are orthogonal strategies that can be combined, but the combination is not attempted in this paper. There are also other methods such as sampleRNN (Mehri et al. 2017) that are faster than WaveNet at inference time. 
The authors do not compare to this model.\", \"inappropriate_evaluation\": \"While the model is motivated by the need to reduce the generation time of WaveNet sampling, the evaluation is largely based on the quality of the sampling rather than the speed of sampling. The results are roughly calibrated to demonstrate that HybridNet produces higher quality samples when (roughly) adjusted for sampling time. The more appropriate basis of comparison is to compare sample time as a function of sample quality.\", \"experiments\": \"Few details are provided regarding the human judgment experiments with Mechanical Turkers. As a result it is difficult to assess the appropriateness of the evaluation and therefore the significance of the findings. I would also be much more comfortable with this quality assessment if I was able to hear the samples for myself and compare the quality of the WaveNet samples with HybridNet samples. I would also like to compare the WaveNet samples generated by the authors' implementation with the WaveNet samples posted by van den Oord et al (2017). \\n\\n\\nMinor comments / questions:\\n\\nHow, specifically, is validation error defined in the experiments? \\n\\nThere are a few language glitches distributed throughout the paper.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The technique is orthogonal to existing techniques to speedup WaveNet\", \"comment\": \"We thank the reviewer's feedback but we do feel the hybrid method has its own merits. It is orthogonal to existing techniques including caching. Even with caching, the critical path caused by dependencies between samples still exists (You cannot generate the next sample earlier). Caching does not fundamentally address this dependency problem. Also caching is subject to hardware. For a different hardware platform (e.g. mobile), there might not be sufficient cache or memory for this purpose.\\n\\nMathematically, the hybrid method can be built on top of caching, and still achieve 2x-4x speedup. For instance, caching reduces per sample generation time by k (~20). The total generation time for a full utterance of n samples would be n*(1/k)*T, where T is the original per sample generation time. With the hybrid method, it can be further reduced to n*(1/k)*T*(1/4). \\n\\nWith respect to the evaluation, we do have a figure of comparison of inference time (Figure 6). We feel it is a fair comparison when we fix the accuracy while comparing the inference time. And yes, we agree that the key point of the paper is not to improve accuracy, thus the figures should better convey the key point (reference time). \\n\\nIn terms of the definition of validation error, we partition the training data into 5% validation data and 95% training data and run validation every 250 iterations. It is not the final test error. Audio quality is measured with MOS as described in the result section. \\n\\nIn terms of audio quality, yes we feel confident to upload samples. The Mechanical Turkers consistently gives better MOS scores for this hybrid model, compared to a WaveNet. Me, personally, listened the samples many times and can confirm that the scores reflect the quality. \\nWe would love to compare with samples posted by van den Oord et al (2017).\"}",
"{\"title\": \"Accuracy is not the main goal of this work. This work is orthogonal to other techniques.\", \"comment\": \"We really appreciate the reviewer's comments. We also really like Reviewer1's feedback that accuracy is not the main purpose of this paper. We are not trying to outperform SOTA in terms of accuracy but only provide a way to speedup an autoregressive model like WaveNet. We understand that the WaveNet team also have made great progress improving their MOS scores using various techniques (please find their recent paper :) ), but even with those changes, our technique can still be applied to a model that is fundamentally a \\\"WaveNet\\\" and still achieve 2-4x speedup.\\n\\nLike we explained to Reviewer1, mathematically, the hybrid method can be built on top of other techniques including caching, and still achieve 2x-4x speedup. For instance, caching reduces per sample generation time by k (~20). The total generation time for a full utterance of n samples would be n*(1/k)*T, where T is the original per sample generation time. With the hybrid method, it can be further reduced to n*(1/k)*T*(1/4). \\n\\nThe speedup can be beyond 2x. The inference time can be drastically reduced (~2x each time step added) by increasing the number of steps produced by the LSTM. The audio quality will not degrade noticeably until 6-7 steps (~32-64x speed up) compared to base line. We would love to add more evaluation in future version.\"}",
"{\"title\": \"Good results but lacking details about design decisions\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"TL;DR of paper: for sequential prediction, in order to scale up the model size without increasing inference time, use a model that predicts multiple timesteps at once. In this case, use an LSTM on top of a Wavenet for audio synthesis, where the LSTM predicts N steps for every Wavenet forward pass. The main result is being able to train bigger models, by increasing Wavenet depth, without increasing inference time.\\n\\nThe idea is simple and intuitive. I'm interested in seeing how well this approach can generalize to other sequential prediction domains. I suspect that it's easier in the waveform case because neighboring samples are highly correlated. I am surprised by how much an improvement \\n\\nHowever, there are a number of important design decisions that are glossed over in the paper. Here are a few that I am wondering about:\\n* How well do other multi-step decoders do? For example, another natural choice is using transposed convolutions to upsample multiple timesteps. Fully connected layers? How does changing the number of LSTM layers affect performance?\\n* Why does the Wavenet output a single timestep? Why not just have the multi-step decoder output all the timesteps?\\n* How much of a boost does the separate training give over joint training? If you used the idea suggested in the previous point, you wouldn't need this separate training scheme.\\n* How does performance vary over changing the number of steps the multi-step decoder outputs?\\n\\nThe paper also reads like it was hastily written, so please go back and fix the rough edges.\\n\\nRight now, the paper feels too coupled to the existing Deep Voice 2 system. As a research paper, it is lacking important ablations. 
I'll be happy to increase my score if more experiments and results are provided.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"We thank the reviewer's great feedback.\", \"comment\": \"We thank the reviewer's great feedback. In terms of your question:\\n* How well do other multi-step decoders do?\\nYes, we have the same question at the early stage of the project. We tried a variety of approaches to generate multiple samples, including a transposed convolution, a vanilla RNN, a high-way, etc. None of them get comparable performance to LSTMs. \\n\\n* Why does the Wavenet output a single timestep? Why not just have the multi-step decoder output all the timesteps? \\nWe tried having multi-step decoder to output all timesteps, but unintuitively it is worse than having one sample generated by WaveNet. As pointed out in the result section, LSTM can effectively reduce variance in the output distribution, but this also could reduce the sharpness and naturalness of the audio. \\n\\n* How much of a boost does the separate training give over joint training? If you used the idea suggested in the previous point, you wouldn't need this separate training scheme.\\nThe audio quality is substantially better with ground-truth training. Thanks for the suggestion, we will try this idea out. \\n\\n* How does performance vary over changing the number of steps the multi-step decoder outputs?\\nThe inference time can be drastically reduced (~2x each time step added) by increasing the number of steps. The audio quality will not degrade noticeably until 6-7 steps (~32-64x speed up) compared to base line.\"}",
"{\"title\": \"Right choice of problem. Introduces significant independence assumptions.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"By generating multiple samples at once with the LSTM, the model is introducing some independence assumptions between samples that are from neighbouring windows and are not conditionally independent given the context produced by Wavenet. This reduces significantly the generality of the proposed technique.\", \"pros\": [\"Attempting to solve the important problem of speeding up autoregressive generation.\", \"Clarity of the write-up is OK, although it could use some polishing in some parts.\", \"The work is in the right direction, but the paucity of results and lack of thoroughness reduces somewhat the work's overall significance.\"], \"cons\": [\"The proposed technique is not particularly novel and it is not clear whether the technique can be used to get speed-ups beyond 2x - something that is important for real-world deployment of Wavenet.\", \"The amount of innovation is on the low side, as it involves mostly just fairly minor architectural changes.\", \"The absolute results are not that great (MOS ~3.8 is not close to the SOTA of 4.4 - 4.5)\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper presents a hybrid architecture which combines WaveNet and LSTM for speeding-up raw audio generation. The novelty of the method is limited, as it\\u2019s a simple combination of existing techniques. The practical impact of the approach is rather questionable since the generated audio has significantly lower MOS scores than the state-of-the-art WaveNet model.\"}"
]
} |
B12Js_yRb | Learning to Count Objects in Natural Images for Visual Question Answering | [
"Yan Zhang",
"Jonathon Hare",
"Adam Prügel-Bennett"
] | Visual Question Answering (VQA) models have struggled with counting objects in natural images so far. We identify a fundamental problem due to soft attention in these models as a cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component gives a substantial improvement in counting over a strong baseline by 6.6%. | [
"visual question answering",
"vqa",
"counting"
] | Accept (Poster) | https://openreview.net/pdf?id=B12Js_yRb | https://openreview.net/forum?id=B12Js_yRb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"r14Imk6HM",
"rkkDeiB-M",
"SJ11jzclf",
"S1dXkASrz",
"Sy-67iHZz",
"r13as6sXf",
"H1GhmwqgG",
"HkRBLTo7G",
"S15E4oHbM",
"Hkwum9YgM",
"B17QZjB-M",
"ByYnNiB-f",
"SkQCv5nvM",
"B1bfGw-ff",
"Sk0dZsS-M",
"HJt5Gsrbf",
"B1-TZiS-M"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1517249355896,
1512579158717,
1511824087405,
1516785439655,
1512580024840,
1515080647849,
1511842730282,
1515079238114,
1512580146119,
1511789422691,
1512579355132,
1512580273127,
1519327178766,
1513349640864,
1512579446061,
1512579729432,
1512579517063
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
],
[
"ICLR.cc/2018/Conference/Paper136/Authors"
]
],
"structured_content_str": [
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Initially this paper received mixed reviews. After reading the author response, R1 and and R3 recommend acceptance.\\n\\nR2, who recommended rejecting the paper, did not participate in discussions, did not respond to author explanations, did not respond to AC emails, and did not submit a final recommendation. This AC does not agree with the concerns raised by R2 (e.g. I don't find this model to be unprincipled).\\n\\nThe concerns raised by R1 and R3 were important (especially e.g. comparisons to NMS) and the authors have done a good job adding the required experiments and providing explanations.\\n\\nPlease update the manuscript incorporating all feedback received here, including comparisons reported to the concurrent ICLR submission on counting.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Rebuttal Part 1/4: Hand-crafted nature\", \"comment\": \"Thank you for your review. We are happy to see that you think the paper is well written and that the deduplication steps in the module are interesting. Given the aptness of your comments, it seems like you understand the paper better than you are giving yourself credit for.\", \"in_summary_to_your_main_complaints\": \"We argue that a hand-crafted approach is reasonable for the current state of VQA, NMS baselines will be supplied, and comparisons to (Chattopadhyay et al., 2017) are not useful.\\n\\n- Proposed model is pretty hand-crafted, would recommend the authors to use something more general, like graph convolutional neural networks.\\n\\nIn summary, we think that with current VQA models, counting needs to be hand-crafted to some extent, hand-crafting counting has various useful properties, and that we tried a non-handcrafted approach similar to graph convolutional networks in the past without success.\\n\\nWe think that with the current state of VQA models on real images, it is unreasonable to expect a general model to learn to count without hand-designing some aspect of it in order for the model to learn. Pointed out many times such as in (Jabri et al., 2016), [1], and seen from the balanced pair accuracies in Table 2, much of current VQA performance is due to fitting better to spurious dataset biases with little \\\"actual\\\" learning of how to answer the questions. The necessity for a modularized approach is also recognized in a recently published work in NIPS [2], where they combine a variety of different types of models, each suited to a different pre-defined task (e.g. one face detection network, one scene classification network, etc.). The aspect that makes the counting task within VQA special is that there is some relatively easy-to-isolate logic to it, which is the focus of our module through soft deduplication and aggregation.
Even in humans, counting is a highly structured process when going beyond the range where humans can subitize. While it would certainly be better if a neural network could discover the logic required for counting by itself, we think that a hand-engineered approach is perfectly valid for solving this problem given the current state of research and performance on VQA.\\n\\nThe hand-crafted nature gives the component several useful properties. Due to the structure of the component, it performs more-or-less correct counting by default, even when none of the parameters in it have been learnt yet. This allows it to generalize more easily to a test set with fewer training samples and under much noise, as is the case for VQA. Since all steps within the component have a clear motivation, the parameters that it learns are interpretable and can be used for explaining why it predicted a certain count. Changing the input has a predictable effect on the output due to the component structure enforcing monotonicity. This is particularly useful in comparison to a general deep neural network, which suffers from adversarial inputs causing unexpected predictions. The simple nature of the module with relatively few parameters keeps the computational costs low and allows it to be integrated into non-VQA tasks fairly easily. Note that the modeling assumptions that we make are not specific to VQA, but are assumptions about what a sensible counting method should do in ideal cases.\\n\\nIn our experience, integrating other types of models into VQA models is difficult without either inhibiting general performance or simply achieving essentially the same level of performance. As far as we are aware, there has not been any work which successfully uses a graph-based approach to VQA on real images.
We did try to integrate relation networks (Santoro et al., 2017) into a VQA model, without much success in terms of performance on counting nor in any other category (though this obviously does not mean that a successful integration is not possible). Relation networks are a natural choice for VQA v2, perhaps more so than the neural networks for arbitrary graphs you suggest: they have been shown to work well for VQA on the CLEVR dataset and treat objects as nodes in a complete graph, similar to what our module uses as input. With our module, we at least show that a graph-based representation can find some use in VQA on real images in the first place and might motivate further research into graph-based approaches. In general, the sorts of graph-based approaches that you mention have only been successfully applied on the abstract VQA dataset so far [3], where a precise scene graph of synthetic images is used as input, not real images. On that dataset, good improvements in counting have been achieved by a general graph-based network. We imagine that this is due to the much less noisy nature of scene graphs on synthetic data compared to using pixel-based representations or object proposals on real images, making counting a much easier task in the abstract VQA case.\"}",
"{\"title\": \"Improve object counting with a lot of heuristics\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper tackles the object counting problem in visual question answering. It is based on the two-stage method that object proposals are generated from the first stage with attention. It proposes many heuristics to use the object feature and attention weights to find the correct count. In general, it treats all object proposals as nodes on the graph. With various agreement measures, it removes or merges edges and count the final nodes. The method is evaluated on one synthetic toy dataset and one VQA v2 benchmark dataset. The experimental results on counting are promising. Although counting is important in VQA, the method is solving a very specific problem which cannot be generalized to other representation learning problems. Additionally, this method is built on a series of heuristics without sound theoretically justification, and these heuristics cannot be easily adapted to other machine learning applications. I thus believe the overall contribution is not sufficient for ICLR.\", \"pros\": \"1. Well written paper with clear presentation of the method. \\n2. Useful for object counting problem.\\n3. Experimental performance is convincing.\", \"cons\": \"1. The application range of the method is very limited. \\n2. The technique is built on a lot of heuristics without theoretical consideration.\", \"other_comments_and_questions\": \"1. The determinantal point processes [1] should be able to help with the correct counting the objects with proper construction of the similarity kernel. It may also lead to simpler solutions. For example, it can be used for deduplication using A (eq 1) as the similarity matrix. \\n\\n2. Can the author provide analysis on scalability the proposed method? When the number of objects is very large, the graph could be huge. What are the memory requirements and computational complexity of the proposed method?
\\nIn the end of section 3, it mentioned that \\\"without normalization,\\\" the method will not scale to an arbitrary number of objects. I think that it will only be a problem for extremely large numbers. I wonder whether the proposed method scales. \\n\\n3. Could the authors provide more insights on why the structured attention (etc) did not significantly improve the result? Theoritically, it solves the soft attention problems. \\n\\n4. The definition of output confidence (section 4.3.1) needs more motivation and theoretical justification. \\n\\n[1] Kulesza, Alex, and Ben Taskar. \\\"Determinantal point processes for machine learning.\\\" Foundations and Trends\\u00ae in Machine Learning 5.2\\u20133 (2012): 123-286.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to update\", \"comment\": \"Thank you for your update; we are glad to hear that you found the rebuttal convincing.\", \"with_regards_to_your_comment_about_worries_that_the_proposed_model_may_be_hard_to_reproduce_due_to_its_complexity\": \"As we mention in a footnote in the paper, we will open-source all of our code soon. The complexity that you perceive can be boiled down to a sequence of simple tensor operations, so we think that it should be reasonably straightforward to implement in any modern Deep Learning framework. Here is the snippet of our model implementation in PyTorch (we do not rely on the dynamic computation graph feature of PyTorch), with the important bit being the forward function of the Counter class: https://gist.github.com/anonymous/669509edc32eb28cc508221de47baa43 . We will clean this and the rest of the code up to be easier to follow before release.\"}",
"{\"title\": \"Rebuttal Part 1/3: Applications beyond VQA\", \"comment\": \"Thank you for your review. We are glad to see that you think the paper is well written and that the evidence for the usefulness to object counting is convincing. Given this, we are slightly surprised by the low rating as we think that the pros you list are very concrete compared to the cons.\\nIn summary, we disagree with the main claims that the application range is very limited and that it is built on heuristics without theoretical considerations.\\n\\n\\n- Method is solving a very specific problem which cannot be generalized to other representation learning problems / application range of the method is very limited\\n\\nWe disagree that the application of this method is limited to VQA. In fact, the toy task that we perform experiments on is clearly not a VQA task and shows that the component is applicable beyond VQA. Counting tasks have many practical real-world applications on their own.\\n\\nAn immediate research area wherein it can be used aside from VQA is image caption generation, where an attention LSTM can be used to attend over objects (Anderson et al, 2017). In this task, counting information should be useful for generating good captions and the attention mechanism has the same limitations as we discuss in section 3, which our counting component can be used for. Any task where counting of specific objects (without necessarily conditioning on a question input) is required but no ground truth bounding boxes are available -- which limits the use of some conventional methods for training counting models -- can use a pre-trained region proposal network. The score of a binary classification on each proposal whether it shows the object to count can be used as attention weight in the component to eliminate duplicates without the need for post-processing with non-maximum suppression and score thresholding, both of which require hyper-parameter tuning and disallow end-to-end training.
This system can be trained end-to-end with the counting module, allowing a sensible approach for handling duplicate and overlapping bounding boxes.\\n\\nMore generally, tasks where a set of potential objects (wherein each object is given a score of how relevant it is) with possible duplicates needs to be counted, and duplicates can be identified through a pairwise distance metric (in the image case these are the 1 - IoU distances of bounding boxes), the component can be used for counting the objects with duplicates eliminated in a fully differentiable manner. Most importantly, appropriate relevancy scores and distances do not need to be specified explicitly as they can be learnt from data.\\n\\nAs we have shown in the paper, the component is robust enough to count without per-object ground-truth as supervision; only the aggregate count is needed. This makes it applicable to a wide variety of counting tasks beyond VQA.\\n\\n\\n- Heuristics cannot be easily adapted to other machine learning applications\\n\\nWhile the properties that we use are specifically targeted towards counting, we think that there is value to be gained for the wider research community from our general approach to the problem. Our insight about the necessity of using the attention map itself, not just the feature vector coming out of the attention, may lead to recognition of problems that soft attention can introduce in other domains such as NLP, which in turn can lead to new solutions. The approach of a learned interpolation between correct behaviours, enforced by the network structure through monotonicity, may also be useful. For the monotonicity property, networks with more nonlinearities such as Deep Lattice Networks (You et al., 2017) can be used as well as we mention in Appendix A. Our way of treating the counting problem as a graphical problem and decomposing it into intra- and inter-object relations may find use in problems where there is some notion of object under uncertainty.
The approach of creating a fully differentiable model despite many operations naively being non-differentiable, in particular when we want to remove certain edges but instead use equivalence under a sum to our benefit, contributes to the growing literature (e.g. (Jaderberg et al., 2015)) of making operations required for certain tasks differentiable and thus trainable in a deep neural network setting.\"}",
"{\"title\": \"Summary of rebuttal to main concerns\", \"comment\": \"Due to the length of our detailed point-by-point rebuttals, we would like to give a quick summary of our responses to the main concerns that the reviewers had.\\n\\n# Reviewer 3 (convinced by our rebuttal and increased the rating)\\n\\n- Too handcrafted\\nThe current state-of-art in VQA on real images is nowhere near good enough for learning to count to be feasible using general models without hand-crafting some aspects of them for counting specifically. Hand-crafting gives the component many of its useful properties and guarantees, such as allowing us to understand why it made specific predictions. We are the first to use a graph-based approach with any significant benefit on VQA for real images, which in the future could certainly be generalized, but we had no success with generalizations so far.\\n\\n- NMS baseline missing\\nWe have updated the paper with NMS results, strengthening our main results.\\n\\n- Comparison with (Chattopadhyay et al., 2017) missing\\nTheir experimental setup majorly disadvantages VQA models, so their results are not comparable. We have updated the paper to make this clearer.\\n\\n\\n# Reviewer2 (No response to rebuttal yet)\\n\\n- Application range is very limited\\nWhile the reviewer claims that our component is entirely limited to VQA, this is not true since even the toy dataset that we use has not much to do with VQA -- it is a general counting task. Counting tasks have much practical use and we updated the paper to explicitly state how the component is applicable outside of VQA.\\n\\n- Built on a lot of heuristics without theoretical consideration\\nOur component does not use an established mathematical framework (such as Determinantal Point Processes as the reviewer suggests in a comment) but we are justifying every step with what properties are needed to count correctly.
That is, we are mathematically correctly modelling a sensible counting mechanism and disagree with the claim that these are just a bunch of heuristics. In both theory and practice, the counting mechanism gives perfect answers when ideal case assumptions apply and sensible answers when they do not apply. The lack of traditional theory also seems to be a complaint about large parts of Deep Learning and recent Computer Vision research in general.\\n\\nEspecially given the strong positives that this reviewer lists, we do not think that a final rating of 4 is fair towards our work.\\n\\n\\n# Reviewer1 (No response to rebuttal yet)\\n\\n- Fails to show improvement over a couple of important baselines\\nWe think that the reviewer must have misunderstood something; we do not know what this could possibly be referring to. If the reviewer is referring to baselines in the paper, all results show a clear improvement of our component over all existing methods. If the reviewer is referring to baselines not in the paper, then we do not see how this can be the case: we only left out baselines that are strictly weaker in all aspects than the ones we show in the paper. You can verify that our results on the number category (51.39%) outperforms everything, including ensemble models of state-of-the-art techniques with orthogonal improvements, on the the official leaderboard: https://evalai.cloudcv.org/web/challenges/challenge-page/1/leaderboard (our results are hidden for anonymity)\\n\\n- Qualitative examples of A, D, and C are needed\\nWe have updated the paper to include some qualitative examples.\"}",
"{\"title\": \"Model is too hand-crafted and key experiments missing\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"\", \"summary\": [\"This paper proposes a hand-designed network architecture on a graph of object proposals to perform soft non-maximum suppression to get object count.\"], \"contribution\": [\"This paper proposes a new object counting module which operates on a graph of object proposals.\"], \"clarity\": [\"The paper is well written and clarity is good. Figure 2 & 3 helps the readers understand the core algorithm.\"], \"pros\": [\"De-duplication modules of inter and intra object edges are interesting.\", \"The proposed method improves the baseline by 5% on counting questions.\"], \"cons\": [\"The proposed model is pretty hand-crafted. I would recommend the authors to use something more general, like graph convolutional neural networks (Kipf & Welling, 2017) or graph gated neural networks (Li et al., 2016).\", \"One major bottleneck of the model is that the proposals are not jointly finetuned. So if the proposals are missing a single object, this cannot really be counted. In short, if the proposals don’t have 100% recall, then the model is then trained with a biased loss function which asks it to count all the objects even if some are already missing from the proposals. The paper didn’t study what is the recall of the proposals and how sensitive the threshold is.\", \"The paper doesn’t study a simple baseline that just does NMS on the proposal domain.\", \"The paper doesn’t compare experiment numbers with (Chattopadhyay et al., 2017).\", \"The proposed algorithm doesn’t handle symmetry breaking when two edges are equally confident (in 4.2.2 it basically scales down both edges).
This is similar to a density map approach and the problem is that the model doesn\\u2019t develop a notion of instance.\", \"Compared to (Zhou et al., 2017), the proposed model does not improve much on the counting questions.\", \"Since the authors have mentioned in the related work, it would also be more convincing if they show experimental results on CL\"], \"conclusion\": [\"I feel that the motivation is good, but the proposed model is too hand-crafted. Also, key experiments are missing: 1) NMS baseline 2) Comparison with VQA counting work (Chattopadhyay et al., 2017). Therefore I recommend reject.\"], \"references\": [\"Kipf, T.N., Welling, M., Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017.\", \"Li, Y., Tarlow, D., Brockschmidt, M., Zemel, R. Gated Graph Sequence Neural Networks. ICLR 2016.\"], \"update\": \"Thank you for the rebuttal. The paper is revised and I saw NMS baseline is added. I understood the reason not to compare with certain related work. The rebuttal is convincing and I decided to increase my rating, because adding the proposed counting module achieve 5% increase in counting accuracy. However, I am a little worried that the proposed model may be hard to reproduce due to its complexity and therefore choose to give a 6.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Related submission: Interpretable Counting for Visual Question Answering\", \"comment\": \"We would like to point out a related paper that was submitted to ICLR 2018: Interpretable Counting for Visual Question Answering, https://openreview.net/forum?id=S1J2ZyZ0Z\", \"they_also_tackle_the_problem_of_counting_in_vqa_with_a_sequential_counting_method_with_several_differences_in_the_approach\": [\"They use a more-or-less generic network for sequential counting and design a specific loss, while we design a specific network component and use a generic loss.\", \"We use the standard full VQA dataset, whereas they create their own dataset by taking only counting questions from the VQA and Visual Genome datasets. This makes our results comparable to prior work on VQA, showing a clear benefit over existing results in counting. In total we use more questions to train with (since we are using all VQA questions, not just counting ones), but fewer counting questions (since we are not using counting questions from Visual Genome), so the impact of this difference on counting performance is unclear.\", \"It is unclear to us whether their method is usable within usual VQA architectures due to their loss not applying when the question is not a counting question. Our model is just a regular VQA model with our component attached to it without relying on a specific loss function, so the usual losses that are used in VQA can be used directly. This allows our component to be easily used in other VQA models unlike their method.\", \"Their method has the advantage of interpretability of the outputs. To understand a predicted count one can look at the *set of objects* it counted (this is something that our Reviewer3 wanted). Our method has the advantage of interpretability of the model weights and activations. To understand a predicted count one can look at the *activations through the component* with a clear interpretation for the activations, i.e.
understanding *how* the model made the decision (though unlike their method, without a set of objects being obtained in the process, but a score for each object between 0 and 1).\", \"In terms of performance, we can make some very rough comparisons with their numbers. The UpDown baseline that they re-implement is the same model architecture as used for the single-model results in our Table 1 by (Teney et al., 2017). This baseline model gets 47.4% accuracy on their dataset, which improves to 49.7% with their method (2.3% absolute improvement). Meanwhile, on number questions (a superset of counting questions, though mostly consisting of counting questions) with the regular VQA dataset, the same model architecture gets 43.9%, which improves to 51.4% with our model, a clearly much larger benefit (7.5%). Part of this is due to a stronger base model, but even then, the stronger baseline we compare against has a number accuracy of 46.6%,meaning that we have an absolute improvement of 4.8% with our model.\", \"The improvement through our model on just the counting questions is even larger as we show in Table 2.\"]}",
"{\"title\": \"Rebuttal Part 2/3: Theory, determinantal point processes and scalability\", \"comment\": \"- Built on a lot of heuristics without theoretical consideration\\n\\nWe disagree that there is no theoretical consideration in our method. Mathematically, we are handling the edge cases for counting correctly and all other behaviour is based on a learned interpolation between the edge cases, guaranteeing at least some sort of sensible counting. It is true that it is not based on a well-established mathematical framework, but we are successfully solving a practical problem with a practical solution. Every step within the component is theoretically justified with what property is necessary in order for the component to produce correct counts, followed by a way of achieving each property. After building up the model this way, it produces perfect counts in the cases when our modeling assumptions hold (the properties that we claimed are necessary lead to this) and the performance degrades gracefully under uncertainties when our modeling assumptions do not hold (another consequence of the properties that we enforce). Thus, we disagree with your claim that these are theoretically unmotivated heuristics; every part of the component has a reasonable, theoretically-based motivation based on what a correct counting mechanism should do. AnonReviewer1 agrees that all steps are reasonably motivated in their review.\\n\\nWe do not think that this paper should be rejected simply because it is does not use an established mathematical model to solve a task. This complaint seems to apply to much of Deep Learning and Computer Vision research in general, not this paper in particular.\\n\\n\\n- Determinantal point process should be able to help\\n\\nThis connection looks interesting, but from the survey it is not clear that it is at all applicable to counting. The mathematics may be well grounded, but the assumptions about diversity seem very ad-hoc.
Thus, we think that using DPPs for counting would be an unjustified heuristic. Our approach may not be grounded in a mathematical formulation, but we are correctly handling edge cases and allowing a suitable interpolation to be learned from data. To our knowledge, DPPs have found very little use in the field of Deep Learning so far. We are only aware of [1] which uses DPPs for model compression, which is evidently unrelated to the task of counting.\\n\\n\\n- Scalability of the proposed method\\n\\nAs stated in section 4.2.2, the time complexity is Theta(n^3) where n is the number of objects used. This can be reduced using the alternative similarity measure that we mention before that to Theta(n^2), though all results reported use the former similarity. The space complexity is Theta(n^2), as the matrices A, D, and C each have n^2 elements. Here are some numbers of our implementation, showing approximate time taken for one training epoch of the whole model on VQA v2 and amount of memory allocated on a Titan X (Pascal) GPU. The times for the low max object counts are very rough and are averaged across a few epochs -- as training time typically changes by about 10 seconds epoch by epoch -- and memory usage can vary slightly between runs.\\n\\nmax objects, time (minutes), memory (MiB)\\n1, 5:50, 3701\\n10 (default), 6:00, 3715\\n25, 6:15, 4095\\n50, 9:50, 7393\\n60, 12:50, 10779\\n\\nAs you can see, increasing the number of objects from 10 to 25 incurs only small additional computational costs and even going to 50 objects, operating on a 2500 entry matrix per example, is still quite reasonable.
We do not claim that the method will be applicable to huge graphs and it is probably not the best mechanism for counting a large number of objects. There are also several ways of reducing the run time for large numbers of objects through e.g. a k-d tree with the reasonable assumption that when there are many objects to count, an object does not overlap with all other objects. The main difficulty of VQA is that most of the time the number of objects to count is relatively small; in contrast, the queries of what objects to count and the spatial relationships between objects can be complex.\"}",
"{\"title\": \"Counting in VQA\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"Summary\", \"This paper mainly focuses on a counting problem in visual question answering (VQA) using attention mechanism. The authors propose a differentiable counting component, which explicitly counts the number of objects. Given attention weights and corresponding proposals, the model deduplicates overlapping proposals by eliminating intra-object edges and inter-object edges using graph representation for proposals. In experiments, the effectiveness of proposed model is clearly shown in counting questions on both a synthetic toy dataset and the widely used VQA v2 dataset.\", \"Strengths\", \"The proposed model begins with reasonable motivation and shows its effectiveness in experiments clearly.\", \"The architecture of the proposed model looks natural and all components seem to have clear contribution to the model.\", \"The proposed model can be easily applied to any VQA model using soft attention.\", \"The paper is well written and the contribution is clear.\", \"Weaknesses\", \"Although the proposed model is helpful to model counting information in VQA, it fails to show improvement with respect to a couple of important baselines: prediction from image representation only and from the combination of image representation and attention weights.\", \"Qualitative examples of intermediate values in counting component--adjacency matrix (A), distance matrix (D) and count matrix (C)--need to be presented to show the contribution of each part, especially in the real examples that are not compatible with the strong assumptions in modeling counting component.\", \"Comments\", \"It is not clear if the value of count \\\"c\\\" is same with the final answer in counting questions.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Rebuttal Part 2/4: Joint fine-tuning of proposals and NMS baseline\", \"comment\": \"- Proposals are not jointly finetuned, did not study recall of the proposals and how sensitive the threshold is\\n\\nCan you clarify what threshold you are referring to in your comment? There is no hard threshold anywhere in our model.\\n\\nThis is certainly a valid concern, but applies to all VQA models using object proposals, not just our counting module. If the loss for counting is biased, then so is the loss for the rest of the model (e.g. \\\"what color is the car\\\" without an object proposal on the car). Joint training is nontrivial, since the architecture that generates the object proposal bounding boxes and features (Faster R-CNN) uses a two stage approach for training anyway and requires ground truth bounding boxes of objects. It is not clear to us whether joint training is at all possible in VQA, since it does not have ground truth bounding boxes available. Empirically, we are getting a substantial improvement in counting performance, so we think that the lack of joint training is not a major issue. This issue is certainly something that can be looked into more in the future, but we do not see it as a shortcoming of our module in particular.\\n\\nWhile we do not think that it is our responsibility to evaluate proposal recall, we are looking into manually labeling a small subset of training examples to get a sense of how much of an issue this is in general. One thing to keep in mind is that the loss pushes the VQA model as a whole to predict a certain answer, not just the counting component itself. That means that a bias towards not recognizing some types of objects (either in the object proposal network or the attention mechanism) can be accounted for by the rest of the model by biasing the count predictions of these types of objects slightly upwards. Bounding boxes that capture different parts of one object (e.g. 
one capturing the upper half of a person and one capturing the lower half of the same person) can also still lead to a correct count prediction if the attention mechanism recognizes that half the attention weight as usual should be given to those. In general, it might be enough for the counting module to produce a sensible prediction as long as some number of bounding boxes cover all the required objects, not necessarily with one box per object.\\n\\n\\n- Doesn't study a simple baseline that just does NMS on the proposal domain\\n\\nThank you for pointing out the lack of this baseline. We agree that this should have been included and we have started running experiments for this. Initial experiments are suggesting that when the counting module is replaced with the one-hot encoded number of objects determined by NMS (we are trying thresholds between 0.3 and 0.7) the performance is not much different, if at all, from the baseline without NMS. This applies to using one of the two attention maps (like the counting module) as well as the sum of the two attention maps (lack of gradient means that the model can't specialize the first attention map to locate the objects to count, so using the sum of the two attention maps might be more reasonable) for scoring the proposals, which suggests that the piecewise constant gradient of NMS is a major issue. Once we have the full results, we will certainly include this information in a revision to the paper.\"}",
"{\"title\": \"Rebuttal Part 3/3: Issues of scale with existing methods, structured attention, and output confidence\", \"comment\": [\"Claim of \\\"without norm\\\" the method doesn't scale to arbitrary numbers.\"], \"to_keep_things_clear\": \"we are talking about methods that apply unnormalized attention weights to a set of feature vectors and trying to count from the resulting single feature vector (or multiple if there are multiple attention glimpses). When referring to not being able to scale to arbitrary numbers, we are referring to numbers even beyond 2. Using this method, scaling the whole feature vector should have the effect of scaling the count to predict, since that is exactly what happens when the attention weights are not normalized and the input is simply duplicated (as per the example in section 3). The issue is that the model has to learn that joint scaling of all features (2048 features per glimpse in our case) is related to count, but scaling of individual features is still related to different levels of expression of that feature in the input. These two properties seem contradictory to us; when feature vectors in the input set can vary from each other greatly, it is unclear to us how the joint scaling of all features property can be learnt at all beyond tiny changes in scale. It also contradicts the common notion of being able to approximately linearly interpolate in the latent space of deep neural networks, since the magnitude of a feature is no longer directly related to the expression of that feature, but depends on the magnitude of all other features in a fairly complex relationship. 
Empirically, using a sigmoid activation or simple averaging across the set of feature vectors without attention has not helped in several previous works, neither for counting nor for overall performance as mentioned at the end of section 3.\\n\\nThus, we highly doubt that sigmoid activation or similar methods that do not normalize attention weights to sum to 1, despite leaving more information about counting in the feature vector than softmax normalization, can lead to feature vectors from which counting can be learned at all. As we discuss in section 2, any improvement in counting you see despite this requires counting information to already be present in the input, which limits the objects that can be counted to things that the object proposal network can distinguish. When saying that it does not scale to arbitrary numbers, we were conservative in that statement in that we can imagine that in special cases it might be possible to learn to relate very small joint feature scaling to counting information, but not generally or in practice.\\n\\nWe realize that this is a slightly alternative explanation than we provide in the paper and will update the paper to make this clearer accordingly.\\n\\n- Insights on why structured attention did not significantly improve the result\\n\\nWith structured attention, each individual glimpse can select a contiguous region of pixels associated to individual objects unlike regular soft attention. However, it still lacks a way of extracting information from the attention map itself, which is necessary for counting as we argue in section 3. In order to attend to multiple objects, multiple glimpses are needed as well. This makes structured attention on pixels very similar to soft attention on object proposals; the structure in the attention acts as an implicit object detector. 
Thus, while structured attention solves one problem with soft attention -- the same that using object proposals solves -- it is not enough to actually count.\\n\\n\\n- Output confidence needs more motivation and theoretical justification\\n\\nWe think that we have provided sufficient motivation for the output confidence in the paper.\", \"here_is_an_expanded_version_of_the_consequences_of_it\": \"The output confidence can learn to suppress the magnitude of the counting features on an example-by-example basis by how close the values of vector a and matrix D are to the ideal values. When for certain values of D and a the predicted count is inaccurate during training, the gradient update reduces the magnitude of the counting features for those values through the output confidence. This lets the counting component learn when it is inaccurate and allows the VQA model using the component to compensate for it instead of blindly trusting that the counting features are always reliable. We found this to be a useful -- though not absolutely necessary -- step that slightly improves counting performance in practice.\", \"additional_references\": \"[1] Zelda Mariet and Suvrit Sra. Diversity Networks: Neural Network Compression using Determinantal Point Processes. In ICLR, 2016.\"}",
"{\"title\": \"Code release\", \"comment\": \"We have released our code at https://github.com/Cyanogenoid/vqa-counting\"}",
"{\"title\": \"Paper revision 1\", \"comment\": [\"We have updated the paper clarifying some things that reviewers maybe misunderstood and added experimental results that the reviewers wanted to see.\", \"Clarified that results in (Chattopadhyay et al., 2017) are not comparable at the end of section 2. (Reviewer3)\", \"Improved explanation of why attention with sigmoid normalization (or similar) produces feature vectors that do not lend themselves to count at all in section 3. (Reviewer2)\", \"Included NMS results in Table 2. (Reviewer3)\", \"Clarified comparisons with (Zhou et al., 2017) in section 5.2.1. (Reviewer3)\", \"Included some qualitative examples of matrices A, D, and C in Appendix E. (Reviewer1)\", \"Explicitly state how the component has use outside of VQA in section 6. (Reviewer2)\"]}",
"{\"title\": \"Rebuttal Part 3/4: Comparison with Chattopadhyay and symmetry breaking\", \"comment\": \"- Doesn't compare experiment numbers with (Chattopadhyay et al., 2017)\\n\\nThere are several major differences that make a direct comparison to the results in their work not useful (we have confirmed these differences with Chattopadhyay).\\n\\n1. They create a subset of the counting question subset of VQA v1, but their model is not trained on it. It is trained on the ~80 000 COCO training images with a ground truth labeling of how many objects there are for each of the 80 COCO classes, in essence giving them ~6 400 000 counts to train with. In contrast, there are only ~50 000 counting questions in the training set of VQA v2 (which is around twice the size of VQA v1), with the added difficulty of the types of objects being arbitrarily complex (e.g. \\\"how many people\\\" vs \\\"how many people wearing brown hats\\\").\\n2. When they evaluate their model on VQA, they select a small subset (roughly 10%--20% of the counting question subset in VQA v1) where the ground-truth count of the COCO class that their NLP processing method extracts from the question is the same as the VQA ground-truth count. During evaluation, they run their method on the input image as usual, and simply use the output corresponding to the extracted class as prediction. This means that they are essentially evaluating on a subset of the COCO data that they previously evaluated on already, or conversely, only using the subset of VQA that basically matches the COCO validation data anyway. We feel that it is a stretch to call this a VQA task, since at no point any VQA is actually performed in their model.\\n3. 
The VQA models are solving a slightly different task: unlike their proposed models, the VQA models are processing a natural language question, which may go wrong for the VQA models but is ensured to be correct for their proposed models (since they discard any examples where their NLP processing scheme gets it wrong). Additionally, VQA models are trained to not only try to answer counting questions, but also other questions.\\n\\nDue to these disadvantages to regular VQA models in their setup, we doubt that the performance of their model can be adequately compared to ours. In order for a comparison to be useful, we would have to train with the same training data that they used, which we feel is too much of a departure from the VQA setting in this paper; the general structure of the models they use doesn't have much to do with VQA models in the first place (their models regress counts for the 80 COCO classes simultaneously, whereas VQA models have an additional input -- the question -- which determines what to count and then classify a count answer).\\n\\nWe agree that superficially their results look related and we will clarify this matter in our next paper revision.\\n\\n\\n- Doesn't handle symmetry breaking\\n\\nWe think that when the goal is to count, it is better for counting performance to not break symmetries, without having the limitation of producing discrete instances. For example, consider the case where there are 4 non-overlapping objects, each with a weight of 1/2. All edges have the same weight, but it is not clear whether there is a sensible way in which the symmetry should be broken here. There is much precedent in Machine Learning for this type of approach, e.g. 
in a mixture of 2 Gaussians model, a sample in-between the two distributions is assigned to each distribution with an equal weight, rather than having a hard assignment of this sample to one distribution or the other.\\n\\nWe agree that having instances has clear benefits over the density map style approach in terms of interpretability. However, we don't think that current attention models are good enough yet, i.e. consistently produce scores either very close to 0 or 1, for an approach with instances to be as accurate as one without. Thus, we think that the density-map-like approach is appropriate for counting and not a problem.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thank you for the review. We are glad to hear that you think that everything is reasonably motivated, the results are good, and that there is a clear contribution with good writing.\\n\\n\\n- Fails to show improvement with respect to a couple of important baselines\\n\\nCan you elaborate on what you mean by \\\"image representation only\\\" and \\\"combination of image representation and attention weights\\\"? We are not sure whether you are referring to existing experiments in the paper or experiments that you would like to see (we are happy to include these baselines if reasonable). Just to clarify the existing baselines that we already compare against: all the models in Table 1 and Table 2 use soft attention with softmax normalization on object proposal features. We did not list models using pixel representations, since they are outperformed by models using object proposals in all question categories of VQA (Anderson et al., 2017). Models with attention have been shown to outperform models without attention many times in the literature (e.g. survey in [1]).\\n\\n\\n- Qualitative examples of matrices A, D, and C are needed\\n\\nThank you for the good idea. We will include some examples of these in a revision of the paper.\\n\\n\\n- Unclear whether c is the same as the predicted answer\\n\\nc is not necessarily the predicted answer; it is just a feature (which gets turned into a linear interpolation of appropriate one-hot encoded vectors as per equation 8) that the answer classifier makes use of. Since not all questions in VQA are counting questions, the model learns how and when to use this feature. The existing model descriptions in 5.1 and 5.2, along with the diagram of the VQA model architecture, should make this clear.\", \"additional_references\": \"[1] Damien Teney, Qi Wu, and Anton van den Hengel. Visual Question Answering: A Tutorial. In IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 63-75. 
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8103161&isnumber=8103076\"}",
"{\"title\": \"Rebuttal Part 4/4: Comparisons to Zhou and CLEVR\", \"comment\": \"- Not much improvement compared to (Zhou et al., 2017)\\n\\nThis is not a like-to-like comparison. Note that their model is an ensemble of 8 models wherein each individual model already performs significantly better than our baseline without counting module, due to the use of their state-of-the-art multimodal pooling method and pre-trained word embeddings. To be precise, their single model has a better overall accuracy by about 1.3%, which widens to a difference of about 3.2% after ensembling (we have only recently obtained their single-model results and will update the paper accordingly to make this clearer). Their single-model also exploits the existing primitive features better and starts with 2.7% better accuracy in number questions (these are the primitive counting features we discuss in section 2). Despite this difference in starting performance, our relatively simple baseline without their elaborate multimodal fusion outperforms their single model by over 2% and even their ensemble by about 0.3% in the number category, just by including the counting component in the model and without ensembling our model. Since their method should improve the quality of attention maps, we expect the benefit of the counting module -- which relies on the quality of attention maps -- to stack with their improvements. Keep in mind that their soft attention uses regular softmax normalization, which means that the limitations with respect to counting that we point out in section 3 apply to their model. We emphasize that the main comparison in Table 1 to make is: the performance on the number category of the baseline with counting module improves substantially compared to the baseline without the counting module and is also the best-ever reported accuracy on number questions. 
This shows that the more detailed results in Table 2 on the validation set are not simply due to hill-climbing on the validation set, since the test set of VQA v2 in Table 1 is only allowed to be evaluated on at most 5 times in total.\\n\\n\\n- More convincing with results on CL\\n\\nMore results are almost always more convincing, but we feel like there is not much value to be gained by additionally evaluating on CLEVR (assuming that you mean CLEVR with CL) and there is a limited amount of experiments that we can put in a paper. This is mainly due to our use of bounding boxes -- non-standard for this dataset and thus making comparisons to existing work less useful -- and our focus on being able to count in the difficult setting demanded of by VQA v2: noisy attention maps (due to language and attention model with free-form human-posed questions) and noisy bounding boxes overlapping in complex ways (due to object proposal model on real images). These would be present in CLEVR to some extent as well, but in terms of synthetic tasks, we think that it is more useful for us to study counting behaviour on our toy dataset and in terms of VQA tasks, VQA v2 is more suitable for showing the benefits of our module than CLEVR.\", \"additional_references\": \"[1] Damien Teney, Qi Wu, and Anton van den Hengel. Visual Question Answering: A Tutorial. In IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 63-75. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8103161&isnumber=8103076\\n[2] Ilija Ilievski and Jiashi Feng. Multimodal Learning and Reasoning for Visual Question Answering. In NIPS, 2017. http://papers.nips.cc/paper/6658-multimodal-learning-and-reasoning-for-visual-question-answering.pdf\\n[3] Damien Teney, Lingqiao Liu, and Anton van den Hengel. Graph-Structured Representations for Visual Question Answering. In CVPR, 2017. http://openaccess.thecvf.com/content_cvpr_2017/papers/Teney_Graph-Structured_Representations_for_CVPR_2017_paper.pdf\"}"
]
} |
Sk1NTfZAb | Key Protected Classification for GAN Attack Resilient Collaborative Learning | [
"Mert Bülent Sarıyıldız",
"Ramazan Gökberk Cinbiş",
"Erman Ayday"
] | Large-scale publicly available datasets play a fundamental role in training deep learning models. However, large-scale
datasets are difficult to collect in problems that involve processing of sensitive information.
Collaborative learning techniques provide a privacy-preserving solution in such cases, by enabling
training over a number of private datasets that are not shared by their owners.
Existing collaborative learning
techniques, combined with differential privacy, are shown to be resilient against a passive
adversary which tries to infer the training data only from the model parameters. However, recently, it has
been shown that the existing collaborative learning techniques are vulnerable to an active adversary that runs a GAN
attack during the learning phase. In this work, we propose a novel key-based collaborative learning technique that is
resilient against such GAN attacks. For this purpose, we present a collaborative learning formulation in which class scores
are protected by class-specific keys, thereby preventing a GAN attack. We also show that
very high dimensional class-specific keys can be utilized to improve robustness against attacks, without increasing the model complexity.
Our experimental results on two popular datasets, MNIST and AT&T Olivetti Faces, demonstrate the effectiveness of the proposed technique
against the GAN attack. To the best of our knowledge, the proposed approach is the first collaborative learning
formulation that effectively tackles an active adversary, and, unlike model corruption or differential privacy formulations,
our approach does not inherently feature a trade-off between model accuracy and data privacy. | [
"privacy preserving deep learning",
"collaborative learning",
"adversarial attack"
] | Reject | https://openreview.net/pdf?id=Sk1NTfZAb | https://openreview.net/forum?id=Sk1NTfZAb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SkS7qR3-M",
"B197fWcbf",
"BJMyLesGG",
"Hyku30n-G",
"r1qWlNtlM",
"SJyQ2wqlf",
"S1ZKptezz",
"SkMMwxszG",
"SJ227bqbM",
"S1xr_5abG",
"SJ43M9ZMz",
"B1xjZwmbz",
"S1s8B1aBf",
"S1B5euOZz",
"ry0o8eofG",
"SyOgBeizf"
],
"note_type": [
"official_review",
"comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"decision",
"comment",
"official_comment",
"official_comment"
],
"note_created": [
1513052701418,
1512866338514,
1513977305548,
1513053287184,
1511763970510,
1511844886817,
1513295224925,
1513977609709,
1512866740131,
1513101368384,
1513362091569,
1512432023641,
1517249875226,
1512763532685,
1513977510172,
1513977071898
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper997/AnonReviewer4"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper997/Authors"
],
[
"ICLR.cc/2018/Conference/Paper997/AnonReviewer4"
],
[
"ICLR.cc/2018/Conference/Paper997/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper997/AnonReviewer2"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper997/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper997/Authors"
],
[
"ICLR.cc/2018/Conference/Paper997/Authors"
]
],
"structured_content_str": [
"{\"title\": \"The paper is unclear and needs more work\", \"rating\": \"3: Clear rejection\", \"review\": \"In this paper, the authors proposed a countermeasure to protect collaborative training of DNNs against the GAN attack in (Hitaj et al. 2017). The motivation of the paper is clear and so is the literature review. But for me the algorithm is not clearly defined and it is difficult to evaluate how the proposed procedure works. I am not saying that this is not the solution. I am just saying that the paper is not clear enough to say that it is (or it is not). From my perspective this will make the paper a clear reject.\\n\\nI think the authors should explain a few things more clearly in order to make the paper foolproof. The first one seems to me the clearest problem with the approach proposed in the paper:\\n\\n1 $\\psi(c)$ defines the mapping from each class to a high dimensional vector that allows protection against the GAN attack. $\\psi(c)$ is supposed to be private for each class (or user if each class belongs only to one user). This is the key aspect in the paper. But if more than one user has the same class they will need to share this key. Furthermore, at test time, these keys need to be known by everyone, because the output of the neural network needs to be correlated against all keys to see which is the true label. Of course the keys can only be released after the training is completed. But the adversary can also claim to have examples from the class it is trying to attack and hence the legitimate user that generated the key will have to give the attacker the key from the training phase. For example, let us assume the legitimate user only has ones from MNIST and declares that it only has one class. The attacker says it has two classes: the same one as the legitimate user and some other label. In this case the legitimate user needs to share $\\psi(c)$ with the attacker. 
Of course this sounds \\u201cfishy\\u201d and might be a way of finding who the attacker is, but there might be many cases in which it makes sense that two or more users share the same labels, and in a big system it might be complicated to decide who has access to which key.\\n\\n2 I do not understand the definition of $\\phi(x)$. Is this embedding fixed for each user? Is this embedding the DNN? In Eq. 4 I would assume that $\\phi(x)$ is the DNN and that it should be $\\phi_\\theta(x)$, because otherwise the equation does not make sense. But this is not clearly explained in the paper and Eq 4 makes no sense at all. In a way the solution to the maximization in Eq 4 is $\\theta=\\infty$. Also the term $\\phi(x)$ is not mentioned in the paper after page 5. My take is that the authors want to maximize the inner product, but then the regularizer should go the other way around.\", \"3_in_the_paper_in_page_5_we_can_read\": \"\\u201cHere, we emphasize the first reason why it is important to use l2-normalized class keys and embedding outputs: in this manner, the resulting classification score is by definition restricted to the range [-1; +1],\\u201d If I understand correctly the authors are dividing the inner product by $||\\psi(c)|| \\, ||\\phi(x)||$. I can see that we can easily divide by $||\\psi(c)||$, but I cannot see how we can divide by $||\\phi(x)||$, if this term depends on $\\theta$. If this term does not depend on $\\theta$, then Eq 4 does not make sense.\\n\\nTo summarize, I have the impression that there are many elements in the paper that do not make sense in the way that they are explained, and that the authors need to present the paper in a way that can be easily understood and replicated. I recommend the authors to run the paper by someone in their circle that could help them rewrite the paper in a way that is more accessible.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Misleading\", \"comment\": \"The Hitaj et al. CCS'17 paper is misleading; only upon close scrutiny does one realize that, when they refer to differential privacy, they mean with crazy parameters.\\n\\nIt's analogous to claiming that RSA cryptography is broken and then only on page 3 clarifying that what you really mean is that RSA with 16-bit keys is susceptible to a brute force factoring attack. \\n\\nIn particular, the above quote from this submission does not clarify this issue. It says \\\"differential privacy fails to prevent the attack\\\" without providing details. This is on its face false, as the default interpretation is \\\"differential privacy with reasonable parameters.\\\"\"}",
"{\"title\": \"Update\", \"comment\": \"In the revised version of the paper, we address the problem of sharing samples for a common class. We have added a new section (Section 5.5) where we discuss and empirically verify that participants may have training examples of overlapping classes without sharing their private keys.\\n\\nWe have also added new attack results for MNIST in Figure 6, showing that there can be multiple attackers in CLF (indeed, every participant can be an attacker). In such cases, the GAN attacks still fail without damaging the learning process. The reconstructions show that generators trained by attackers can capture the likelihood of data given the guessed key; however, these likelihoods are far from the data distributions of the handwritten digits, which is the expected outcome of our methodology and reflects our success.\\n\\nFurthermore, we discuss how we benefit from the fixed layer in Section 5.4. By using a fixed layer, we are able to control the complexity of local models, which is crucial in preventing participants from overfitting their local datasets in one epoch of local training.\"}",
"{\"title\": \"Really?\", \"comment\": \"This reviewer does not have a problem with the paper under study, but believes that the Hitaj et al. paper is wrong.\\n\\nMy take is that this review should be removed, because it is only concerned with the validity of an already published work, and they should talk to the CCS'17 committee about it. \\n\\nAlso, the code for Hitaj et al. 2017 is available; if the reviewer thinks the parameters are incorrectly set, they should work with the code to show that the authors maliciously played with the parameters and publish a paper or a blog showing why it does not work. The blog link above does not do that. I think this is the best way to show that Hitaj et al. is not valid. But trashing other conferences with grievances is an old technique that some people use all too frequently and it is becoming really tiring.\"}",
"{\"title\": \"The weak assumption on the adversary undermines the usefulness of the protection scheme\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper is a follow-up work to the CCS'2017 paper on the GAN-based attack on collaborative learning systems where multiple users contribute their private and sensitive data to joint learning tasks. In order to avoid the potential risk of the adversary's mimicry based on information flow among distributed users, the authors propose to embed the class label into a multi-dimensional space, such that the joint learning is conducted over the embedding space without knowing the exact representation of the classes. Under the assumption that the adversary can only generate fake and random class representations, they show their scheme is capable of hiding information from individual samples, especially over image data.\\n\\nThe paper is clearly written and easy to understand. The experiments show interesting results, which are particularly impressive with the face data. However, the reviewer feels the assumption on the adversary is generally too weak, such that a slightly smarter adversary could circumvent the protection scheme and remain effective at sample recovery.\\n\\nBasically, instead of randomly guessing the representations of the classes from other innocent users, the adversary could apply a GAN to learn the representation based on the feedback from these users. This can easily be done by including the embedding-space representations among the parameters that the GAN learns.\\n\\nThis paper could be an interesting work, if the authors address such enhanced attacks from the adversary and present protection results over their existing experimental settings.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea to mitigate the GAN attack\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Collaborative learning has been proposed as a way to learn over federated data while preserving privacy. However collaborative learning has been shown to be susceptible to active attacks in which one of the participants uses a GAN to reveal information about another participant.\\n\\nThis paper proposes a collaborative learning framework (CLF) that mitigates the GAN attack. The framework involves using the neural net to learn a mapping of the input to a high-dimensional vector and computing the inner product of this vector to a random class-specific key (the final class prediction is the argmax of this inner product). The class-specific key can be chosen randomly by each participant. By choosing sufficiently long random keys, the probability of an attacker guessing the key can be reduced. Experiments on two datasets show that this scheme successfully avoids the GAN attack.\\n \\n1. Some of the details of key sharing are not clear and would appear to be important for the scheme to work. For example, if participants have instances associated with the same class, then they would need to share the key. This would require a central key distribution scheme which would then allow the attacker to also get access to the key.\\n\\n2. I would have liked to see how the method works with an increasing fraction of adversarial participants (I could only see experiments with one adversary). Similarly, I would have liked to see experiments with and without the fixed dense layer to see its contribution to effective learning.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"RE: Misleading\", \"comment\": \"But that's the point, OTHERS have used RSA with 16-bit keys and Hitaj et al. CCS'17 show this is ill-considered. It's an attack paper, no new scheme is proposed. It is reported that properly set DP will thwart these attacks (but at the cost of utility, see the conclusions).\"}",
"{\"title\": \"Update\", \"comment\": \"We address the problem of sharing samples for a common class in the revised version of the paper. We have added a new section (Section 5.5) where we discuss and empirically verify that participants may have training examples of overlapping classes without sharing their private keys. (Taken partially from our answer to AnonReviewer2.)\\n\\nThank you very much for pointing out the ambiguity in the formulation. It has been corrected now.\\n\\nSince \\\\phi_{\\\\theta}(.) is a deterministic mapping that outputs a vector, we just compute the L2 norm of the output vector, simply as a function of the output vector.\"}",
"{\"title\": \"related work\", \"comment\": \"Differential privacy is tangential to the work in this submission and the flaws of the Hitaj et al. paper should not be held against it.\\n\\nI am commenting because the quote about the related work needs to be clarified. Both Shokri & Smatikov and Hitaj et al. use differential privacy with extremely large parameters, which render it meaningless.\"}",
"{\"title\": \"Relevance\", \"comment\": \"This submission makes a false statement. It is mathematically impossible to reconstruct training examples while satisfying differential privacy. That statement needs to be corrected. And it is relevant to the motivation for this work.\\n\\nI did not mean to start a debate about the Hitaj et al. paper. My comment is only about the false statement in this submission, which is justified by citing the Hitaj et al. paper.\"}",
"{\"title\": \"\\\"differential privacy fails to prevent the attack\\\"\", \"comment\": \"The above statement in this paper is false or, at best, misleading. The fact that it is attributed to someone else, doesn't change that.\"}",
"{\"title\": \"Differential Privacy\", \"comment\": \"This paper states (page 2, second paragraph):\\n\\nHowever, it has recently been shown that [collaborative learning frameworks (CLFs)] can be vulnerable to not only passive attacks, but also much more powerful active attacks, i.e., training-time attacks, for which the CLF with differential privacy fails to prevent the attack and there is no known prevention technique in general (Hitaj et al., 2017). More specifically, a training participant can construct a generative adversarial network (GAN) (Goodfellow et al., 2014) such that its GAN model learns to reconstruct training examples of one of the other participants over the training iterations. \\n\\nThis is given as the motivation for this work, but this statement is very flawed. Hitaj et al. do not \\\"break\\\" differential privacy. The problem is that they use differential privacy with extremely large parameter values, which yields a meaningless privacy guarantee.\\n\\nFrank McSherry has posted a detailed critique of the Hitaj et al. paper here:\", \"https\": \"//github.com/frankmcsherry/blog/blob/master/posts/2017-10-27.md\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"While the reviewers feel there might be some merit to this work, they find enough ambiguities and inaccuracies that I think this paper would be better served by a resubmission.\"}",
"{\"title\": \"RE: Differential Privacy\", \"comment\": \"The CCS\\u201917 (Hitaj et al.) paper mentions several times they don't \\\"break\\\" DP or use DP in any way, but they show that DP is inadequate when epsilon is large (as used and implemented by others) or at the record level. See throughout the paper (https://acmccs.github.io/papers/p603-hitajA.pdf) and the conclusions in particular.\\n\\nSo the blog misses several crucial points and this paper (\\\"Key Protected Classification\\u2026 \\u201c) also provides clear evidence of the privacy risks of CLFs.\"}",
"{\"title\": \"We need more explanation for your concerns\", \"comment\": \"In our approach, we protect participants by hiding class scores from any other participant in CLF. For this purpose, we let participants create private keys for their local training classes. Please note that private keys are completely randomly distributed, and participants do not share any information about their keys throughout training. (The revised paper, we believe, explains the procedure much more clearly.)\\n\\nTherefore, we do not see how a GAN attack without a guidance score or feedback signal can be executed to reconstruct the private class keys. \\n\\nWe will be more than happy to discuss if you can elaborate on this objection.\"}",
"{\"title\": \"Clarification on DP\", \"comment\": \"We thank you for the interesting comments and suggestions in this thread. We have just published the comprehensively revised paper where we have removed all of the controversial arguments regarding differential privacy (DP), as suggested by the reviewers.\\n \\nOur paper, however, is not (directly) about DP: we show that our proposed approach allows privacy-preserving collaborative training without introducing DP or other techniques that corrupt model parameters / parameter updates with noise injection. More importantly, our CLF formulation is resilient against active GAN attacks (Hitaj et al. 2017).\\n \\nIn more detail, there are two main reasons why we think our approach is of significance:\\n \\n(1) DP typically requires making a difficult trade-off decision between model accuracy and privacy. In particular, the privacy budget per parameter plots in Shokri et al. (2015) show that in order to reach an acceptable (90%) level of test-set accuracy on MNIST, one may need to use very high \\\"epsilon\\\" values (i.e. very low noise), which may significantly reduce the effectiveness of DP in terms of privacy preservation. Our approach does not necessarily involve such a trade-off between privacy and accuracy (except that using excessively high-dimensional class keys may lead to issues during training).\\n \\n(2) Our approach protects CLF against GAN attacks (Hitaj et al. 2017), which can be difficult to avoid using DP, without (significantly) sacrificing the classification accuracy. \\n \\nTherefore, in summary, what we propose is not built upon DP; instead, it can be seen as a new and alternative approach for privacy preserving collaborative training that builds upon participant-specific keys, as opposed to hiding information through mixing model updates/parameters with noise.\"}"
]
} |
HyHmGyZCZ | Comparison of Paragram and GloVe Results for Similarity Benchmarks | [
"Jakub Dutkiewicz",
"Czesław Jędrzejek"
] | Distributional Semantics Models (DSM) derive word space from linguistic items
in context. Meaning is obtained by defining a distance measure between vectors
corresponding to lexical entities. Such vectors present several problems. This
work concentrates on quality of word embeddings, improvement of word embedding
vectors, applicability of a novel similarity metric used ‘on top’ of the
word embeddings. In this paper we provide comparison between two methods
for post process improvements to the baseline DSM vectors. The counter-fitting
method which enforces antonymy and synonymy constraints into the Paragram
vector space representations recently showed improvement in the vectors’ capability
for judging semantic similarity. The second method is our novel RESM
method applied to GloVe baseline vectors. By applying the hubness reduction
method, implementing relational knowledge into the model by retrofitting synonyms
and providing a new ranking similarity definition RESM that gives maximum
weight to the top vector component values we equal the results for the ESL
and TOEFL sets in comparison with our calculations using the Paragram and Paragram
+ Counter-fitting methods. For the SIMLEX-999 gold standard, since we cannot
use the RESM, the results using GloVe and PPDB are significantly worse compared
to Paragram. Apparently, counter-fitting corrects hubness. The Paragram
or our cosine retrofitting method are state-of-the-art results for the SIMLEX-999
gold standard. They are 0.2 better for SIMLEX-999 than word2vec with sense
de-conflation (that was announced to be the state-of-the-art method for less reliable
gold standards). Apparently relational knowledge and counter-fitting is more important
for judging semantic similarity than sense determination for words. It is to
be mentioned, though that Paragram hyperparameters are fitted to SIMLEX-999
results. The lesson is that many corrections to word embeddings are necessary
and methods with more parameters and hyperparameters perform better.
| [
"language models",
"vector spaces",
"word embedding",
"similarity"
] | Reject | https://openreview.net/pdf?id=HyHmGyZCZ | https://openreview.net/forum?id=HyHmGyZCZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BJR58kaBf",
"SJcyMXTmM",
"SJWbIA3eG",
"S1ZbRMqlM",
"HJmKXVcgz"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1517250198002,
1515168225774,
1512003064978,
1511824888927,
1511830395975
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper476/Authors"
],
[
"ICLR.cc/2018/Conference/Paper476/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper476/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper476/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper proposes a method for refining distributional semantic representation at the lexical level. The reviews are fairly unanimous in that they found both the initial version of the paper, which was deemed quite rushed, and the substantial revision unworthy of publication in their current state. The weakness of both the motivation and the experimental results, as well as the lack of a clear hypothesis being tested, or of an explanation as to why the proposed method should work, indicates that this work needs revision and further evaluation beyond what is possible for this conference. I unfortunately must recommend rejection.\"}",
"{\"title\": \"The comments for all reviewers\", \"comment\": \"The original paper was very significantly changed, expanded (8,5 instead 6 pages). This work concentrates on quality of word embeddings, improvement of word embedding vectors, applicability of a novel similarity metric used \\u2018on top\\u2019 of the word embeddings. The comparison of our cosine retrofitting to Paragram + Counterfitting for SIMLEX -999; and our RESM + cosine retrofitting to Paragram was done.\", \"in_particular_this_revision_provides_the_following\": \"1.\\tImproves the clarity of the original version by almost twice as many experimental details; also in the area of what is state-of-the-art and what is not (using reliable gold standards, and concentrating on absolute results rather than on result changes often caused by a single effect).\\n2. Removes a major deficiency of the original paper by including and addressing the Paragram and Paragram + Counter-fitting methods\\u2019 results.\\n3. Adds all references that were considered necessary by reviewers. It is not that we were not aware of most of them. Notice that there was a one page limit on references. It seems we were one of a very few to obey this rule.\\n4. In addition to TOEFL and ESL we included the SIMLEX-999 standard. We consider them the only reliably annotated sets at the moment for two reasons already mentioned by [1].\\n5. The main results in Table 3 were augmented by Paragram, and Paragram + Counter-fitting methods and the multi-sense aware methods (Pilehvar and Navigli) .\", \"there_are_many_important_conclusions_reached_in_this_paper\": \"mostly many corrections to word embeddings are necessary for state-of-the-art results, and methods with more parameters and hyperparameters perform better.\\n\\n\\n[1] Hill, Reichart, and Korhonen. Simlex-999: Evaluating semantic models with (genuine)\\nsimilarity estimation. Computational Linguistics, , 2015.\"}",
"{\"title\": \"A set of retrofitting methods for measuring lexical similarity\", \"rating\": \"3: Clear rejection\", \"review\": \"I hate to say that the current version of this paper is not ready, as it is poorly written. The authors present some observations of the weaknesses of the existing vector space models and list a 6-step approach for refining existing word vectors (GloVe in this work), and test the refined vectors on 80 TOEFL questions and 50 ESL questions. In addition to the incoherent presentation, the proposed method lacks proper justification. Given the small size of the datasets, it is also unclear how generalizable the approach is.\", \"pros\": \"1. Experimental study on retrofitting existing word vectors for ESL and TOEFL lexical similarity datasets\", \"cons\": \"1. The paper is poorly written and the proposed methods are not well justified.\\n 2. Results on tiny datasets\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Main point of paper is unclear and unproven\", \"rating\": \"2: Strong rejection\", \"review\": \"The paper suggests taking GloVe word vectors, adjusting them, and then using a non-Euclidean similarity function between them. The idea is tested on very small data sets (80 and 50 examples, respectively). The proposed techniques are a combination of previously published steps, and the new algorithm fails to reach state-of-the-art on the tiny data sets.\\n\\nIt isn't clear what the authors are trying to prove, nor whether they have successfully proven what they are trying to prove. Is the point that GloVe is a bad algorithm? That these steps are general? If the latter, then the experimental results are far weaker than what I would find convincing. Why not try on multiple different word embeddings? What happens if you start with random vectors? What happens when you try a bigger data set or a more complex problem?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes a ranking-based similarity metric for distributional semantic models. The main idea is to learn \\\"baseline\\\" word embeddings, retrofitting those and applying localized centering, to then calculate similarity using a measure called \\\"Ranking-based Exponential Similarity Measure\\\" (RESM), which is based on the recently proposed APSyn measure.\", \"i_think_the_work_has_several_important_issues\": \"1. The work is very light on references. There is a lot of previous work on evaluating similarity in word embeddings (e.g. Hill et al, a lot of the papers in RepEval workshops, etc.); specialization for similarity of word embeddings (e.g. Kiela et al., Mrksic et al., and many others); multi-sense embeddings (e.g. from Navigli's group); and the hubness problem (e.g. Dinu et al.). For the localized centering approach, Hara et al. introduced that method. None of this work is cited, which I find inexcusable.\\n\\n\\n2. The evaluation is limited, in that the standard evaluations (e.g. SimLex would be a good one to add, as well as many others, please refer to the literature) are not used and there is no comparison to previous work. The results are also presented in a confusing way, with the current state of the art results separate from the main results of the paper. It is unclear what exactly helps, in which case, and why.\\n\\n\\n3. There are technical issues with what is presented, with some seemingly factual errors. For example, \\\"In this case we could apply the inversion, however it is much more convinient [sic] to take the negative of distance. Number 1 in the equation stands for the normalizing, hence the similarity is defined as follows\\\" - the 1 does not stand for normalizing, that is the way to invert the cosine distance (put differently, cosine distance is 1-cosine similarity, which is a metric in Euclidean space due to the properties of the dot product). Another example, \\\"are obtained using the GloVe vector, not using PPMI\\\" - there are close relationships between what GloVe learns and PPMI, which the authors seem unaware of (see e.g. the GloVe paper and Omer Levy's work).\\n\\n\\n4. Then there is the additional question, why should we care? The paper does not really motivate why it is important to score well on these tests: these kinds of tests are often used as ways to measure the quality of word embeddings, but in this case the main contribution is the similarity metric used *on top* of the word embeddings. In other words, what is supposed to be the take-away, and why should we care?\\n\\nAs such, I do not recommend it for acceptance - it needs significant work before it can be accepted at a conference.\", \"minor_points\": [\"Typo in Eq 10\", \"Typo on page 6 (/cite instead of \\\\cite)\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
SkfNU2e0Z | Statestream: A toolbox to explore layerwise-parallel deep neural networks | [
"Volker Fischer"
] | Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture. The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We layout basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks. | [
"model-parallel",
"parallelization",
"software platform"
] | Reject | https://openreview.net/pdf?id=SkfNU2e0Z | https://openreview.net/forum?id=SkfNU2e0Z | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"H1VGzc9zf",
"SJMlG95Mf",
"HkeBFwYgf",
"r1ka-99zM",
"SJ1tsSFgf",
"B1KY-MqgG",
"rkY6EkarG"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"decision"
],
"note_created": [
1513951756198,
1513951722502,
1511778616475,
1513951672740,
1511770999029,
1511821696948,
1517249728912
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper391/Authors"
],
[
"ICLR.cc/2018/Conference/Paper391/Authors"
],
[
"ICLR.cc/2018/Conference/Paper391/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper391/Authors"
],
[
"ICLR.cc/2018/Conference/Paper391/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper391/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Comment on reviews\", \"comment\": \"Please see the comment below the first review.\"}",
"{\"title\": \"Comment on reviews\", \"comment\": \"Please see the comment below the first review.\"}",
"{\"title\": \"A potentially interesting toolbox not supported by enough examples\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper introduces a new toolbox for deep neural network learning and evaluation. The central idea is to include time in the processing of all the units in the network. For this, the authors propose a paradigm switch: from layerwise-sequential networks, where at every time frame the network is evaluated by updating each layer \\u2013 from bottom to top \\u2013 sequentially; to layerwise-parallel networks, where all the neurons are updated in parallel. The new paradigm implies that the layer update is achieved by using the stored previous state and the corresponding previous state of the previous layer. This has three consequences. First, every layer now uses memory, a condition that already applies for RNNs in layerwise-sequential networks. Second, in order to have a consistent output, the information has to flow in the network for a number of time frames equal to the number of layers. In Neuroscience, this concept is known as reaction time. Third, since the network is not synchronized in terms of the information that is processed in a specific time frame, there are discrepancies w.r.t. the layerwise-sequential networks computation: all the techniques used to train deep NNs have to be reconsidered.\\n\\nOverall, the concept is interesting and timely especially for the rising field of spiking neural networks or for large and distributed architectures. The paper, however, should probably provide more examples and results in terms of architectures that can be implemented with the toolbox in comparison with other toolboxes. The paper presents a single example in which neither the accuracy nor the training time is reported. While I understand that the main result of this work is the toolbox itself, more examples and results would improve the clarity and the implications for such a paradigm switch. Another concern comes from the choice to use Theano as back-end, since it's known that it is going to be discontinued. Finally, I suggest improving the clarity and description of Figure 2, which is messy and confusing especially if printed in B&W.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Comment on reviews\", \"comment\": \"We thank the reviewers for their feedback on our work. Considering that our responses across reviewers greatly overlapped, we only wrote one comment and put it under the first with a brief note below the other two reviews.\\n\\nOne major concern across reviewers is the lack of compelling examples. We understand and share this concern. Because we experienced some difficulties in the past explaining the general idea / concept of layerwise parallel networks, we chose to introduce and compare (on a textual level) the two approaches and their implications at some length. On the basis of the reviewers' summaries, we think the core idea is well explained (we will try to improve Fig. 1 in the future). Another goal of the paper is to raise awareness inside the community that there are ways to integrate time into networks which are better suited to bridge the gap between spiking and current deep networks than the ones currently used (e.g. rollout or convolution over time). \\n\\nWhile we were able to integrate TensorFlow support for our toolbox (dependence solely on Theano was a concern of two reviewers), we cannot provide meaningful additional examples in the scope of this submission for several reasons: time, pending IP concerns, open technical details, sufficient presentation quality, page restriction.\\n\\nAgain, we want to thank the reviewers for their effort and fair feedback.\"}",
"{\"title\": \"The paper describes a toolbox for parallel neuron updating written in Theano.\", \"rating\": \"3: Clear rejection\", \"review\": \"Quality and clarity\\n\\nThe paper goes to some length to explain that update order in a neural network matters in the sense that different update orders give different results. While standard CNN-like architectures are fine with the layer parallel updating process typically used in standard tools, for recurrent networks and also for networks with connections that skip layers, different update orders may be more natural, but no GPU-accelerated toolboxes exist that support this. The authors provide such a toolbox, statestream, written in Theano.\\n\\nThe paper's structure is reasonably clear, though the text has very poor \\\"flow\\\": the English could use a native speaker straightening out the text. For example, a number of times there are phrases like \\\"previously mentioned\\\", which is ugly. \\n\\nMy main issue is with the significance of the work. There are no results in the paper that demonstrate a case where it is useful to apply fully parallel updates. As such, it is hard to see the value of the contribution, also since the toolbox is written in Theano for which support has been discontinued.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of \\\"STATESTREAM: A TOOLBOX TO EXPLORE LAYERWISE-PARALLEL DEEP NEURAL NETWORKS\\\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"In this paper, the authors present an open-source toolbox to explore layerwise-parallel deep neural networks. They offer an interesting and detailed comparison of the temporal progression of layerwise-parallel and layerwise-sequential networks, and differences that can emerge in the results of these two computation strategies.\\n\\nWhile the open-source toolbox introduced in this paper can be an excellent resource for the community interested in exploring these networks, the present submission offers relatively few results actually using these networks in practice. In order to make a more compelling case for these networks, the present submission could include more detailed investigations, perhaps demonstrating that they learn differently or better than other implementations on standard training sets.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper presents a toolbox for the exploration of layerwise-parallel deep neural networks. The reviewers were consistent in their analysis of this paper: it provided an interesting class of models which warranted further investigation, and that the toolbox would be useful to those who are interested in exploring further. However, there was a lack of convincing examples, and also some concern that Theano (no longer maintained) was the only supported backend. The authors responded to say that they had subsequently incorporated TensorFlow support, they were not able to provide any more examples due to several reasons: \\u201ctime, pending IP concerns, open technical details, sufficient presentation quality, page restriction.\\u201d I agree with the consensus reached by the reviewers.\"}"
]
} |
SyBBgXWAZ | Optimal transport maps for distribution preserving operations on latent spaces of Generative Models | [
"Eirikur Agustsson",
"Alexander Sage",
"Radu Timofte",
"Luc Van Gool"
] | Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian.
After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample.
In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation.
Our experimental results validate that the proposed operations give higher quality samples compared to the original operations. | [
"Generative Models",
"GANs",
"latent space operations",
"optimal transport"
] | Reject | https://openreview.net/pdf?id=SyBBgXWAZ | https://openreview.net/forum?id=SyBBgXWAZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HyBft3dgM",
"S1jQT_G7z",
"SJ13MSaxf",
"S1w_ryprM",
"SJuG7tqxz",
"Bk6VytfXz",
"Bylsiz3QG",
"H1EeeKM7G"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1511733516628,
1514470690652,
1512030887166,
1517249902827,
1511850768225,
1514471220993,
1515101080331,
1514471403979
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper1101/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1101/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper1101/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2018/Conference/Paper1101/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Evaluation Metric and Actual Problem Being Solved Unclear\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Authors note that models may be trained for a certain distribution (e.g. uniform or Gaussian) but then \\\"used\\\" by interpolating or jittering known examples, which has a different distribution. While the authors are clear about the fact that this is a mismatch, I did not find it well-motivated why it was \\\"the right thing to do\\\" to match the training prior, given that the training prior is potentially not at all representative or relevant. The fact that a Gaussian/prior distribution is used in the first place seems like a matter of convenience rather than it being the \\\"right\\\" distribution for the problem goals, and that makes it less clear that it's important to match this \\\"convenience\\\" distribution. The key issue I had throughout is \\\"what is the real-world problem metric or evaluation criteria and how does this proposal directly help\\\"?\\n\\nFor example, authors cover the usual story that random Gaussian examples lie on a thin sphere shell in high-d space, and thus interpolation of those examples will lie on a thin shell of slightly less radius. In contrast, the Uniform distribution on a hypercube [-1,1]^D in D dimensions \\\"looks\\\" like a sharp-pointy star with 2^D sharp points and all the mass in those 2^D corners. But the key question is, what are these examples being used for, and what are the trade-offs between interpolation (which tends to be fairly safe) and extrapolation of the given examples?\\n\\nThis is echoed in the experiments, which I found unsatisfactory for the same key issue: \\\"What is the criteria for \\u201chigher-quality interpolated samples\\u201d? In the examples they give, it seems to be the sharpness of the images. Is that realistic/relevant? These are pretty images, but the evaluation criteria is unclear.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to review\", \"comment\": [\"Thanks for the feedback!\", \"We followed your suggestion in the major comment and significantly polished and shortened the paper.\", \"as suggested, we focus on explaining the effect of distribution mismatch through the norm distribution, moving unnecessary details to the appendix.\", \"we moved Lemma 1 to appendix as well as the detailed calculations of the examples, while summarizing the Gaussian case in Table 1.\", \"We now mention how simple the formulas end up in the Gaussian case. This is because the operators we consider are additive in the samples, which means the results of the operations are still Gaussian - requiring only a multiplicative adjustment for matching the variance.\", \"Working on the hypersphere is also a valid approach. This setting is very similar to our framework applied to the Gaussian prior when taking the prior dimension towards infinity - and the projection to the sphere can be interpreted as the transport map. Note however by fixing points to lie exactly on the sphere one introduces a dependency between the coordinates (which means you can't do distribution matching coordinate-wise), but this dependency is very small since an i.i.d. Gaussian will already be on the sphere w.h.p. We actually tried this setting at some point before, but found it (surprisingly) less stable for DCGAN, e.g. resulting in collapse for the icon dataset.\", \"We adjust the motivation, as you mention interpolations and other operations are interesting on their own, and overfitting can be measured through other means.\", \"on VAEs vs GANs, we are currently only discussing the sampling in the test setting - where one only samples from p(z) ( see Figure 5 in https://arxiv.org/pdf/1606.05908.pdf )\", \"Typos/inconsistencies should now be fixed\", \"We added plots showing the 1D-to-1D monotone transport maps for Uniform and Gaussian, see Figure 3 revised edition.\", \"We will add a citation to David MacKay for the mass distribution of a Gaussian. However we didn't find a nice reference which gives the same result for arbitrary distributions with i.i.d components.\", \"In Figure 15 in the appendix, we show example interpolations with twice as many points, so the transition is clearer. We note that the color may change sharply when interpolating between examples if the inbetween color is not 'realistic' for the data.\"]}",
"{\"title\": \"A clear and detailed explanation of a problem which arises when manipulating latent space samples for GANs and VAEs, and a novel solution using heavy machinery but which is simple to apply in practice.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors demonstrate experimentally a problem with the way common latent space operations such as linear interpolation are performed for GANs and VAEs. They propose a solution based on matching distributions using optimal transport. Quite heavy machinery to solve a fairly simple problem, but their approach is practical and effective experimentally (though the gain over the simple SLERP heuristic is often marginal). The problem they describe (and so the solution) deserves to be more widely known.\", \"major_comments\": \"The paper is quite verbose, probably unnecessarily so. Firstly, the authors devote over 2 pages to examples that distribution mismatches can arise in synthetic cases (section 2). This point is well made by a single example (e.g. section 2.2) and the interesting part is that this is also an issue in practice (experimental section). Secondly, the authors spend a lot of space on the precise derivation of the optimal transport map for the uniform distribution. The fact that the optimal transport computation decomposes across dimensions for pointwise operations is very relevant, and the matching of CDFs, but I think a lot of the mathematical detail could be relegated to an appendix, especially the detailed derivation of the particular CDFs.\", \"minor_comments\": \"It seems worth highlighting that in practice, for the common case of a Gaussian, the proposed method for linear interpolation is just a very simple procedure that might be called \\\"projected linear interpolation\\\", where the generated vector is multiplied by a constant. 
All the optimal transport theory is nice, but it's helpful to know that this is simple to apply in practice.\\n\\nMight I suggest a very simple approach to fixing the distribution mismatch issue? Train with a spherical uniform prior. When interpolating, project the linear interpolation back to the sphere. This matches distribution, and has the attractive property that the entire geodesic between two points lies in a region with typical probability density. This would also work for vicinity sampling.\\n\\nIn section 1, overfitting concerns seem like a strange way to motivate the desire for smoothness. Overfitting is relatively easy to compensate for, and investigating the latent space is interesting regardless.\\n\\nWhen discussing sampling from VAEs as opposed to GANs, it would be good to mention that one has to sample from p(x | z) not just p(z).\\n\\nLots of math typos such as t - 1 should be 1 - t in (2), \\\"V times a times r\\\" instead of \\\"Var\\\" in (3) and \\\"s times i times n\\\" instead of \\\"sin\\\", etc, sqrt(1) * 2 instead of sqrt(12), inconsistent bolding of vectors. Also strange use of blackboard bold Z to mean a vector of random variables instead of the integers.\\n\\nCould cite an existing source for the fact that most mass for a Gaussian is concentrated on a thin shell (section 2.2), e.g. David MacKay Information Theory, Inference and Learning Algorithms.\\n\\nAt the end of section 2.4, a plot of the final 1D-to-1D optimal transport function (for a few different values of t) for the uniform case would be incredibly helpful.\\n\\nSection 3 should be a subsection of section 2.\\n\\nFor both SLERP and the proposed method, there's quite a sudden change around the midpoint of the interpolation in Figure 2. It would be interesting to plot more points around the midpoint to see the transition in more detail. 
(A small inkling that samples from the proposed approach might change fastest qualitatively near the midpoint of the interpolation can perhaps be seen in Figure 1, since the angle is changing fastest there??)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper exposes a simple recipe to manipulate the latent space of generative models in such a way to minimize the mismatch between the prior distribution and that of the manipulated latent space. Manipulations such as linear interpolation are commonplace in the literature, and this work will be helpful to improve assessment on that front.\\n\\nReviewers found this paper interesting, yet unpolished and incomplete. In subsequent iterations, the paper has significantly improved on those fronts, however the AC believes an extra iteration will make this work even more solid. Thus, unfortunately this paper cannot be accepted at this time.\"}",
"{\"title\": \"An interesting paper that seems to be written in a rush\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"This paper is concerned with the mismatch between the input distribution used for training and interpolated input. It extends the discussion on this phenomenon and the correction method proposed by White (2016), and proposes an optimal transport-based approach, which essentially makes use of the trick of change of variables. The discussion of the phenomenon is interesting, and the proposed method seems well motivated and useful. There are a number of errors or inconsistencies in the paper, and the experimental results, compared to those given by SLERP, seem rather weak. My big concern about the paper is that it seems to be written in a rush and needs a lot of improvement before being published. Below please see more detailed comments.\", \"In Introduction, the authors claim that \\\"This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution.\\\" Here the learned generative network might still apply even if the input distribution changes (e.g., see the covariate shift setting); should one claim that the support of the test input distribution may not be contained in the support of the input distribution for training? Is there any previous result supporting this?\", \"Moreover, I am wondering whether Sections 2.2 and 2.3 can be simplified or improved--the underlying idea seems intuitive, but some of the statements seem somewhat confusing. For instance, what does equation (6) mean?\", \"Note that a parenthesis is missing in line 3 below (4). In (6), the dot should follow the equation.\", \"Line 1 of page 7: here it would be nice to make it clear what p_{y|x} means. How did you obtain values of f(x) from this conditional distribution?\", \"Theorem 2: here does one assume that F_Y is invertible? 
(Maybe this is not necessary according to the definition of F_Y^{[-1]}...)\", \"Line 4 above Section 4.2: the sentence is not complete.\", \"Section 4.2: It seems that Figure 3 appears in the main text earlier than Figure 2. Please pay attention to the organization.\", \"Line 3, page 10: \\\"slightly different, however...\\\"\", \"Line 3 below Figure 2: I failed to see \\\"a slight loss in detain for the SLERP version.\\\" Perhaps the authors could elaborate on it?\", \"The paragraph above Figure 3 is not complete.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to review\", \"comment\": \"Thanks for the feedback! \\n\\nWhile sticking to the main story we have significantly polished the paper. \\nWe have improved discussion of the experiments, acknowledging that there is not a really noticeable difference between the SLERP heuristic and our matched interpolation in practice. This is perhaps not so surprising since SLERP does tend to match the biggest distribution difference (the norm mismatch) quite OK in practice (see Fig. 2 revised paper). \\nNonetheless, our proposed framework has many benefits which we have also better highlighted in the paper:\\n\\t- it gives a new and well grounded perspective on how to do operations in the latent space of distributions\\n\\t- it is straightforward to implement, especially for a Gaussian prior (see Tab. 1 revised paper).\\n - it generalizes to almost any operation you can think of, not just interpolation (see e.g. random walk in Fig. 11 (revised paper)).\", \"regarding_specific_comments\": [\"while the trained model -might- apply also for a different distribution, for the linear interpolation we typically see a clear difference. Note we do not claim that the supports of the distributions do not overlap - we only claim this for the distribution of the norms.\", \"we significantly simplified the explanation and motivation of Sec 2.1-2.2 (old version), removing the synthetic example (including eq (6)) and better focus on the (more relevant in practice) norm distribution difference - with detailed calculations moved to appendix. The subsections are merged into the intro of Sec 2 in the revised edition.\", \"These changes were also in line with suggestions from AnonReviewer3 on simplifying the paper.\", \"p_{y|x} has been clarified in the text, it was referring to f(x) being a random variable where f(x) is drawn from the conditional distribution over y given a fixed x. 
If this is unclear/confusing in our notation, we can also instead just cite the fact that KP is a relaxation of MP.\", \"Theorem 2: while the derivations would be easier if F_Y were invertible, it is not needed. F_Y is always monotonic, and F_Y^{[-1]} denotes the pseudo-inverse (hence the bracket [-1] ). See https://en.wikipedia.org/wiki/Cumulative_distribution_function#Inverse_distribution_function_(quantile_function) and (Santambrogio, 2015) for more details.\", \"other typos/mistakes: should be fixed in revised version\"]}",
"{\"title\": \"Additional quantitative comparison\", \"comment\": \"Dear Reviewers, thanks again for your feedback and happy new year!\\n\\nSince two reviewers felt the experiments could be stronger, we have added a new revision with additional quantitative experiments (Section 3.3. and Table 2.) which compare the interpolation operations using Inception scores. These results mirror what was qualitatively observed in Section 3.2 -- namely that when compared with the original models, the linear interpolation gives a significant quality degradation (up to 29% lower Inception scores), while our matched operations do not degrade the quality (less than 1% observed difference in scores).\\n\\nRegarding individual comments we refer to the individual responses previously posted. All other Figures, Tables and Sections referenced there have the same numbers as in the previous revised edition, so you only need to look at the latest revision.\"}",
"{\"title\": \"Response to review\", \"comment\": \"Thank you for your review. \\nWe discuss your raised concerns and hope you reconsider the rating.\\n\\n\\\"It is not well-motivated why it is 'the right thing to do' to match the training prior, given that the training prior is potentially not at all representative or relevant...\\\" \\\"... [using the prior] seems like a matter of convenience ...\\\"\\n\\nWhile true that the specific prior chosen is a matter of convenience, after it has been chosen it is *the prior that the model is trained for*. This is the standard practice when training GANs, so our point is that after you train your model you need to respect the prior you chose. So then you might say that the \\\"wrong\\\" prior was chosen, but it is well known that any distribution (in principle) can be sampled from via a mapping G applied to samples of a fixed (e.g. uniform) distribution z. See Multivariate Inverse Transform Sampling (e.g. slide 24 in https://www.slac.stanford.edu/slac/sass/talks/MonteCarloSASS.pdf ).\\n\\n\\n\\\"...what is the real-world problem metric or evaluation criteria and how does this proposal directly help?\\\"\\n\\nThe goal of this work is to improve upon how generative models such as GANs are visualized and explored when working with operations on samples. To see why this is relevant, in Section 1.1 (revised edition) we mention eight papers (out of many more) in the recent literature which use such operations to explore their models. A 'real-world' use case hinges on real-world use cases of generative models, but just to give an example you could imagine an application that allows a user to 'navigate' the latent space of a generative model to synthesize a new example (say logo/face/animated character) for use in some real world application. 
Such exploration of the model needs to allow for various operations to adjust the synthesized samples.\\n\\n\\nRegarding the 'usual thin sphere story' we note that the radius difference is quite significant, see Figure 2 (revised edition) which shows the radius distribution for the latent spaces typically used in the literature. Our approach completely sidesteps the issue.\\n\\n\\nFor the experiments, we have added more examples of latent space operations and a discussion on the differences. A key property of our proposed approach is that it is 'safe': if you repeatedly look at some output of any operation (say e.g. midpoint of the matched interpolation), it will have exactly the same distribution as random samples from the model. Hence no matter what kind of image quality assessment you would use, it would be (statistically) the same as for samples from the model without any operations.\"}"
]
} |
rkmoiMbCb | Tandem Blocks in Deep Convolutional Neural Networks | [
"Chris Hettinger",
"Tanner Christensen",
"Jeff Humpherys",
"Tyler J Jarvis"
] | Due to the success of residual networks (resnets) and related architectures, shortcut connections have quickly become standard tools for building convolutional neural networks. The explanations in the literature for the apparent effectiveness of shortcuts are varied and often contradictory. We hypothesize that shortcuts work primarily because they act as linear counterparts to nonlinear layers. We test this hypothesis by using several variations on the standard residual block, with different types of linear connections, to build small (100k--1.2M parameter) image classification networks. Our experiments show that other kinds of linear connections can be even more effective than the identity shortcuts. Our results also suggest that the best type of linear connection for a given application may depend on both network width and depth. | [
"resnet",
"residual",
"shortcut",
"convolutional",
"linear",
"skip",
"highway"
] | Reject | https://openreview.net/pdf?id=rkmoiMbCb | https://openreview.net/forum?id=rkmoiMbCb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HkL83kfVz",
"Sy_eGOTXf",
"Hk1CjNZyM",
"r1SGBDUVG",
"ry1gCAFlM",
"rJyDBCYgG",
"BkYT2CzkM",
"ByMyU16Sz",
"BJKUbSHJf",
"B1m4fdpmz",
"HJ3dHdW4z",
"Byzn-uamG",
"HkLjGd6mM",
"BkWQMuIBz",
"BJVy_t14M",
"ryUzS184G"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment"
],
"note_created": [
1515482189866,
1515188719891,
1510194119336,
1515775259148,
1511808487006,
1511806295377,
1510300865457,
1517250009536,
1510457681539,
1515188778578,
1515451764227,
1515188649886,
1515188893638,
1516827161149,
1515325404310,
1515742478417
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper947/Authors"
],
[
"ICLR.cc/2018/Conference/Paper947/Authors"
],
[
"ICLR.cc/2018/Conference/Paper947/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper947/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper947/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper947/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper947/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper947/Authors"
],
[
"ICLR.cc/2018/Conference/Paper947/Authors"
],
[
"ICLR.cc/2018/Conference/Paper947/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper947/Authors"
],
[
"ICLR.cc/2018/Conference/Paper947/Authors"
],
[
"ICLR.cc/2018/Conference/Paper947/Authors"
],
[
"~Oshrat_Bar1"
],
[
"ICLR.cc/2018/Conference/Paper947/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Reproducibility Challenge - Answers (Part 1)\", \"comment\": \"We'd be more than happy to help you recreate our experiments. Hopefully the following will answer most of your questions. We'll also provide a follow-up later this week with code samples and tables of hyperparameter values.\\n\\n1. We do mean stride in the usual (spatial) sense. Much like pooling, it reduces the height and width of our image channels. When using identity connections or 1x1 convolutions, using stride 2 simply amounts to taking the sub-image of pixels with even coordinates.\\n\\nAs you observe, this needs to happen on both the linear and nonlinear sides of a block so that they can be added together. When there are two nonlinear layers in a block, we only change the stride on the first one.\\n\\n2. The specific initialization method we used was the 'variance scaling' method from the keras package, which uses a standard deviation of sqrt(scale/n) where n is the number of inputs to the layer (which is just the width of the previous layer). We determined the scale parameter experimentally, so we'll have to put together a list of the ones we ended up using for each experiment.\\n\\nExact values don't seem to be very important in this case. They just need to be large enough to get the network learning, but not so large that it becomes unstable. All of our values were between 0.3 and 1.2.\\n\\n3. We used a batch size of 125 for all experiments.\\n\\nAll of the data sets we used come with designated test sets that we used as such.\\n\\nFor our hyperparameter grid searches, we used 20% of the given training data as validation data and the remaining 80% as training data. We made sure that classes were equally represented in both the training and validation sets.\\n\\n4. Our weight decay and dropout values were a little different in each experiment, as dictated by our grid search. We'll make tables of these for you and get them to you soon. 
However, you may also want to perform your own grid searches for these values. We tried weight decay values from 0.0000 to 0.0004 and dropout rates from 0.0 to 0.3 for each experiment. \\n\\n5. For data augmentation, we only used shifts (both vertical and horizontal) and flips (only horizontal). The shifts were limited to 10% of image height/width, so 3.2 pixels.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"First, we'd like to clarify what we see as the central thesis of our paper. We aren't replacing identity shortcuts so much as generalizing them. Tandem blocks include standard (identity) residual blocks as a special case. An identity shortcut is just a 1x1 convolution without non-linearity whose weight matrix is fixed as an identity matrix. The intent of our paper is to show that the latter property is unnecessary and limiting. The weight matrix of the linear shortcut doesn't need to be fixed (it can be learnable) and it doesn't need to be an identity matrix (either at initialization or after training). The linear convolution doesn't even need to be 1x1, which is particularly surprising. Because a number of notable papers contain assertions to the contrary (that identity connections are necessary and/or optimal), we believe that our contribution is both new and important. However, we failed to clearly express this and have revised the paper accordingly. We are grateful to the reviewer for pointing out the weaknesses of the submitted draft.\\n\\nThe linked paper (\\u201cIdentity Mappings in Deep Residual Networks\\u201d by He et al.) does explore the idea of using learnable linear 1x1 convolutions instead of identity mappings, as does the original ResNet paper. Both conclude that identity connections are superior on the grounds that they work better in extremely deep networks because they don't scale gradients. We did not intend to claim to be the first to use linear 1x1s in this way. Instead, our primary aim was to challenge the conclusion that identity connections are superior. We have now clarified this in the revised paper.\\n\\nMuch of the initial explanation for why identity shortcut connections were important had to do with building extremely deep networks. 
However, Zagoruyko and Komodakis showed that wider, shallower networks are superior even with traditional resblocks (https://arxiv.org/pdf/1605.07146.pdf). So it's important to ask what types of shortcut connections work best in these cases.\\n\\nIn reading this review, it was clear that we needed to explain more thoroughly our experimental procedures, including our use of train/val/test splits and hyperparameter grid search. As is traditional, we do not use the test set for hyperparameter selection, but rather a separate validation set. The test set is only used for final evaluation. We hope this is now clear in the paper.\\n\\nUnfortunately, we don't have a good explanation for the effects of batch normalization in our experiments. We expected it to help, but this simply wasn't what we observed. This question certainly merits further investigation.\\n\\nWe should clarify that our results are competitive with those achieved in other ResNet papers. We mention this primarily to establish that we correctly recreated their architectures for our experiments, making the comparisons fair. Our networks may not beat more complex architectures (such as Inception) on a per-parameter basis, but that isn't the goal. We're only investigating the question of shortcut connections, so we tried not to introduce any extra variables.\\n\\nThe differences between architectures in some experiments were indeed too small to indicate that one architecture was better than another, and we don't want to imply otherwise. Our goal is to show that non-identity connections were better than identities in some experiments and comparable in others. Both cases contradict the near-universal assertions that identity connections are somehow special or optimal. It is important to make clear that we didn\\u2019t just switch identity connections to linear connections, rather we also reduced the number of neurons per layer so that the total number of parameters did not increase in our comparisons. 
In other words, we narrowed the layers to make the contests fair.\\n\\nWe would love to provide results on larger datasets, however, our computational resources are an issue. Testing extremely deep networks would also be interesting, but we would expect to observe the same thing as everyone else\\u2014that extremely deep networks take much longer to train and offer at best marginally better performance.\\n\\nWe have referenced and discussed all of the figures explicitly in the revised text.\", \"important\": \"At the reviewer's suggestion, we confirmed using the singular value decomposition that linear connections with standard initializations (zero mean and small variance) did not learn identity maps and that linear connections initialized to the identity did not stay there. In other words, these maps are truly non-identity in nature. This was an excellent suggestion from the reviewer and has (in our opinion) substantially strengthened our argument and the paper.\\n\\nWe noted that dropout is a kind of regularization, this and the typos are fixed.\"}",
"{\"title\": \"Well-written, easily digestible, somewhat marginal paper\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper investigates the effect of replacing identity skip connections with trainable convolutional skip connections in ResNet. The authors find that in their experiments, performance improves. Therefore, the power of skip connections is due to their linearity rather than due to the fact that they represent the identity.\\n\\nOverall, the paper has a clear and simple message and is very readable. The paper contains a good amount of experiments, but in my opinion not quite enough to conclude that identity skip connections are inherently worse. The question is then: how non-trivial is it that tandem networks work? For someone who understands and has worked with ResNet and similar architectures, this is not a surprise. Therefore, the paper is somewhat marginal but, I think, still worth accepting.\\n\\nWhy did you choose a single learning rate for all architectures and datasets instead of choosing the optimal one for each architecture and dataset? Was it a question of computational resources? Using custom step sizes would strengthen your experimental results significantly. In the absence of this, I would still ask that you create an appendix where you specify exactly how hyperparameters were chosen.\", \"other_comments\": [\"\\\"and that it\\u2019s easier for a layer to learn from a starting point of keeping things the same (the identity map) than from the zero map\\\" I don't understand this comment. Networks without skip connections are not initialized to the zero map but have nonzero, usually Gaussian, weights.\", \"in section 2, reason (ii), you seem to imply that it is a good thing if a network behaves as an ensemble of shallower networks. In general, this is a bad thing. Therefore, the fact that ResNet with tandem networks is an ensemble of shallower networks is a reason for why it might perform badly, not well. 
I would suggest removing reason (ii).\", \"in section 3, reason (iii), you state that removing nonlinearities from the skip path can improve performance. However, using tandem blocks instead of identity skip connections does not change the number of nonlinearity layers. Therefore, I do not see how reason (iii) applies to tandem networks.\", \"\\\"The best blocks in each challenge were competitive with the best published results for their numbers of parameters; see Table 2 for the breakdown.\\\" What are the best published results? I do not see them in table 2.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"final thought\", \"comment\": \"The main criticisms from other reviewers seem to be that the networks tested were too small. In their experiments, the authors conducted 5 runs of 65 settings each (table 2). To me this is enough. I think the authors are only trying to say \\\"It CAN be beneficial to shift parameters to the skip path\\\" not \\\"it IS beneficial to shift parameters to the skip path\\\". Of course, for any given dataset / architecture, keeping parameters on the residual path might be better. So for a 100M-param network / ImageNet, the conclusion of the paper may not apply. I think that's ok though. One can rarely give universal guarantees in the space of deep learning anyways.\"}"
"{\"title\": \"Weak contribution.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper is well written, has a good structure and is easy to follow. The paper investigates the importance of having the identity skip connections in residual blocks. The authors hypothesize that changing the identity mapping into a linear function would be beneficial. The main contribution of the paper is the Tandem Block, which is composed of two paths, linear and nonlinear; the outputs of the two paths are summed at the end of the block. Similarly, as for residual blocks in ResNets, one can stack together multiple Tandem Blocks. However, this contribution seems to be rather limited. He et al. (2016) introduces a Tandem Block-like structure, very similar to B_(1x1)(2,w), see Fig. 2(e) in He et al. (2016). Moreover, He et al. (2016) shows in Tab 1 that for a ResNet 101 this tandem-like structure performs significantly worse than identity skip connections. This should be properly mentioned, discussed and reflected in the contributions of the paper.\", \"result_section\": \"My main concern is that it seems that the comparison of different Tandem Blocks designs has been performed on the test set (e.g. Table 2 displays the highest test accuracies). Figs 3, 4, 5 and 6 together with Tab. 2 monitor the test set. The architectural search together with hyperparameter selection should be performed on a validation set.\", \"other_issues\": [\"Section 1: \\u201c\\u2026 ResNets have overcome the challenging technical obstacles of vanishing/exploding gradients\\u2026 \\u201c. It is clear how ResNet addresses the issue of vanishing gradients, however, I\\u2019m not sure if ResNet can also address the problem of exploding gradients. Can the authors provide a reference for this statement?\", \"Experiments: The authors show that on small size networks the Tandem Block outperforms Residual Blocks, since He et al. 
(2016) in Tab 1 showed a contrary effect, does it mean that the observations do not scale to higher capacity networks? Could the authors comment on that?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well structured analysis paper on shortcut connections but contributions/results are not compelling\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper performs an analysis of shortcut connections in ResNet-like architectures. The authors hypothesize that the success of shortcut connections comes from the combination of linear and non-linear features at each layer and propose to substitute the identity shortcuts with a convolutional one (without non-linearity). This alternative is referred to as tandem block. Experiments are performed on a variety of image classification tasks such as CIFAR-10, CIFAR-100, SVHN and Fashion MNIST.\\n\\nThe paper is well structured and easy to follow. The main contribution of the paper is the comparison between identity skip connections and skip connections with one convolutional layer.\\n\\nMy main concerns are related to the contribution of the paper and experimental pipeline followed to perform the comparison. First, the idea of having convolutional shortcuts was already explored in the ResNet paper (see https://arxiv.org/pdf/1603.05027.pdf). Second, given Figures 3-4-5-6, it would seem that the authors are monitoring the performance on the test set during training. Moreover, results on Table 2 are reported as the ones with \\u201cthe highest test accuracy achieved with each tandem block\\u201d. Could the authors give more details on how the hyperparameters of the architectures/optimization were chosen and provide more information on how the best results were achieved?\\n\\nIn section 3.5, the authors mention that batchnorm was not useful in their experiments, and was more sensitive to the learning rate value. Do the authors have any explanation/intuition for this behavior?\\n\\nIn section 4, authors claim that their results are competitive with the best published results for a similar number of parameters. 
It would be beneficial to add the mentioned best performing models in Table 2 to back this statement. Moreover, it seems that in some cases such as SVHN the differences between all the proposed blocks are too minor to draw any strong conclusions. Could those differences be due to, for example, luck in picking the initialization seed? How many times was each experiment run? If more than once, what was the std?\\n\\nThe experiments were performed on relatively shallow networks (8 to 26 layers). I wonder how the conclusions drawn scale to much deeper networks (of 100 layers for example) and on larger datasets such as ImageNet.\\n\\nFigures 3-5 are not referenced nor discussed in the text.\\n\\nFollowing the design of the tandem blocks proposed in the paper, I wonder why the tandem block B3x3(2,w) was not included.\\n\\nFinally, it might be interesting to initialize the convolutions in the shortcut connections with the identity, and check what they have learnt at the end of the training.\", \"some_typos_that_the_authors_might_want_to_fix\": [\"backpropegation -> backpropagation (Introduction, paragraph 3)\", \"dropout is a kind of regularization as well (Introduction, second to last paragraph)\", \"nad -> and (Sect 3.1. paragraph 1)\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Can you see my review?\", \"comment\": \"Dear authors,\\n\\nI posted my review recently. I am curious: Can you see the review? Because when I log out of my account, I can no longer see it. Hence, the review is (so far) not public. I am wondering whether at least you can see it.\\n\\nThanks,\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper presents a good analysis of the use of different linear maps instead of identity shortcuts for ResNet.\\nIt is interesting to the community, but the experimental justification is insufficient.\\n1) As the reviewer pointed out, this work shows \\\"that on small size networks Tandem Block outperforms Residual Blocks, since He at. al. (2016) in Tab 1 showed a contrary effect, does it mean that the observations do not scale to higher capacity networks?\\\"; the paper would be much stronger with experiments justifying this claim.\\n2) \\\"extremely deep networks take much longer to train\\\" is not a valid reason not to conduct such experiments.\"}",
"{\"title\": \"Your review didn't post, but did come as an email.\", \"comment\": \"Dear Reviewer,\\n\\nWe can't see your review on OpenReview, but we did receive it via email. We appreciate your analysis and look forward to answering your questions and making the appropriate revisions during the discussion period. Hopefully the review will post soon.\\n\\nThanks,\\nThe Authors\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We appreciate the thoughtfulness that went into this review. We feel that we have substantially improved the paper as a result of this review and the other two.\\n\\nFirst, it is important to note that we aren't replacing identity shortcuts so much as generalizing them. Tandem blocks include standard (identity) residual blocks as a special case. An identity shortcut is just a 1x1 convolution without non-linearity whose weight matrix is fixed as an identity matrix. The intent of our paper is to show that the latter property is unnecessary and limiting. The weight matrix of the linear shortcut doesn't need to be fixed (it can be learnable) and it doesn't need to be an identity matrix (either at initialization or after training). The linear convolution doesn't even need to be 1x1, which is particularly surprising. Because a number of notable papers contain assertions to the contrary (that identity connections are necessary and/or optimal), we believe that our contribution is both new and important. However, we failed to clearly express this and have revised the paper accordingly.\\n\\nSecond, it is important to make clear that we didn\\u2019t just switch identity connections to linear connections, rather we also reduced the number of neurons per layer in the linear case so that the total number of parameters did not increase in our comparisons. In other words, we narrowed the layers to make the contests fair. This wasn\\u2019t as clear as it could have been in the initial version of the paper. We hope it is clearer now.\\n\\nTo address the reviewer\\u2019s question about learning rates, we did do a grid search across a number of learning rate schedules, testing them separately for each architecture, and (surprisingly) the same rate schedule turned out to be optimal for every architecture. In Section 3 we clarified our approach. 
We also clarified how we performed the searches for dropout and weight decay parameters, which convinced us to use different values for different architectures; see Section 3 for details.\", \"response_to_other_comments\": \"The comment about learning from the zero map has been clarified to indicate that we initialized weights with small Gaussian values, as is standard practice.\\n\\nWe removed the paragraph about tandem networks acting as ensembles of shallower networks, per the reviewer's suggestion. We removed the paragraph about removing nonlinearities for the same reason and Section 2 is clearer as a result.\\n\\nWe should clarify that our results are competitive with those achieved in other ResNet papers. We mention this primarily to establish that we correctly recreated their architectures for our experiments, making the comparisons fair. Our networks may not beat more complex architectures (such as Inception) on a per-parameter basis, but that isn't the goal. We're only investigating the question of shortcut connections, so we tried not to introduce any extra variables.\"}",
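The authors' point that an identity shortcut is just a linear 1x1 convolution with its weight matrix frozen to the identity is easy to see concretely. A minimal pure-Python sketch (illustrative only; the function name is ours, not from the paper's code): at each spatial position, a linear 1x1 convolution is simply a channel-mixing matrix applied to the feature vector there, with no non-linearity, and the identity shortcut is the special case where that matrix is the identity.

```python
def conv1x1(weight, feature_map):
    # A linear 1x1 convolution acts independently at each spatial position:
    # it is a (C_out x C_in) channel-mixing matrix applied to the feature
    # vector at that position, with no non-linearity applied afterwards.
    return [[sum(w * f for w, f in zip(row, feat)) for row in weight]
            for feat in feature_map]

# The identity shortcut is the special case where the weight matrix is
# frozen to the identity; a tandem block instead lets these entries be learned.
identity = [[1.0, 0.0], [0.0, 1.0]]
```

With `weight = identity`, the output equals the input feature map exactly, which is why tandem blocks include standard residual blocks as a special case.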
"{\"title\": \"I like the revision\", \"comment\": \"I like the authors thoughtful response to my points and those raised by other reviewers. Also I was not aware that the major ResNet papers took a position in favor of identity connections. I am more convinced than before that this paper is an accept.\", \"one_note\": \"Regarding the statement that ResNet combats exploding gradients, which one of the other reviewers objected to, this has been demonstrated in \\\"Gradients explode - deep networks are shallow - ResNet explained\\\" as submitted to this conference: https://openreview.net/forum?id=HkpYwMZRb (I hope you'll allow me to be so bold as to shill my own paper :)\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We appreciate the thoughtfulness that went into this review. We feel that we have substantially improved the paper as a result of this review and the other two.\\n\\nFollowing the reviewers comments, we have clarified that we aren't contrasting residual blocks with tandem blocks. It is more accurate to say that tandem blocks generalize residual blocks, including identity connections as a special case.\\n\\nThe paper \\u201cIdentity Mappings in Deep Residual Networks\\u201d by He et al does explore the idea of using learnable linear 1x1 convolutions instead of identity mappings, as does the original ResNet paper. Both conclude that identity connections are superior on the grounds that they work better in extremely deep networks because they don't scale gradients. We did not intend to claim to be the first to use linear 1x1s in this way. Instead, our primary aim was to challenge the conclusion that identity connections are superior. We have now clarified this and discussed the relevant papers in the revised paper.\\n\\nMuch of the initial explanation for why identity shortcut connections were important had to do with building extremely deep networks. However, Zagoruyko and Komodakis showed that wider, shallower networks are superior even with traditional resblocks (https://arxiv.org/pdf/1605.07146.pdf). So it's important to ask what types of shortcut connections work best in these cases. Our experiments show that learnable linear connections are as good as or better than identity connections in networks of practical size.\\n\\nIn reading this review, it was clear that we needed to explain more thoroughly our experimental procedures, including our use of train/val/test splits and hyperparameter grid search. As is traditional, we do not use the test set for hyperparameter selection, but rather a separate validation set. The test set is only used for final evaluation. 
We hope this is now clear in the paper.\\n\\nWe have fixed an incorrect statement to reflect the fact that identity connections don't prevent exploding gradients. We thank the reviewer for calling that to our attention.\\n\\nIt is important to differentiate between network capacity and network depth. Zagoruyko and Komodakis used networks of tremendous capacity (but not particularly great depth) and outperformed the original ResNets which were much deeper. We would love to provide results for much larger networks (in terms of parameter count) and also on larger datasets. However, our computational resources are an issue. Testing extremely deep networks would also be interesting, but we would expect to observe the same thing as everyone else\\u2014that extremely deep networks take much longer to train and offer at best marginally better performance.\"}",
"{\"title\": \"Summary of Changes\", \"comment\": [\"In response to the reviewers' comments, we have made a number of improvements to the paper. Most importantly, we:\", \"Clarified that the primary purpose of the paper was not to introduce novel architectures, but to challenge the conclusion that identity shortcuts are superior to other linear shortcuts. We show experimentally that this is not the case for any of the networks we trained.\", \"Added a small section (based on a reviewer's suggestion) showing that the linear connections in our networks did not learn to imitate identity connections, even if they were initialized with identity weight matrices. This supports the conclusion that learnable weights add real value to linear connections.\", \"Explained that we used validation data to determine hyperparameters and test data only for our final comparisons between architectures, following the standard practice. We also clarified that we were comparing average performance across series of five runs for each experiment. Both points were unclear in our original submission.\", \"Removed some introductory comments that were confusing or distracted from our main points.\", \"Stressed that differences in performance were not due to some architectures having unfair advantages due to greater numbers of parameters. We were careful to keep parameter counts as close as possible by adjusting layer widths separately for each architecture.\", \"Discussed the relevant literature more thoroughly in the first two sections of the paper.\", \"We also made a number of minor corrections and clarifications.\"]}",
"{\"title\": \"Experimental Scale\", \"comment\": \"We appreciate that all of our reviewers responded to our updated paper and we are pleased to see that we managed to address nearly all of their questions and concerns. The only remaining criticisms regard the size of the networks in our experiments. While we were careful to recreate the meta-architecture of Zagoruyko and Komodakis to ensure the most direct and relevant comparisons possible, we understand the desire to see the same experiments done on a larger scale. We did as much as we could in this regard with the financial and computational resources available to us. We believe that our results and analysis make a compelling case for questioning the conventional wisdom in this area and motivate further (and larger) experiments.\"}",
"{\"title\": \"ICLR 2018 Reproducibility Challenge - Questions\", \"comment\": \"Hi,\\nAs a final project in deep learning seminar in Tel Aviv University, I am reproducing the experiments described in your paper.\\nI\\u2019ll much appreciate it if you can answer a few questions regarding the implementation details.\\n\\n1. Tandem blocks with stride 2\\nI assume stride 2 in the spatial dimensions.\\nAs I understand, on those blocks, the linear layer is either 1x1 convolution or 3x3 convolution (not identity) as the input and output differs in the third dimension.\\nIs there a stride 2 in the linear part as well?\\nIn blocks with l=2, there are 2 nonlinear layers (3x3 convolution) and only 1 linear layer (1x1 convolution). Assuming stride 2 in all the 3 convolutions, linear output and nonlinear output have different dimensions and can\\u2019t be summed together. How do you cope with that?\\n\\n2. Initialization\\nIn the paper, you mentioned that the initialization was done as in He et al. (2015).\\nAs I understand, std in layer l is sqrt( 2/nl), while nl is the number of the kernel parameter.\\nYou also mentioned a \\u201cbase standard deviation\\u201d that varied considerably from network to network.\\nHow does the \\u201cbase standard deviation\\u201d affect the std computation?\\nWhat \\u201cbase standard deviation\\u201d did you use in each model?\\nWhat is the std used for the softmax output layer weights?\\nIs it correct that you initialized biases to 0?\\n\\n3. Training\\nWhat was the batch size that you used in each of the experiments?\\nYou mentioned a use of validation set. What portion of the training set was used for training in each of the experiments? \\n\\n4. Regularization\\nPlease specify the weight decay and dropout rates that you used for each of the architectures.\\n\\n5. Data Augmentation\\nPlease specify the details and amount of the augmentation that you used.\\n\\nThanks,\\nOshrat Bar\\nTel Aviv University\\[email protected]\"}",
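For question 2 above, the sqrt(2/n_l) rule from He et al. (2015) that the question references can be sketched as follows. This is an illustration of the published rule only; the "base standard deviation" the paper mentions would presumably be an additional scaling factor, and that detail (along with bias initialization) is not specified here.

```python
import math
import random

def he_init(fan_in, n_weights, seed=0):
    # He et al. (2015) initialization: zero-mean Gaussian weights with
    # std = sqrt(2 / fan_in), where fan_in is the number of input
    # connections (kernel parameters) feeding each unit in the layer.
    rng = random.Random(seed)
    std = math.sqrt(2.0 / fan_in)
    return [rng.gauss(0.0, std) for _ in range(n_weights)]
```

For example, a layer with fan_in = 8 gets std = 0.5; the empirical mean and variance of a large sample should match 0 and 0.25.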
"{\"title\": \"Reproducibility Challenge - Answers (Part 2)\", \"comment\": \"The code and hyperparameters we used are now available at https://github.com/tandemblock/iclr2018\\n\\nWe hope this will further clarify our methods and make it easy to reproduce our results.\"}"
]
} |
Hk5elxbRW | Smooth Loss Functions for Deep Top-k Classification | [
"Leonard Berrada",
"Andrew Zisserman",
"M. Pawan Kumar"
] | The top-$k$ error is a common measure of performance in machine learning and computer vision. In practice, top-$k$ classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top-$k$ classification can bring significant improvements.
Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-$k$ optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a na{\"i}ve algorithm would require $\mathcal{O}(\binom{n}{k})$ operations, where $n$ is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of $\mathcal{O}(k n)$. Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of $k=5$. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy. | [
"classification",
"loss",
"smooth loss functions",
"deep",
"performance",
"deep neural networks",
"loss function",
"family",
"algorithm"
] | Accept (Poster) | https://openreview.net/pdf?id=Hk5elxbRW | https://openreview.net/forum?id=Hk5elxbRW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HykoG7Oef",
"SyFTqXImz",
"H15E5Q8mz",
"BJbqjU0eM",
"HJfBTmUQz",
"ryOmoYZZM",
"S1Ori7U7z",
"rklMmkpSf"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"decision"
],
"note_created": [
1511694999279,
1514711745238,
1514711601783,
1512102793433,
1514712377641,
1512311583976,
1514711872517,
1517249287603
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper547/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper547/Authors"
],
[
"ICLR.cc/2018/Conference/Paper547/Authors"
],
[
"ICLR.cc/2018/Conference/Paper547/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper547/Authors"
],
[
"ICLR.cc/2018/Conference/Paper547/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper547/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Promising extension of SVM's top-k loss to deep models\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper is clear and well written. The proposed approach seems to be of interest and to produce interesting results. As datasets in various domain get more and more precise, the problem of class confusing with very similar classes both present or absent of the training dataset is an important problem, and this paper is a promising contribution to handle those issues better.\\n\\nThe paper proposes to use a top-k loss such as what has been explored with SVMs in the past, but with deep models. As the loss is not smooth and has sparse gradients, the paper suggests to use a smoothed version where maximums are replaced by log-sum-exps.\\n\\nI have two main concerns with the presentation.\\n\\nA/ In addition to the main contribution, the paper devotes a significant amount of space to explaining how to compute the smoothed loss. This can be done by evaluating elementary symmetric polynomials at well-chosen values.\\n\\nThe paper argues that classical methods for such evaluations (e.g., using the usual recurrence relation or more advanced methods that compensate for numerical errors) are not enough when using single precision floating point arithmetic. The paper also advances that GPU parallelization must be used to be able to efficiently train the network.\\n\\nThose claims are not substantiated, however, and the method proposed by the paper seems to add substantial complexity without really proving that it is useful.\\n\\nThe paper proposes a divide-and-conquer approach, where a small amount of parallelization can be achieved within the computation of a single elementary symmetric polynomial value. I am not sure why this is of interest - can't the loss evaluation already be parallelized trivially over examples in a training/testing minibatch? 
I believe the paper could justify this approach better by providing a bit more insight as to why it is required. For instance:\\n\\n- What accuracies and train/test times do you get using standard methods for the evaluation of elementary symmetric polynomials?\\n- How do those compare with CE and L_{5, 1} with the proposed method?\\n- Are numerical instabilities making this completely unfeasible? This would be especially interesting to understand if this explodes in practice, or if evaluations are just slightly inaccurate without much accuracy loss.\\n\\n\\nB/ No mention is made of the object detection problem, although multiple of the motivating examples in Figure 1 consider cases that would fall naturally into the object detection framework. Although top-k classification considers in principle an easier problem (no localization), a discussion, as well as a comparison of top-k classification vs., e.g., discarding localization information out of object detection methods, could be interesting.\", \"additional_comments\": [\"Figure 2b: this visualization is confusing. This is presented in the same figure and paragraph as the CIFAR results, but instead uses a single synthetic data point in dimension 5, and k=1. This is not convincing. An actual experiment using full dataset or minibatch gradients on CIFAR and the same k value would be more interesting.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Response to Reviewer 2: Approximate Evaluation of the Elementary Symmetric Polynomials\", \"comment\": \"We thank the reviewer for the feedback. Is the reviewer suggesting to select scores that are large enough to have a non-negligible impact on the value of the loss? If that is the case, this is indeed an interesting approach for an approximate algorithm if the exact computation happens to be too expensive in practice. In our case, we are able to perform exact evaluations of the elementary symmetric polynomials. We further point out that for such an approach, it may be more efficient to compute a chosen number of the largest scores rather than to perform a full sorting (time complexity in O(C) instead of O(C log C)).\"}",
"{\"title\": \"General Comment to Reviewers\", \"comment\": [\"We thank all the reviewers for their helpful comments. We have revised the paper, with the following main changes:\", \"Improved visualization in Figure 2, as suggested by Reviewer 1.\", \"Comparison with the Summation Algorithm in a new Appendix D, as suggested by Reviewer 1. We demonstrate the practical advantages of the divide-and-conquer algorithm for our use cases on GPU.\", \"Formal proof of Lemma 3 instead of a sketch of proof.\", \"Improved results on top-5 error on ImageNet: with a better choice of the temperature parameter, we have improved the results of our method. Our method now obtains on-par performance with CE when all the data is available, and still outperforms it on subsets of the dataset.\"]}",
"{\"title\": \"The paper is well written and the contribution is sound\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper made some efforts in smoothing the top-k losses proposed in Lapin et al. (2015). A family of smooth surrogate loss es was proposed, with the help of which the top-k error may be minimized directly. The properties of the smooth surrogate losses were studied and the computational algorithms for SVM with these losses function were also proposed.\", \"pros\": \"1, The paper is well presented and is easy to follow.\\n2, The contribution made in this paper is sound, and the mathematical analysis seems to be correct. \\n3, The experimental results look convincing.\", \"cons\": \"Some statements in this paper are not clear to me. For example, the authors mentioned sparse or non-sparse loss functions. This statement, in my view, could be misleading without further explanation (the non-sparse loss was mentioned in the abstract).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to Reviewer 1: Algorithms Discussion\", \"comment\": \"We thank the reviewer for the detailed comments. We answer each of the reviewer\\u2019s concerns:\\n\\n\\nA/\", \"the_reviewer_rightly_points_out_the_two_key_aspects_in_the_design_of_an_efficient_algorithm_in_our_case\": \"(i) numerical stability and (ii) speed. We have implemented the alternative Summation Algorithm (SA), and we have added a new section in the appendix to compare it to our method, on numerical stability and speed. On both aspects, experimental results demonstrate the advantages of the Divide and Conquer (DC) algorithm over SA in our use case.\", \"here_are_some_highlights_of_the_discussion\": \"(i) We emphasize the distinction between numerical accuracy and stability. To a large extent, high levels of accuracy are not needed for the training of neural network, as long as the directions of gradients are unaffected by the errors. Stability is crucial however, especially in our case where the evaluation of the elementary symmetric polynomials is prone to overflow. When the loss function overflows during training, the weights of the neural network diverge and any learning becomes impossible. \\nWe discuss the stability of our method in Appendix D.2. In summary, the summation algorithm starts to overflow for tau <= 0.1 in single precision and 0.01 in double precision. It is worth noting that compensation algorithms are unlikely to help avoid such overflows (they would only improve accuracy in the absence of overflow). Our algorithm, which operates in log-space, is stable for any reasonable value of tau (it starts to overflow in single-float precision for tau lower than 1e-36).\\n\\n(ii) The reviewer is correct that the computation of the loss can be trivially parallelized over the samples of a minibatch, and this is exploited in our implementation. However we can push the parallelization further within the DC algorithm for each sample of a minibatch. 
Indeed, inside each recursion of the Divide-and-Conquer (DC) algorithm, all polynomial multiplications are performed in parallel, and there are only O(log(C)) levels of recursion. On the other hand, most of the operations of the summation algorithm are essentially sequential (see Appendix D.1) and do not benefit from the available parallelization capabilities of GPUs. We illustrate this with numerical timing of the loss evaluation on GPU, with a batch size of 256, k=5 and a varying number of classes C:\\n\\n \\t C=100\\tC=1,000 C=10,000 C=100,000\\nSummation\\t0.006\\t0.062\\t 0.627\\t 6.258\\nDC\\t 0.011 \\t0.018\\t 0.024\\t 0.146\\n\\nThis shows that in practice, parallelization of DC offers near logarithmic rather than linear scaling of C, as long as the computations are not saturating the device capabilities. \\n\\nB/ We believe that the differences between top-k classification and detection make it difficult to perform a fair comparison between the two methods. In particular, detection methods require significantly more annotation (label and set of bounding boxes per instance to detect) than top-k classification (single image-level label). Furthermore, detection models are most often pre-trained on classification and then fine-tuned on detection, which entangles the influence of both learning tasks on the resulting model.\", \"additional_comments\": \"We thank the reviewer for this useful suggestion. We have changed Figure 2.b) to visualize the sparsity of the derivatives on real data.\"}",
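For readers comparing the two evaluation strategies discussed in this response, here is a minimal pure-Python sketch (not the authors' GPU implementation; function names are ours). The sequential "summation" recurrence updates e_0..e_k in place, while the divide-and-conquer scheme reads the elementary symmetric polynomials off the coefficients of the product prod_i (1 + x_i t), truncated to degree k; the multiplications on each level of that recursion are independent, which is what the GPU parallelization exploits.

```python
def esp_recurrence(x, k):
    # "Summation algorithm": sequential DP over e_0..e_k; O(k n) work,
    # but the outer loop over the scores x is inherently serial.
    e = [1.0] + [0.0] * k
    for xi in x:
        for j in range(k, 0, -1):
            e[j] += e[j - 1] * xi
    return e

def polymul_trunc(a, b, k):
    # Multiply two polynomials (coefficient lists), keeping degrees <= k.
    out = [0.0] * min(k + 1, len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < len(out):
                out[i + j] += ai * bj
    return out

def esp_divide_conquer(x, k):
    # Coefficients of prod_i (1 + x_i t), truncated to degree k, are exactly
    # e_0(x), ..., e_k(x). The recursion tree has O(log n) levels, and all
    # products on a given level are independent (hence GPU-parallelizable).
    if len(x) == 1:
        return [1.0, float(x[0])][: k + 1]
    mid = len(x) // 2
    return polymul_trunc(esp_divide_conquer(x[:mid], k),
                         esp_divide_conquer(x[mid:], k), k)
```

Both routines agree on small inputs; the stable log-space variant used in the paper for small temperatures is not reproduced here.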
"{\"title\": \"Good paper. Should be accepted\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper introduces a smooth surrogate loss function for the top-k SVM, for the purpose of plugging the SVM into deep neural networks. The idea is to replace the order statistics, which are not smooth and have many zero partial derivatives, with the exponential of averages, which is smooth and is a good approximation of the order statistics for a good selection of the \\\"temperature parameter\\\". The paper is well organized and clearly written. The idea deserves publication.\\n\\nOn the other hand, there might be better and more direct solutions to reduce the combinatorial complexity. When the temperature parameter is small enough, both the original top-k SVM surrogate loss (6) and the smooth loss (9) can be computed precisely by sorting the vector s first and taking care of the boundary around s_{[k]}.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
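The smoothing this review describes can be illustrated in isolation. A hedged sketch (ours, not the paper's full top-k loss, which applies the idea to maxima over k-element subsets of class scores and therefore needs the elementary-symmetric-polynomial machinery): replacing a hard max with a temperature-scaled log-sum-exp yields a smooth upper bound with dense gradients that tightens as the temperature goes to zero.

```python
import math

def smooth_max(scores, tau):
    # Temperature-smoothed maximum: tau * log sum_i exp(s_i / tau).
    # Smooth everywhere with non-sparse gradients; upper-bounds max(scores)
    # and converges to it as tau -> 0. Shifting by the max keeps the
    # exponentials from overflowing in floating point.
    m = max(scores)
    return m + tau * math.log(sum(math.exp((s - m) / tau) for s in scores))
```

Here tau plays the role of the "temperature parameter" the review mentions: large tau gives a very smooth (loose) surrogate, small tau recovers the hard maximum.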
"{\"title\": \"Response to Reviewer 3: Correction of Confusing Statement\", \"comment\": \"We thank the reviewer for the feedback. In the abstract we mean the sparsity of the derivatives. We have changed statements accordingly in the paper. We would be grateful if the reviewers could indicate further sources of confusion in the paper, which we will correct in subsequent versions.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The submission proposes a loss surrogate for top-k classification, as in the official imagenet evaluation. The approach is well motivated, and the paper is very well organized with thorough technical proofs in the appendix, and a well presented main text. The main results are: 1) a theoretically motivated surrogate, 2) that gives up to a couple percent improvement over cross-entropy loss in the presence of label noise or smaller datasets.\\n\\nIt is a bit disappointing that performance is limited in the ideal case and that it does not more gracefully degrade to epsilon better than cross entropy loss. Rather, it seems to give performance epsilon worse than cross-entropy loss in an ideal case with clean labels and lots of data. Nevertheless, it is a step in the right direction for optimizing the error measure to be used during evaluation. The reviewers uniformly recommended acceptance.\", \"decision\": \"Accept (Poster)\"}"
]
} |
ry9tUX_6- | Entropy-SGD optimizes the prior of a PAC-Bayes bound: Data-dependent PAC-Bayes priors via differential privacy | [
"Gintare Karolina Dziugaite",
"Daniel M. Roy"
] | We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we show that an ε-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy. Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance. | [
"generalization error",
"neural networks",
"statistical learning theory",
"PAC-Bayes theory"
] | Reject | https://openreview.net/pdf?id=ry9tUX_6- | https://openreview.net/forum?id=ry9tUX_6- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"Hyfu0ylHM",
"ryq2cm9xG",
"rJXPfp_GM",
"Skza1ggrG",
"Hy0bdarZG",
"r1dNqr9xf",
"SJMcMTOMz",
"H1id1T1Hf",
"Hk9z76OfG",
"SkflOAJrG",
"rJic-pJBf",
"HkWuERkHz",
"SyGJQhkHM",
"Bk1HygxSM",
"BkbCG6dzM",
"rkS7ZILEz",
"r1rP81pSz"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision"
],
"note_created": [
1516400234090,
1511828145939,
1513833050713,
1516400570442,
1512589317958,
1511836208162,
1513833097625,
1516388211450,
1513833234147,
1516394473840,
1516388754724,
1516393577010,
1516384985879,
1516400439394,
1513833160961,
1515770141501,
1517250141462
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper32/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Authors"
],
[
"ICLR.cc/2018/Conference/Paper32/Area_Chair"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Further response\", \"comment\": \"Our PAC-Bayes bound relies on a private prior, which has some privacy level, epsilon. The prior is determined by a vector of weights and so the bound depends on the privacy of that weight vector alone.\\n\\nSGLD produces the weight vector w_N by way of simulating a Markov chain w_1,w_2,...,w_N. The privacy of w_N is determined *entirely* by its distribution. It is irrelevant how the vector is produced. Its distribution is all that matters. This is basic differential privacy.\\n\\nThere is also no approximation up until this point. We then note that this distribution is known to converge weakly to the Gibbs distribution=exponential release. And so we approximate the privacy by that of the exponential release.\\n\\nSo yes, we get to apply the bound for \\\"algorithm (a)\\\". \\n\\nOur \\\"We want to address...\\\" was simply saying that scientific progress sometimes relies on approximations. We think it's a reasonable approximation when one is far away from pathological distributions. \\n\\n\\nRegarding clarity, we truly believe we can address these issues in another minor revision, but since the ACs will have to render their decision very soon, you would likely have to take it on faith that we could execute on this based on what we've explained above. The process of discussing it with you has provided a road map. \\n\\nWe'd like to point out a couple of issues:\\n\\n1. Some of the confusion stems from holdovers from Version 1 of the paper. Future readers will, fortunately, not have to suffer through Version 1. \\n\\n2. We were under the misconception that the privacy analysis of a Markov chain + post processing the final element was straightforward, but we now see that it is worth explaining in greater detail, even if the argument is a standard one in the privacy literature.\\n\\n\\nFinally, to put it in simple terms, if you don't change your score, our paper surely gets rejected. The other reviewers appear not to have engaged at all with our responses, for whatever reason. They had similar confusion about the privacy approximation, which we have now addressed and can bake into the paper by adding further clarity. We appreciate you spending the time doing this back and forth. We've never experienced anything like this in reviewing.\"}",
"{\"title\": \"review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper connects Entropy-SGD with PAC-Bayes learning. It shows that maximizing the local entropy during the execution of Entropy-SGD essentially minimizes a PAC-Bayes bound on the risk of the Gibbs posterior. Despite this connection, Entropy-SGD could lead to dependence between prior and data and thus violate the requirement of the PAC-Bayes theorem. The paper then proposes to use a differentially private prior to get a valid PAC-Bayes bound with SGLD. Experiments on MNIST show that such an algorithm does generalize better.\\n\\nLinking Entropy-SGD to PAC-Bayes learning and making use of differential privacy to improve generalization is quite interesting. However, I'm not sure if the ideas and techniques used to solve the problem are novel enough.\\nIt would be better if the presentation of the paper were improved. The result in Section 4 can be presented in a theorem, and any related analysis can be put into the proof. Section 5 about previous work on differentially private posterior sampling and stability could follow other preliminaries in Section 2. The figures are a bit hard to read. Adding sub-captions and re-scaling the y-axis might help.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to your comments\", \"comment\": \"Thank you for the comments and pinpointing several typos.\\n\\nWe have made an extensive rewrite to address the weaknesses you identified. We will respond to each of them, but in a different order.\\n\\n(e) and (c) Regarding clarity/presentation and typos.\\n\\nWe have rewritten and rearranged much of the paper to improve the logical structure. We have also addressed all the typos. \\n\\n- Entropy-SGD and Entropy-SGLD are now presented in the main body of the paper as a single combined algorithm, with the one difference highlighted.\\n- Our analysis of the idealized exponential mechanism (what you refer to as Gibbs sampling) is now presented as Theorem 5.5, and its relationship to Entropy-SGLD is clearly laid out in the same section. We also discuss our privacy approximation here in depth.\\n- Our result relating Entropy-SGD and PAC-Bayes bound optimization is now presented as Theorem 4.1. \\n- Our argument establishing the differentially-private PAC-Bayes bound is now structured as a proof.\\n\\n\\n(d) and (a). Regarding strong Gibbs sampling (i.e., the exponential mechanism and our \\\"privacy approximation\\\" regarding SGLD). \\n\\nWe have updated this part of the paper considerably, and the logical structure is much improved. The material is now entirely in the main body. We highlight some aspects of the argument here:\\n\\n- Note that we only use a SINGLE sample produced by SGLD (namely the last one). This last sample is what is used as the prior mean to produce the resulting Gibbs posterior classifier. When we plot the learning curves, the bounds are the bounds that would hold if we stopped SGLD at that iteration. \\n- The fact we only use one sample is the reason why we think it is reasonable to approximate the privacy of SGLD by that of its limiting invariant distribution (i.e., the exponential mechanism). Since we are far from the worst case with MNIST, we expect not to see much difference. There is likely a worst-case distribution where our bounds would end up being badly violated.\\n- Typical analyses of SGLD don't try to deal with the fact that it begins to mix. So they make a step by step analysis, where information is leaked at every stage. Because they do this, there is no reason not to release the whole trajectory. However, in an analysis that took advantage of mixing (very hard!), they would NOT release the whole trajectory (or at least, they certainly wouldn't release the early parts).\\n- In our experiments where we run SGLD for 1000's of epochs (!) on random noise, we see zero overfitting when we set the thermal noise to the settings suggested by theory.\\n\\n\\n(b) Regarding the goal of the experiments.\\n\\nWe have significantly revised the section describing our numerical experiments. We feel that the motivation for our experiments is much clearer now. Here are some particular points we wanted to highlight:\\n\\nPAC-Bayes bounds are data dependent and so it is an empirical question whether they are useful or not, and how they compare to previously established bounds. On top of this, we are using private data-dependent priors and a differentially private PAC-Bayes theorem and so it is an empirical question whether a sufficiently private optimization finds a decent prior. (Generalization bounds require a very high degree of privacy!) One way to think about the quantity tau/m (which determines the privacy along with our loss bound) is that, when tau/m < 1, it specifies what fraction of your data you \\\"throw away\\\" while doing your sampling in order to not learn \\\"too much\\\" about your data itself, rather than the distribution underlying it. We have to \\\"throw away\\\" quite a bit of data while privately optimizing our PAC-Bayes prior. And so it is an empirical question whether we can find anything useful still. Indeed, we do. We can also study our privacy approximation regarding SGLD empirically. If the privacy/stability of SGLD degraded over time, we might have seen overfitting occur on very long runs. In fact, we don't see this, even after 1000's of epochs! The private versions of the algorithms we tested reach some level of performance and stay there.\"}",
"{\"title\": \"Your issues have been addressed...\", \"comment\": \"We revised our paper considerably over a month ago. We have since had a long back and forth conversation with AnonReviewer3 discussing the privacy approximation, which seems to have addressed their misgivings.\\n\\nWe would much appreciate it if you could update your reviews and/or score.\"}",
"{\"title\": \"Reasonably good idea (but with lots of strong assumptions) connecting generalization of entropy SGD and PAC-Bayes risk bound.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Brief summary:\\n Assume any neural net model with weights w. Assume a prior P on the weights. The PAC-Bayes risk bound shows that for ALL other distributions Q on the weights, the sample risk (w.r.t. the samples in the data set) and expected risk (w.r.t. the distribution generating samples) of the random classifier chosen according to Q, averaged over Q, are close by a fudge factor that is the KL divergence of P and Q scaled by m^{-1} + some constant.\\n\\nNow, the authors first show that optimizing the objective of the Entropy SGD algorithm is equivalent to optimizing the empirical risk term + fudge term over all data dependent priors P and the best Q for that prior. However, the PAC-Bayes bound holds only when P is NOT dependent on the data. So the authors invoke results from differential privacy to show that as long as the prior choosing mechanism in the optimization algorithm is differentially private with respect to data, differentially private priors can be substituted for valid PAC-Bayes bounds rectifying the issue. They show that when Entropy SGD is implemented with pure Gibbs sampling steps (as in Algorithm 3), the bounds hold.\\n\\nThe weakness that remains is that the Gibbs sampling step in Entropy SGD (as in algo 3 in the appendix) is actually approximated by samples from SGLD that converges to this Gibbs distribution when run for infinite hops. The authors leave this hole unsolved. But under the very strong sampling assumption, the bound holds. The authors do some experiments with MNIST to demonstrate that their bounds are not trivial.\", \"strengths\": \"Simple connections between PAC-Bayes bound and entropy SGD objective is the first novelty. Invoking results from differential privacy for fixing the issue of validity of PAC-Bayes bound is the second novelty. Although technically the paper is not very deep, leveraging existing results (with strong assumptions) to show generalization properties of entropy-SGD is good.\", \"weakness\": \"a) Obvious issue : the analysis assumes the strong Gibbs sampling step.\\n b) Experimental results are ok. I see that the bounds computed are non-vacuous. - but can the authors clarify what exactly they seek to justify ? \\n c) Typos: \\n Page 4 footnote \\\"the local entropy should not be <with>..\\\" - with is missing.\\n Eq 14 typo - r(h) instead of e(h) \\n Definition A.2 in appendix - must have S and S' in the inequality - both seem S.\\n\\nd) Most important clarification: The way Thm 5.1, 5.2 and the exact Gibbs sampling step connect with each other to produce Thm 6.1 is in Thm B.1. How do multiple calls on the same data sample not degrade the loss? Explanation is needed. Because the whole process of optimization in TRAIN with many steps is the final 'data dependent prior choosing mechanism' that has to be shown to be differentially private. Can the authors argue why the number of iterations of this does not matter at all ?? If I run this long enough, and if I get several w's in the process (like step 8 repeated many times in algorithm 3) I should have more leakage about the data sample S intuitively, right ?\\n\\ne) The paper is unclear in many places. The intro could be better written to highlight the connection at the expression level of the PAC-Bayes bound and entropy SGD objective and the subsequent fix using a differentially private prior choosing mechanism to make the connection provably correct. Why are all the algorithms in the appendix on which the theorems are claimed in the paper ??\", \"final_decision\": \"I waver between 6 and 7 actually. However, I am willing to upgrade to 7 if the authors can provide sound arguments to my above concerns.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Weak Accept\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"1) I would like to ask for clarification regarding the generalization guarantees. The original Entropy-SGD paper shows improved generalization over SGD using uniform stability; however, the analysis of the authors relies on an unrealistic assumption regarding the eigenvalues of the Hessian (they are assumed to be away from zero, which is not true at least at local minima of interest). What is the enabling technique in this submission that avoids taking this assumption? (to clarify: the analysis is altogether different in both papers, however this aspect of the analysis is not fully clear to me).\\n2) It is unclear to me what are the unrealistic assumptions made in the paper. Please, list them all in one place in the paper and discuss in detail.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to your questions\", \"comment\": \"Thank you for your questions. We'll address both in turn, paraphrasing each as we understood it. We close with two remarks about uniform stability.\\n\\n1. In our paper we repeat the statement by Chaudhari et al. that their analysis has some violated assumptions about curvature. How does our result sidestep this issue with the curvature?\\n\\nOur PAC-Bayes bound is tight provided that the KL(Q||P) term is small. In our case, P is a Gaussian whose mean is differentially private. Q is then the corresponding Gibbs posterior. Whether the empirical risk surface near the mean of P is exactly flat or nearly flat does not matter. In both cases, Q and P will be nearly identical and KL(Q||P) will be very small. This is what we find empirically.\\n\\n2. What are the \\\"unrealistic assumptions\\\" you refer to?\", \"there_is_one_approximation_made_in_our_paper\": \"our \\\"privacy approximation\\\". We now discuss this approximation in Section 1, Introduction; Section 5, Data-dependent PAC-Bayes priors via differential privacy; Section 6, Numerical results on MNIST; and Section 7, Discussion.\\n\\nOur approximation is as follows (Section 1 and especially 5 give these details): The gold standard way to minimize a bounded function f is the exponential mechanism, namely generating a sample from the distribution with density exp(- c*f) where c > 0 is a constant. The bound on f and c determine the privacy. However, if f is high-dimensional and nonconvex, then exact sampling can be intractable. SGLD is a way to get an approximate sample, and it is known that the longer you run SGLD, the better the approximation. We approximate the exponential mechanism (i.e., an exact sample) with an approximate sample from SGLD, and calculate the privacy as if we got an exact sample. Differential privacy is a worst case framework and so we might not notice this approximation on \\\"nice\\\" data. An adversary might be able to exploit our approximation if they could carefully craft the data distribution. In the text, we point out that our bounds may be optimistic as a result, but they behave in a way that the theory predicts, and so we can still learn something from studying them.\\n\\nFinally, we'll make two remarks about the uniform stability of SGD and Entropy-SGD.\\n\\nFirst, the stability analysis in Chaudhari et al.'s Entropy-SGD paper does not account for the thermal noise required to get reasonable empirical results. Once you add in the amount of thermal noise they were advocating in their experiments, their result flips: Entropy-SGD is less stable. Our results actually point to using less thermal noise in order to get good generalization at the cost of excess empirical risk.\\n\\nSecond, in the now well-known \\\"Rethinking generalization\\\" paper by Zhang et al. 2017, the authors, who include Hardt and Recht themselves, say that the uniform stability result cannot explain the difference between the performance on random and true labels, because stability does not care about the labels. The uniform stability bounds degrade to vacuous bounds after several passes through the data. The same issues are relevant to the stability analyses of Entropy-SGD.\"}",
"{\"title\": \"Why our approximation is reasonable.\", \"comment\": \"We're rushing this out, because the ACs have to make up their minds today. We'll post this comment and then go back and re-read your last two comments. Sorry for the rush, but your comment only appeared to us on Jan 19.\\n\\nWe want to start by distinguishing Entropy-SGD, Entropy-SGLD in terms of their inner/outer loops. We believe these confusions are due to the first version of the paper where these issues were muddled. It should now be crystal clear in version 2.\", \"in_summary\": \"Entropy-SGD (original Chaudhari et al.)\", \"inner_loop\": \"SGLD\", \"outer_loop\": \"SGLD\", \"our_privacy_analysis_is_of_the_outer_loop_of_entropy_sgld\": \"When we talk about getting one sample, we mean running the OUTER loop of Entropy-SGLD many many times, and then only using the last sample. Your comment focuses on the INNER loop. The inner loop wouldn't be necessary if we could calculate the gradient of the local entropy exactly. However, this inner loop works very well because the Gaussian prior is so sharp/focused. Even if we had a perfect inner SGD, we would still need SGLD on the outer loop to get generalization bounds. It is the outer loop that's important. Yes, there's a little bit of bias on the gradient calculation, but we state we'll ignore this. In practice, changing the number of inner loop steps has no effect unless you choose a really small number. Subsequent work by Chaudhari et al. agrees with this.\", \"so_the_question_is\": \"is it reasonable to analyze many iterations of the outer-loop of Entropy-SGLD as having the same privacy as its limiting stationary distribution?\\n\\nFirst of all, experiments bear this out (or at least don't contradict this). Look at any of the random-label experiments. We would expect the generalization error to get worse and worse as information about the labels slips through and allows the network to overfit. But this doesn't happen.\\n\\nNow, to be clear, to produce our pretty figures, we are looking at the parameter at every stage. But this doesn't affect the bound we calculate at every point in time. (It would present problems if you wanted to use the figures to do early stopping, but I suspect one could do another analysis on the optional stopping time to do much better than a per-iteration analysis.)\"}",
"{\"title\": \"Summary of the major changes we made addressing reviewer feedback\", \"comment\": \"This comment summarizes the major changes we made to the document while addressing the reviewers' comments. We have also crafted responses to each individual reviewer.\\n\\nWe took all of the reviewers\\u2019 comments seriously and made extensive edits to the article. Some of the major changes include:\\n\\n1. stating our main results as theorems and writing up the analysis in the form of a proof. This should make our contributions clearer to readers. These results include: i) the connection between Entropy-SGD optimization and PAC-Bayes prior optimization, ii) our differentially private PAC-Bayes bound, iii) our privacy analysis for the data-dependent prior.\\n\\n2. giving a single unified description of the Entropy-SGD and Entropy-SGLD algorithms, so the difference is obvious.\\n\\n3. rewriting our differential privacy analysis, to make it easier for the reader to understand our assumptions/approximations.\\n\\n4. adding experiments comparing SGLD and Entropy-SGD at different levels of thermal noise, which highlights the role of thermal noise in generalization and the difference between empirical risk minimization and local entropy maximization.\\n\\n5. discussing the relationship between our differentially private PAC-Bayes priors and data-distribution-dependent priors.\"}",
"{\"title\": \"Privacy analysis of a Markov chain when result depends on last element\", \"comment\": \"I believe that our post above (\\\"Addressing the ONE sample issue.\\\") exactly hits on the reason why we've not been understanding each other.\\n\\nSo, yes, absolutely. If you get N samples and your mixing time is k, then the privacy of the ENTIRE trajectory is certainly no better than N/k releases.\\n\\nHowever, if you get N samples w_1,...,w_N, and then your algorithm returns g(w_N) as its output, where g(.) doesn't use data, then the privacy of our algorithm is *no worse than* the privacy of w_N. You don't pay for w_1,...,w_(N-1) because the privacy of g(w_N) depends only on its distribution and its distribution is independent of w_1,...,w_(N-1) *conditioned* on w_N.\\n\\nHope this clears things up.\"}",
"{\"title\": \"Misunderstanding is due to issues with version 1 versus version 2.\", \"comment\": \"OK. So this is our third response but now that we've reread your comment, we're certain that the issue you are highlighting is not a problem, and that the confusion stems from clarity issues in version 1 of the paper.\\n\\nBasically, in version 1, it sounded like we were analyzing a \\\"Perfect SGD\\\" algorithm, but actually we meant to communicate that we were analyzing a Perfect SGLD algorithm.\\n\\nEntropy-SGD is\", \"inner\": \"essentially exact gradient\", \"outer\": \"SGLD\\n\\nVersion 2 of the paper makes this crystal clear now. We apologize for Version 1 being so unclear. \\n\\nThe inner loop of Entropy-SGLD is sampling from the empirical risk surface TIMES an extremely-low-variance Gaussian. This sampling problem is VERY easy and gets even easier as you get to regions with low empirical risk. So ignoring the inner loop approximation is not really a concern for us.\\n\\nAgain, we are comforted by the fact that we NEVER see the empirical generalization error get worse over very long runs. We would expect it to if we were leaking information.\"}",
"{\"title\": \"Addressing the ONE sample issue.\", \"comment\": \"We think we understand the confusion about \\\"one\\\" sample. We'll address that first, and then address the inner loop issue.\", \"update\": \"Our short response (\\\"Privacy analysis of a Markov chain when result depends on last element\\\") below to your follow up may be a good place to start as it quickly lays out the technical issue with privacy analysis.\", \"focusing_on_the_outer_loop_and_ignoring_any_potential_bias_in_the_inner_loop\": \"SGLD produces a Markov chain w_1, w_2, ... where w_j is the vector of parameters after j iterations of SGLD.\\n\\nEarlier papers on SGLD show that, for large N, the distribution of w_N is close to the Gibbs distribution=exponential release. The privacy of w_N depends *only* on its distribution and so this is the basis of our approximation. In other words, if my algorithm uses data to compute w_1,...,w_N, and then only uses w_N for subsequent calculations, then the privacy of my algorithm is *no worse than* the privacy of w_N. Differential privacy is powerful.\\n\\nAt time step N, we use *only* w_N and no other w_j in our bound calculation. Therefore, the privacy we must pay for is the privacy of w_N alone. We don't have to pay for w_1,w_2,...,w_N-1. We say we use one sample because we use only the value of w_N to compute our bound at time step N.\\n\\nExisting analyses of SGLD analyze the privacy of the entire trajectory (w_1,w_2,...,w_N). This obviously leaks a lot of information! Indeed, say the chain mixes every k steps; then the privacy of the whole trajectory is at least as bad as N/k samples from the exponential mechanism. But existing analyses do not handle the fact that SGLD is asymptotically ergodic. They instead do a very coarse analysis of each step. Because they do this step by step analysis they MIGHT AS WELL release the whole trajectory, because their analysis is one for the whole trajectory.\\n\\nWe mentioned the experimental justification in the comment below (\\\"Why our approximation is reasonable.\\\") Just to reiterate, look at our random label experiments. (Figure 1, bottom right.) The true error here is 0.5 and so the generalization error is determined by the gap between training error and 0.5. \\n\\nIf running SGLD for many many steps caused leakage that allowed us to overfit, we would see the empirical generalization error (empirical risk - test error) increasing. However, we see in our experiments that the generalization error is not increasing over time. It's steady. This plot covers 16,000 passes through the data! (We actually ran this experiment for 100,000 passes through the data with no difference.)\\n\\nFinally, regarding the inner loop. Note that there's no inherent issue with the inner loop gradients leaking information. To see this, recall that exact gradients have no (!) privacy, but SGLD uses exact gradients. SGLD adds noise to the exact gradients and over time the magnitude of this noise dominates and FK dynamics take over and you converge to the exponential release. Our approximate gradients (inner SGLD loop) are not unbiased... that's the issue! (They have way more privacy than exact gradients.) It is possible that the bias in the gradients leads us to converge to a different Gibbs distribution, with different privacy. But, as we've argued, for the settings of Gaussian variance that we have studied, this inner sampling step is over a tiny (!) region in weight space and so we believe this sampling step is pretty straightforward. E.g., we saw absolutely no change when we doubled the number of inner iterations. So we don't actually think that there's much bias.\\n\\n\\nWe want to address one more issue about the motivation behind this approximation. In an ideal world, we would know what the privacy of Entropy-SGLD was, and we would plug that into our new differentially private PAC-Bayes bound. Current privacy analyses of SGLD are borderline useless because they don't deal with mixing. So we've made some strong approximations to see what type of bounds we might have gotten optimistically. And the paper makes it clear that things are optimistic. But we learn quite a bit from this exercise.\"}",
"{\"title\": \"Final comments\", \"comment\": \"I am not saying that the ideas in this paper are not good. I very much like these ideas. But it seems that the authors guarantee that every gradient step on the local entropy objective is privacy preserving and hence (by the results in this paper) implies valid generalization bounds (using PAC Bayes theory) on the network obtained after one such step.\\n\\nWhat is not clear to me is what happens if I run many gradient steps on the local entropy objective - intuitively, the privacy of multiple releases from the exponential mechanism would decay with iterations; this is not clear to me at all at this point. I think the next submission, after clearly clarifying these issues, would certainly be a nice one.\\n\\nBecause this aspect is unclear, I am uncertain about raising the scores.\"}",
"{\"title\": \"The issues you've raised have been addressed.\", \"comment\": \"Dear AnonReviewer1,\\n\\nWe have addressed all your concerns. We've also had a lengthy conversation with AnonReviewer3 around the privacy approximation. That reviewer appears to be now convinced of the reasonableness of our approximation. \\n\\nWe would very much appreciate if you could update your reviews/scores.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your feedback.\\n\\nYou raise two issues regarding novelty and clarity/presentation. We will address these in turn.\\n\\nRegarding novelty. We have recently presented this work to experts at a PAC-Bayes workshop. They expressed great interest in our results using differential privacy, and we have fielded a number of requests for preprints. Our private data-dependent priors can be viewed as a new type of data-distribution-dependent prior. The classical technique for dealing with data-distribution-dependent priors is due to Catoni and Lever et al., but these techniques have only been applied to Gibbs distributions, whereas our approach offers much more flexibility. We now explain this connection more carefully in the related work section. We believe that our approach opens up the avenue to more advanced uses of stable, data-dependent priors. \\n\\nBeyond connecting PAC-Bayes theory and privacy, our work makes a number of other contributions: \\n- We reveal the importance of the thermal noise to the generalization performance of Entropy-SGD, and tie this parameter to stability/privacy. We also make a detailed study of the role of thermal noise in overfitting on MNIST, not only for Entropy-SGD, but also for SGLD and Entropy-SGLD.\\n- We identify the deep connection between Entropy-SGD and PAC-Bayes bounds, which guides us to new ways to improve the generalization performance of Entropy-SGD. Our modifications lead to new learning algorithms that do not overfit, yet still have very good risk.\\n- We obtain risk/generalization bounds for neural networks that, up to our privacy approximation, are much tighter than any bounds previously published for MNIST.\\n\\nRegarding clarity/presentation. We have rewritten several sections in the paper using your feedback as a guideline. Our connection between Entropy-SGD and PAC-Bayes priors is now stated as a theorem and our argument is now structured as a proof. Our derivations concerning privacy are now also organized into a theorem in Section 5. Indeed, Section 5 has been reworked from the ground up to have much clearer logical structure. We have reproduced all figures with larger fonts and careful attention to readability. The organization of Figure 1 now makes it immediately clear which figures are on true or random labels, and which algorithms are being compared.\"}",
"{\"title\": \"Discussion\", \"comment\": \"Yes I have encouraged them to discuss and see if their impression of your paper has improved after your response.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper proposes a new analysis of the optimization method called entropy-sgd which seemingly leads to more robust neural network classifiers. This is a very important problem if successful. The reviewers are on the fence with this paper. On the one hand they appreciate the direction and theoretical contribution, while on the other they feel the assumptions are not clearly elucidated or justified. This is important for such a paper. The author responses have not helped in alleviating these concerns. As one of the reviewers points out, the writing needs a massive overhaul. I would suggest the authors clearly state their assumptions and corresponding justifications in future submissions of this work.\"}"
]
} |
rJWechg0Z | Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain Adaptation | [
"Pietro Morerio",
"Jacopo Cavazza",
"Vittorio Murino"
] | In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks. | [
"unsupervised domain adaptation",
"entropy minimization",
"image classification",
"deep transfer learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rJWechg0Z | https://openreview.net/forum?id=rJWechg0Z | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SkjcYkCgf",
"BkiyM2dgG",
"rkcg_HfXf",
"By07bPdwz",
"HkkI8HMQM",
"SkhhtBGXG",
"r15hYW5gM",
"SJJSYrGmG",
"SJAnG1TSf",
"HJGANV2Ez"
],
"note_type": [
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1512073619015,
1511731683141,
1514457073746,
1519051045818,
1514456646693,
1514457524124,
1511819698954,
1514457398681,
1517249206291,
1516156106180
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper399/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper399/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper399/Authors"
],
[
"ICLR.cc/2018/Conference/Paper399/Authors"
],
[
"ICLR.cc/2018/Conference/Paper399/Authors"
],
[
"ICLR.cc/2018/Conference/Paper399/Authors"
],
[
"ICLR.cc/2018/Conference/Paper399/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper399/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper399/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"This paper proposes a principled connection between correlation alignment and entropy minimization to achieve a more robust domain adaptation. The authors show the connection between the two approaches within a unified framework. The experimental results support the claims in the paper, and show the benefits over state-of-the-art methods such as DeepCoral.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The authors propose a novel deep learning approach which leverages on our finding that entropy minimization\\nis induced by the optimal alignment of second order statistics between source and target domains. Instead of relying on Euclidean distances when performing the alignment, the authors use geodesic distances which preserve the geometry of the manifolds. Among others, the authors also propose a handy way to cross-validate the model parameters on target data using the entropy criterion. The experimental validation is performed on benchmark datasets for image classification. Comparisons with the state-of-the-art approaches show that the proposed marginally improves the results. The paper is well written and easy to understand.\\n\\nAs a main difference from DeepCORAL method, this approach relies on the use of geodesic distances when doing the alignment of the distribution statistics, which turns out to be beneficial for improving the network performance on the target tasks. While I don't see this as substantial contribution to the field, I think that using the notion of geodesic distance in this context is novel. The experiments show the benefit over the Euclidean distance when applied to the datasets used in the paper. \\n\\nA lot of emphasis in the paper is put on the methodology part. The experiments could have been done more extensively, by also providing some visual examples of the aligned distributions and image features. 
This would allow the readers to further understand why the proposed alignment approach performs better than e.g. Deep Coral.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Need further exploration for the use of entropy to select free parameters; geodesic correlation alignment is a reasonable improvement\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper improves the correlation alignment approach to domain adaptation from two aspects. One is to replace the Euclidean distance by the geodesic Log-Euclidean distance between two covariance matrices. The other is to automatically select the balancing cost by the entropy on the target domain. Experiments are conducted from SVHN to MNIST and from SYN MNIST to SVHN. Additional experiments on cross-modality recognition are reported from RGB to depth.\", \"strengths\": [\"It is a sensible idea to improve the Euclidean distance by the geodesic Log-Euclidean distance to better explore the manifold structure of the PSD matrices.\", \"It is also interesting to choose the balancing cost using the entropy on the target. However, this point is worth further exploring (please see below for more detailed comments).\", \"The experiments show that the geodesic correlation alignment outperforms the original alignment method.\"], \"weaknesses\": \"- It is certainly interesting to have a scheme to automatically choose the hyper-parameters in unsupervised domain adaptation, and the entropy over the target seems like a reasonable choice. This point is worth further exploring for the following reasons. \\n1. The theoretical result is not convincing given it relies on many unrealistic assumptions, such as the null performance degradation under perfect correlation alignment, the Dirac\\u2019s delta function as the predictions over the target, etc.\\n2. The theorem actually does not favor the correlation alignment over the geodesic alignment. It does not explain that, in Figure 2, the entropy is able to find the best balancing cost \\\\lamba for geodesic alignment but not for the Euclidean alignment.\\n3. 
The entropy alignment seems to be an interesting criterion to explore in general. Could it be used to find fairly good hyper-parameters for the other methods? Could it be used to determine the other hyper-parameters (e.g., learning rate, early stopping) for the geodesic alignment? \\n4. If one leaves a subset of the target domain out and uses its labels for validation, how would the selected balancing cost \\\\lambda differ from that chosen by the entropy? \\n\\n- The cross-modality setup (from RGB to depth) is often not considered as domain adaptation. It would be better to replace it by another benchmark dataset. The Office-31 dataset is still a good benchmark to compare different methods and for the study in Section 5.1, though it is not necessary to reach state-of-the-art results on this dataset because, as the authors noted, it is almost saturated.\", \"question\": \"- I am not sure how the gradients were computed after the eigendecomposition in equation (8).\\n\\n\\nI like the idea of automatically choosing free parameters using the entropy over the target domain. However, instead of justifying this point by the theorem that relies on many assumptions, it is better to further test it using experiments (e.g., on Office31 and for other adaptation methods). The geodesic correlation alignment is a reasonable improvement over the Euclidean alignment.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We are thankful for the provided comments and we will respond (A) to each query (Q) in detail.\\n\\n\\nQ 1 - (a) Can entropy minimization on target be used with other methods for DA param tuning? (b) Does it require that the model was trained to minimize the geodesic correlation distance between source and target? \\n\\nA 1 - (a) Let us point out that we are not minimizing entropy on the target as a regularizing training loss, as previous works did (Tzeng et al. 2015, Haeusser et al. 2017 or Carlucci et al. 2017). For the latter methods, entropy cannot be used as a criterion for parameter tuning, since it is one of the quantities explicitly optimized in the problem. Differently, we obtain the minimum of the entropy as a consequence of an optimal correlation alignment. Such criterion could possibly be used for other methods aiming at source-target distribution alignment. (b) Alignment does not *explicitly* require a geodesic distance. However, since the former must be optimal, it cannot be attained with an Euclidean distance, which is the reason why we propose the log-Euclidean one.\\n\\n\\nQ 2. - It would be helpful to have a longer discussion on the connection with Geodesic flow kernel [1] and other unsupervised manifold based alignment methods [2]. Is this proposed approach an extension of this prior work to the case of non-fixed representations in the same way that Deep CORAL generalized CORAL? \\n[1] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.\\n[2] Raghuraman Gopalan,, Ruonan Li and Rama Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011. 
\\n\\nA 2 - The works [1,2] are kernelized approaches in which, by either using Principal Components Analysis [1] or Partial Least Squares [2], a sequence of intermediate embeddings is generated as a smooth transition from the source to the target domain. In [1], such a sequence is implicitly computed by means of a kernel function which is subsequently used for classification. In [2], after the source data are projected on hand-crafted intermediate subspaces, classification is performed. \\nIn [1] and [2], the necessity for engineering intermediate embeddings is motivated by the need for adapting the fixed input representation so that the domain shift can be solved. As a way to do it, [1] and [2] follow the geodesics on the data manifold. \\nIn much the same way, our proposed approach, MECA, follows the geodesics on the manifold (of second order statistics), but, differently, this step is aimed at better guiding the feature learning stage. \\nFor all these reasons, MECA and [1,2] can be seen as different manners of exploiting geodesic alignment for the sake of domain adaptation.\\n\\n\\nQ 3. - Why does performance suffer compared to TRIPLE on the SYN->SVHN task? Is there some benefit to the TRIPLE method which may be combined with the MECA approach? \\n\\nA 3 - As we argued in the paper, the performance on the SYN to SVHN task is due to the visual similarity between source and target domain, whose relative data distributions are already quite aligned. Also note that TRIPLE already performs better than direct training on the target domain. This could be interpreted as a cue for TRIPLE to perform implicit data augmentation on the source synthetic data (and, indeed, the same could be done in MECA, trying to boost its performance by means of data augmentation). 
However, when more realistic datasets are used as source, such a procedure becomes more difficult to accomplish and that\\u2019s why, on all the other benchmarks, TRIPLE is inferior to MECA in terms of performance.\"}",
"{\"title\": \"Github code\", \"comment\": \"https://github.com/pmorerio/minimal-entropy-correlation-alignment\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We are thankful for the detailed reading and careful evaluation of our work.\\n\\n\\nBy following the proposed suggestion, we added to the Appendix some t-SNE visualizations in which we compare our baseline network with no adaptation against Deep CORAL and MECA on the SVHN to MNIST benchmark. As the we observed, Deep CORAL and MECA achieve a better separation among classes - confirming the quantitative results of Table 1. \\n\\nMoreover, when looking at the degree of confusion between source and target domain achieved within each digit\\u2019s class, we can qualitatively show that MECA is better in \\u201cshuffling\\u201d source and target data than Deep CORAL, in which the two are close but much more separated. This can be read as an additional, qualitative evidence of the superiority of the proposed geodesic over the Euclidean alignment. \\n\\nThese considerations and further remarks have been discussed in the revised paper (appendix).\"}",
"{\"title\": \"Response to AnonReviewer1 - part 2\", \"comment\": \"W 5. - The cross-modality setup (from RGB to depth) is often not considered as domain adaptation. It would be better to replace it by another benchmark dataset. The Office-31 dataset is still a good benchmark to compare different methods and for the study in Section 5.1, though it is not necessary to reach state-of-the-art results on this dataset because, as the authors noted, it is almost saturated.\\n\\nIn domain adaptation, the equivalence between domain and dataset is not automatic and some works have been operating in the direction of discovering domains as a subpart of a dataset (e.g., Gong et al. Reshaping Visual Datasets for Domain Adaptation - NIPS 2013). In this respect, the NYU dataset can be used to quantify adaptation across different sensor modalities within the same dataset.\", \"the_nyu_experiment_we_carried_out_was_also_considered_in_the_following_recent_domain_adaptation_works\": \"Tzeng et al. \\u201cAdversarial Discriminative Domain Adaptation ICCV 2017\\u201d and Volpi et al. \\u201cAdversarial Feature Augmentation for Unsupervised Domain Adaptation\\u201d ArXiv 2017. We believe such experiment adds a considerable value to our work and we would like to maintain it.\\nIn any case, after the reviewer\\u2019s suggestion, we are now running the Office-31 experiments. Preliminary results on the Amazon->Webcam split are in line with those already in the paper and coherent with the ones published in Sun & Saenko, 2016: Baseline (no adapt) 58.1%, Deep-Coral +5.9%, MECA +8.7% (Note that we use a VGG as a baseline architecture, while Sun & Saenko, 2016 use AlexNet).\\n---\\n\\nQ 1. - I am not sure how the gradients were computed after the eigendecomposition in equation (8). 
\\n\\nAs is common practice, we let the software library automatically compute the gradients along the computation graph, given the fact that the additive regularizer that we wrote is nothing but a differentiable composition of elementary functions such as logarithms and square exponentiation. Although it\\u2019s possible to explicitly write down the gradients with formulas, such explicit formalism is not of particular interest and we decided to remove such calculations from the paper in order to reduce verbosity.\"}",
"{\"title\": \"New correlation alignment based domain adaptation method which results in minimal target entropy\", \"rating\": \"7: Good paper, accept\", \"review\": \"Summary:\\nThis paper proposes minimal-entropy correlation alignment, an unsupervised domain adaptation algorithm which links together two prior class of methods: entropy minimization and correlation alignment. Interesting new idea. Make a simple change in the distance function and now can perform adaptation which aligns with minimal entropy on target domain and thus can allow for removal of hyperparameter (or automatic validation of correct one).\\n\\nStrengths\\n- The paper is clearly written and effectively makes a simple claim that geodesic distance minimization is better aligned to final performance than euclidean distance minimization between source and target. \\n- Figures 1 and 2 (right side) are particularly useful for fast understanding of the concept and main result.\\n\\n\\nQuestions/Concerns:\\n- Can entropy minimization on target be used with other methods for DA param tuning? Does it require that the model was trained to minimize the geodesic correlation distance between source and target?\\n- It would be helpful to have a longer discussion on the connection with Geodesic flow kernel [1] and other unsupervised manifold based alignment methods [2]. Is this proposed approach an extension of this prior work to the case of non-fixed representations in the same way that Deep CORAL generalized CORAL?\\n- Why does performance suffer compared to TRIPLE on the SYN->SVHN task? Is there some benefit to the TRIPLE method which may be combined with the MECA approach?\\n\\n\\t\\t\\t\\t\\t\\n[1] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.\\n\\t\\t\\t\\t\\t\\n[2] Raghuraman Gopalan and Ruonan Li. Domain adaptation for object recognition: An unsupervised approach. 
In ICCV, 2011.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Response to AnonReviewer1 - part 1\", \"comment\": \"We thank the reviewer for having read our work with great detail and for the valuable suggestions. We will address all quoted weaknesses (W) and questions (Q) separately.\\n\\n\\nW 1. - The theoretical result is not convincing given it relies on many unrealistic assumptions, such as the null performance degradation under perfect correlation alignment, the Dirac\\u2019s delta function as the predictions over the target, etc. \\n \\nIn Theorem 1, by assuming the optimal correlation alignment, we can prove that entropy is minimized (which, ancillary, implies the Dirac\\u2019s delta function for the predictions). Under a theoretical standpoint, the strong assumption is balanced by the significant claim we have proved. In practical terms, the reviewers is right in observing that the optimal alignment is not granted for free, and this justifies the choice of a more sound metric for correlation alignment. That\\u2019s why we proposed the log-Euclidean distance to make the alignment closer to the optimal one.\\n--\\n\\nW 2. - The theorem actually does not favor the correlation alignment over the geodesic alignment. It does not explain that, in Figure 2, the entropy is able to find the best balancing cost \\\\lamba for geodesic alignment but not for the Euclidean alignment. \\n\\nAs we showed in Figure 2, in the case of geodesic alignment, entropy minimization always correlate with the optimal performance on the target domain. Since the same does not always happen when an Euclidean metric is used, this is an evidence that Euclidean alignment is not able to achieve an optimal correlation alignment which, in comparison, is better achieved through our geodesic approach. \\n--\\n\\nW 3. - The entropy alignment seems an interesting criterion to explore in general. Could it be used to find fairly good hyper-parameters for the other methods? 
Could it be used to determine the other hyper-parameters (e.g., learning rate, early stopping) for the geodesic alignment?\\n\\nIt does make sense to fine-tune \\\\lambda by using target entropy since, ultimately, a low entropy on the target is a proxy for a confident classifier whose predictions are peaky. In other words, since \\\\lambda regulates the effect of the correlation alignment, it also balances the capability of a classifier trained on the source to perform well on the target. Since in our pipeline \\\\lambda is the only parameter related to domain adaptation, we deem our choice quite natural. In fact, other free parameters (learning rate, early stopping) are not related to adaptation, but to the regular training of the deep neural network, and can actually be determined by using source data only - as we did in our experiments.\\n--\\n\\nW 4. - If one leaves a subset of the target domain out and use its labels for validation, how different would the selected balancing cost \\\\lambda differ from that by the entropy?\\n\\nThe availability of a few labeled samples from the target domain would cast the problem into semi-supervised domain adaptation. Instead, our work faces the more challenging unsupervised scenario. \\nIndeed, we propose an unsupervised method which leads to the same results as using labelled target samples for validation. This is shown in the top-right of Figure 2: the blue curve accounts for the best target performance, which is computed by means of target test labels - thus not accessible during training. Differently, the red curve can be computed at training time since the entropy criterion is fully unsupervised. \\nFigure 2 shows that the proposed criterion is effectively able to select the \\\\lambda which corresponds to the best target performance that one could achieve if one was allowed to use target labels. 
Notice that the same does not happen for Deep CORAL (bottom-right) - and the reported results for that competitor were obtained by direct validation on the target.\\n--\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper presents a nice approach to domain adaptation that improves empirically upon previous work, while also simplifying tuning and learning.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Comments after reading the rebuttal\", \"comment\": \"The rebuttal addresses most of my questions. Here are two more cents.\\n\\nThe theorem still does not favor the correlation alignment over the geodesic alignment. What Figure 2 shows is an empirical observation but the theorem itself does not lead to the result.\\n\\nI still do not think the cross-modality setup is appropriate for studying domain adaptation. That would result in disparate supports to the distributions of the two domains. In general, it is hard to adapt between two such \\\"domains\\\" though the additional pairwise relation between the data points of the two \\\"domains\\\" could help. Moreover, there has been a rich literature on multi-modality data. It is not a good idea to term it with a new name and meanwhile ignore the existing works on multi-modalities.\"}"
]
} |
HkinqfbAb | Automatic Parameter Tying in Neural Networks | [
"Yibo Yang",
"Nicholas Ruozzi",
"Vibhav Gogate"
] | Recently, there has been growing interest in methods that perform neural network compression, namely techniques that attempt to substantially reduce the size of a neural network without significant reduction in performance. However, most existing methods are post-processing approaches in that they take a learned neural network as input and output a compressed network by either forcing several parameters to take the same value (parameter tying via quantization) or pruning irrelevant edges (pruning) or both. In this paper, we propose a novel algorithm that jointly learns and compresses a neural network. The key idea in our approach is to change the optimization criteria by adding $k$ independent Gaussian priors over the parameters and a sparsity penalty. We show that our approach is easy to implement using existing neural network libraries, generalizes L1 and L2 regularization and elegantly enforces parameter tying as well as pruning constraints. Experimentally, we demonstrate that our new algorithm yields state-of-the-art compression on several standard benchmarks with minimal loss in accuracy while requiring little to no hyperparameter tuning as compared with related, competing approaches. | [
"neural network",
"quantization",
"compression"
] | Reject | https://openreview.net/pdf?id=HkinqfbAb | https://openreview.net/forum?id=HkinqfbAb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S14u6LpQG",
"r1xnEJpBM",
"HyrzaIp7G",
"Byk0Q2_xz",
"H1J8TLamG",
"Hy-t_ztgG",
"Bky9cL_eG",
"S13TkvpQz"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1515183467996,
1517249704153,
1515183372677,
1511732167425,
1515183430923,
1511757944632,
1511709318701,
1515184068338
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper909/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper909/Authors"
],
[
"ICLR.cc/2018/Conference/Paper909/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper909/Authors"
],
[
"ICLR.cc/2018/Conference/Paper909/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper909/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper909/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Author Rebuttal\", \"comment\": \"Thanks for the feedback. In table 2 of our revised paper, we added a new experiment that compares with Bayesian compression on VGG16 on CIFAR-10. This is comparable to major existing work (that we\\u2019re aware of) on compressing neural networks. In table 2 we also compare to Deep Compression and GMM prior at the same level of classification accuracy, to address concerns about the accuracy loss in our method.\\n\\nAs with most machine learning methods, some tuning may be needed for optimal performance. In our experiments we simply tried K-1 (number of non-zero parameter values) on a log scale of 4, 8, 16\\u2026, and settled on the first K that gave acceptable accuracy loss. The k-means and l1 penalty factors, lamba_1 and lambda_2, were tuned in [1e-6, 1e-3] with a combination of grid search and manual tuning. We believe this is less tuning compared to probabilistic methods like GMM or scaled mixture priors using many more parameters/hyperparameters that are less intuitive and often require careful initialization. In fact, the main reason why we couldn\\u2019t compare kmeans with GMM prior ourselves on larger datasets was the latter required significantly more computation and tuning (we were often unable to get it to work).\", \"regarding_additional_comments\": \"a). Fixed in revised paper.\\nb). Fixed in revised paper. \\nc). See section 3 of revised paper; as described, at the beginning of the 1D kmeans algorithm, we sort the parameters on the number line and initialize K partitions corresponding to the K clusters; E-step then simplifies to re-drawing the partitions given partition means (requiring K binary-searches), and M-step recalculates partition means (partition sums/sizes are maintained for efficiency).\\nd). Fixed in revised paper; we show the typical training dynamics in figures 3 and 4.\\ne). Thanks for catching this; we used the wrong image where K=7. See the correct one with K=8 in revised paper.\\nf). 
You mean methods like Optimal Brain Damage and Optimal Brain Surgeon? Admittedly the kmeans distortion is only a rough surrogate for the actual quantization loss, but we found it sufficient for compression; the fact that our method doesn\\u2019t use more sophisticated techniques such as second order information means it adds very little overhead to training. Again, we\\u2019re striving for simplicity and efficiency.\\ng). By sparsity we meant the fraction of parameters that are zero.\\nh). Fixed in revised paper; added new discussion in the results section about the observed structured sparsity (entire units and filters being pruned); we also observed this with l1 alone, but to a lesser extent and with more accuracy loss.\\ni). Fixed in revised paper.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper presents yet another scheme for weight tying for compressing neural networks, which looks a lot like a less Bayesian version of recent related work, and gets good empirical results on realistic problems.\\n\\nThis paper is well-executed and is a good contribution, but falls below the bar on\\n1) Discovering something new and surprising, except that this particular method (which is nice and simple and sensible) works well. That is, it doesn't advance the conversation or open up new directions.\\n2) Potential impact (although it might be industrially relevant)\\n\\nAlso, the title is a bit overly broad given the amount of similar existing work.\"}",
"{\"title\": \"Author rebuttal\", \"comment\": \"Please see the revised paper for a clearer discussion of our proposed method. L1 penalty is indeed used for soft-tying in the sparse-formulation, and yes the hard-tying stage fixes cluster assignments, which is essentially the same as the Hashed Net method except that the assignments are learned from the soft-tying stage, instead of being random.\\nFollowing our discussion in section 3 and 4.1, randomly (hard) tying parameters corresponds to restricting the solution to a random, low dimensional linear subspace; for (especially deep) neural networks that are already hard to train, this extra restriction would significantly hamper learning. The idea is illustrated by figure 5(a) with smaller K and 5(b) for t=20000. Hashed Net effectively uses a very large K with random tying, which poses little/no problem to training, but a larger K would result in degraded compression efficiency for our method. We found soft-tying to be crucial in guiding the parameters to the \\u201cright\\u201d linear subspace (determined by the assignments, which is itself iteratively improved), such that the projection of parameters onto it is minimized, leading to small accuracy loss when switching to hard-tying; so in this sense we don\\u2019t think it\\u2019s the same as pre-training the model. That said, starting from a pre-trained solution does seem to make the soft-tying phase easier.\\n\\nThe reference error (no regularization) VGG11 on CIFAR10 in our experiment was about 21%, the same as training with sparse APT from scratch; we apologize for failing to mention that. We replaced this part of experiment with VGG16 (15 million parameters) in the revised paper, to compare with Bayesian compression (Louizos et al. 2017). We agree that the number of parameters (and more generally the architecture) does influence the difficulty of optimization and extent to which a network can be compressed. 
\\n\\nHopefully we made it clear in the revised paper that the kmeans prior for quantization alone is not enough for compression; e.g., K=2 (storing 32-bit floats as 1-bit indices) would roughly give a compression rate (without post-processing) of only 32 and likely high accuracy loss with our current formulation. We did a small-scale evaluation of the l1 penalty alone followed by thresholding for compression, and didn\\u2019t find it as effective as kmeans+l1 for achieving sparsity. Note that the Deep Compression work already did an ablation test and reported compression rates with pruning (l1+thresholding) only, and we didn\\u2019t find it necessary to repeat this work, since we use the same compression format as theirs. Please see revised table 2 for our method\\u2019s performance at the same classification error as Deep Compression and Soft Weight Sharing (GMM prior), to clear up the concern with accuracy loss in our method.\\n\\nRegarding the minor issues:\\n-We feel that many existing methods can be difficult/expensive to apply in practice, and our method has the virtue of being very simple, easy to implement, and efficient (linear time/memory overhead) while achieving good practical performance without much tuning.\\n\\n-See figure 5(b) added in the appendix.\\n\\n-As we discuss at the end of sec 3.1 in the revised paper, at the end of soft-tying we identify the zero cluster as the one with smallest magnitude, and fix it at zero throughout hard-tying. It is possible to use a threshold to prune multiple clusters of parameters that are near zero, but generally we didn\\u2019t find it necessary, as a large zero cluster naturally develops during soft-tying for properly chosen K.\\n\\n-We weren\\u2019t aware of this work; thanks for pointing it out. We\\u2019ve added some relevant discussion. 
The biggest difference compared to our method is that our formulation uses hard assignments even in the soft-tying phase, whereas their method calculates soft-assignment responsibilities of cluster centers for each parameter (similar to the GMM case) and that could take O(NK) time/memory. They achieved smaller accuracy loss on CIFAR-10 than ours, but with K=75 (instead of our 33). However, it\\u2019s not clear how much computation was actually involved.\"}",
"{\"title\": \"comments on K-means and L1 regularization\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This is yet another paper on parameter tying and compression of DNNs/CNNs. The key idea here is a soft parameter tying under the K-means regularization on top of which an L1 regularization is further imposed for promoting sparsity. This strategy seems to help the hard tying in a later stage while keeping decent performance. The idea is sort of interesting and the reported experimental results appear to be supportive. However, I have following concerns/comments.\\n\\n1. The roles played by K-means and L1 regularization are a little confusing from the paper. In Eq.3, it appears that the L1 regularization is always used in optimization. However, in Eq.4, the L1 norm is not included. So the question is, in the soft-tying step, is L1 regularization always used? Or a more general question, how important is it to regularize the cross-entropy with both K-means and L1? \\n\\n2. A follow-up question on K-means and L1. If no L1 regularization, does the K-means soft-tying followed by a hard-tying work as well as using the L1 regularization throughout? \\n\\n3. It would be helpful to say a few words on the storage of the model parameters. \\n\\n4. It would be helpful to show if the proposed technique work well on sequential models like LSTMs.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Author Rebuttal\", \"comment\": \"1. Please see the revised paper for a clearer discussion of our method. We use kmeans prior alone in our investigation of automatic parameter tying (e.g., in the sections on algorithmic behavior and generalization effect); we always use the L1 norm together with kmeans for the purpose of compression, since quantization alone is not sufficient for state of the art compression (e.g., for 32-bit floats, K=2 roughly gives compression rate of 32; to compress well over 100 times would require K=1, i.e. the entire network using a single parameter value which is infeasible).\\n\\n2. The answer is no, as explained above. \\n\\n3. As we discussed in section 3 of revised paper, we implemented the sparse encoding scheme proposed by Han, following the detailed appendix in Ullrich. Basically network parameters are stored as CSR matrices (we append the bias from each layer as extra an column to the layer\\u2019s weight matrix); the CSR data structures (indices for the position of non-sparse entries, as well as assignment indices) are further compressed with standard Huffman coding.\\n\\n4. We haven\\u2019t gotten an opportunity to investigate sequential models like LSTMs, but we don\\u2019t think anything particular about them may prevent our method from being used. It might require some more tuning to make sure the pull from the cluster centers aren\\u2019t strong enough to overpower the gradient signals from data loss, and might require initializing to a pre-trained solution rather than from scratch. That said, we\\u2019ve found our method to be rather agnostic towards the nature of different parameters in the network (e.g. weights/biases in all conv/fc layers, along with batch normalization parameters), so it should be able to handle things like memory cells/gates.\"}",
"{\"title\": \"Simple and effective compression method, but needs refinement and large-scale experiments\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"As the authors mentioned, weight-sharing and pruning are not new to neural network compression. The proposed method resembles a lot with the deep compression work (Han et. al. 2016), with the distinction of clustering across different layers and a Lasso regularizer to encourage sparsity of the weights. Even though the change seems minimal, the authors has demonstrated the effectiveness on the benchmark.\\n\\nBut the description of the optimization strategy in Section 3 needs some refinement. In the soft-tying stage, why only the regularizer (1) is considered, not the sparsity one? In the hard-tying stage, would the clustering change in each iteration? If not, this has reduced to the constrained problem as in the Hashed Compression work (Chen et. al. 2015) where the regularizer (1) has no effect since the clustering is fixed and all the weights in the same cluster are equal. Even though it is claimed that the proposed method does not require a pre-trained model to initialize, the soft-tying stage seems to take the responsibility to \\\"pre-train\\\" the model.\\n\\nThe experiment section is a weak point. It is much less convincing with no comparison result of compression on large neural networks and large datasets. The only compression result on large neural network (VGG-11) comes with no baseline comparisons. But it already tells something: 1) what is the classification result for reference network without compression? 2) the compression ratio has significantly reduced comparing with those for MNIST. 
It is hard to say if the compression performance could generalize to large networks.\\n\\nAlso, it would be good to have an ablation test on different parts of the objective function and the two optimization stages to show the importance of each part, especially the removal of the soft-tying stage and the L1 regularizer versus a simple pruning technique after each iteration. This maybe a minor issue, but would be interesting to know: what would the compression performance be if the classification accuracy maintains the same level as that of the deep compression. As discussed in the paper, it is a trade-off between accuracy and compression. The network could be compressed to very small size but with significant accuracy loss.\", \"some_minor_issues\": [\"In Section 1, the authors discussed a bunch of pitfalls of existing compression techniques, such as large number of parameters, local minimum issues and layer-wise approaches. It would be clearer if the authors could explicitly and succinctly discuss which pitfalls are resolved and how by the proposed method towards the end of the Introduction section.\", \"In Section 4.2, the authors discussed the insensitivity of the proposed method to switching frequency. But there is no quantitative results shown to support the claims.\", \"What is the threshold for pruning zero weight used in Table 2?\", \"There are many references and comparisons missing: Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations in NIPS 17 for instance. This paper also considers quantization for compression which is related to this work.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Automatic Parameter Tying in Neural Networks\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Approach is interesting however my main reservation is with the data set used for experiments and making general (!) conclusions. MNIST, CIFAR-10 are too simple tasks perhaps suitable for debugging but not for a comprehensive validation of quantization/compression techniques. Looking at the results, I see a horrific degradation of 25-43% relative to DC baseline despite being told about only a minimal loss in accuracy. A number of general statements is made based on MNIST data, such as on page 3 when comparing GMM and k-means priors, on page 7 and 8 when claiming that parameter tying and sparsity do not act strongly to improve generalization. In addition, by making a list of all hyper parameters you tuned I am not confident that your claim that this approach requires less tuning.\", \"additional_comments\": \"(a) you did not mention student-teacher training\\n(b) reference to previously not introduced K-means prior at the end of section 1\\n(c) what is that special version of 1-D K-means?\\n(d) Beginning of section 4.1 is hard to follow as you are referring to some experiments not shown in the paper.\\n(e) Where is 8th cluster hiding in Figure 1b?\\n(f) Any comparison to a classic compression technique would be beneficial.\\n(g) You are referring to a sparsity at the end of page 8 without formally defining it. \\n(h) Can you label each subfigure in Figure 3 so I do not need to refer to the caption? Can you discuss this diagram in the main text, otherwise what is the point of dumping it in the appendix?\\n(i) I do not understand Figure 4 without explanation.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Updated submission\", \"comment\": \"We have updated the draft to address the reviewers concerns (plus rearranging to still keep it within the 8 page limit). Most notably, we have added additional experiments on VGG-16 and improved the clarity of the presentation by adding additional details in both the algorithm description and experimental sections.\"}"
]
} |
Syhr6pxCW | PixelNN: Example-based Image Synthesis | [
"Aayush Bansal",
"Yaser Sheikh",
"Deva Ramanan"
] | We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an ``incomplete'' signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem. (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to a (overly-smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars. We demonstrate our approach for various input modalities, and for various domains ranging from human faces, pets, shoes, and handbags. | [
"conditional image synthesis",
"nearest neighbors"
] | Accept (Poster) | https://openreview.net/pdf?id=Syhr6pxCW | https://openreview.net/forum?id=Syhr6pxCW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"ry2edZQMf",
"SJt3bbKgz",
"SyubVypHG",
"BJxxa-XzG",
"BJ4AfUoeG",
"SJt9X9kWz",
"HJ9SiWmfM"
],
"note_type": [
"official_comment",
"official_review",
"decision",
"official_comment",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1513457652274,
1511752112648,
1517249536080,
1513458920398,
1511903947598,
1512182688724,
1513458498366
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper432/Authors"
],
[
"ICLR.cc/2018/Conference/Paper432/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper432/Authors"
],
[
"ICLR.cc/2018/Conference/Paper432/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper432/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper432/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Thanks for the positive feedback and suggestion to improve writing\", \"comment\": \"We thank the reviewer for the suggestion to improve the writing, and will incorporate these suggestions in our final version.\\n\\n1. \\\"The spatial grouping that is happening in the compositional stage, is it solely due to the multi-scale hypercolumns? Would the result be more inconsistent if the hypercolumns had smaller receptive field?\\\" \\n\\nYes, we think so. We believe that much of the spatial grouping is due to the multi-scale hypercolumns. The results degrade with smaller receptive fields.\\n\\n2. \\\"For the multiple outputs, the k neighbor is selected at random?\\\" \\n\\nYes, the k-neighbors are selected at random as described in \\\"Efficient Search\\\" on page-6. We will clarify this.\"}",
"{\"title\": \"Nice approach on conditional image generation\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"Overall I like the paper and the results look nice in a diverse set of datasets and tasks such as edge-to-image, super-resolution, etc. Unlike the generative distribution sampling of GANs, the method provides an interesting compositional scheme, where the low frequencies are regressed and the high frequencies are obtained by \\\"copying\\\" patches from the training set. In some cases the results are similar to pix-to-pix (also in the numerical evaluation) but the method allows for one-to-many image generation, which is a important contribution. Another positive aspect of the paper is that the synthesis results can be analyzed, providing insights for the generation process.\\n\\nWhile most of the paper is well written, some parts are difficult to parse. For example, the introduction has some parts that look more like related work (that is mostly a personal preference in writting). Also in Section 3, the paragraph for distance functions do not provide any insight about what is used, but it is included in the next paragraph (I would suggest either merging or not highlighting the paragraphs).\", \"q\": \"For the multiple outputs, the k neighbor is selected at random?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper proposes a novel method for conditional image generation which is based on nearest neighbor matching for transferring high-frequency statistics. The evaluation is carried out on several image synthesis tasks, where the technique is shown to perform better than an adversarial baseline.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Thanks for the positive feedback\", \"comment\": \"We thank the reviewer for their feedback.\\n\\n1. \\\"Requires a potentially costly search procedure to generate images.\\\" -\\n\\nWe agree that this approach could be computationally expensive in its naive form. However, the use of optimized libraries such as FAISS, FLAWN etc. can be used to reduce the run-time. Similar to CNNs, the use of parallel processing modules such as GPUs could drastically reduce the time spent on search procedure.\\n\\n\\n2. \\\"Seems to require relevant objects and textures to be present in the training set in order to succeed at any given conditional image generation task.\\\"\\n\\nWe agree. However, this criticism could also be applied to most learning-based models (including CNNs and GANs, as R3 points out).\"}",
"{\"title\": \"Shines Light on Deficiencies in Conditional GAN: borderline accept\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper presents a pixel-matching based approach to synthesizing RGB images from input edge or normal maps. The approach is compared to Isola et al\\u2019s conditional adversarial networks, and unlike the conditional GAN, is able to produce a diverse set of outputs.\\n\\nOverall, the paper describes a computer visions system based on synthesizing images, and not necessarily a new theoretical framework to compete with GANs. With the current focus of the paper being the proposed system, it is interesting to the computer vision community. However, if one views the paper in a different light, namely showing some \\u201cblind-spots\\u201d of current conditional GAN approaches like lack of diversity, then it can be of much more interest to the broader ICLR community.\", \"pros\": \"Overall the paper is well-written\\nMakes a strong case that random noise injection inside conditional GANs does not produce enough diversity\\nShows a number of qualitative and quantitative results\", \"concerns_about_the_paper\": \"1.) It is not clear how well the proposed approach works with CNN architectures other than PixelNet\\n2.) Since the paper used \\u201cthe pre-trained PixelNet to extract surface normal and edge maps\\u201d for ground-truth generation, it is not clear whether the approach will work as well when the input is a ground-truth semantic segmentation map.\\n3.) Since the paper describes a computer-vision image synthesis system and not a new theoretical result, I believe reporting the actual run-time of the system will make the paper stronger. Can PixelNN run in real-time? How does the timing compare to Isola et al\\u2019s Conditional GAN?\", \"minor_comments\": \"1.) 
The paper mentions making predictions from \\u201cincomplete\\u201d input several times, but in all experiments, the input is an edge map, normal map, or low-resolution image. When reading the manuscript the first time, I was expecting experiments on images that have regions that are visible and regions that are masked out. However, I am not sure if the confusion is solely mine, or shared with other readers.\\n\\n2.) Equation 1 contains the norm operator twice, and the first norm has no subscript, while the second one has an l_2 subscript. I would expect the notation style to be consistent within a single equation (i.e., use ||w||_2^2, ||w||^2, or ||w||_{l_2}^2)\\n\\n3.) Table 1 has two sub-tables: left and right. The sub-tables have the AP column in different places.\\n\\n4.) \\u201cDense pixel-level correspondences\\u201d are discussed but not evaluated.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Simple and effective baseline for conditional image generation\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposes a compositional nearest-neighbors approach to image synthesis, including results on several conditional image generation datasets.\", \"pros\": [\"Simple approach based on nearest-neighbors, likely easier to train compared to GANs.\", \"Scales to high-resolution images.\"], \"cons\": [\"Requires a potentially costly search procedure to generate images.\", \"Seems to require relevant objects and textures to be present in the training set in order to succeed at any given conditional image generation task.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Thanks for the insightful comments and positive feedback\", \"comment\": \"We thank the reviewer for their comments and suggestions, and appreciate their effort to highlight our work for a broader ICLR community. We will incorporate the suggestions provided in the reviews.\\n\\n1. \\\"It is not clear how well the proposed approach works with CNN architectures other than PixelNet\\\"\\n\\nWe will add experiments with other architectures. However, we believe that our approach is agnostic of a pixel-level CNN used for regression. We used PixelNet because it had been shown to work well for the various pixel-level tasks, particularly the inverse of our synthesis problems (i.e., predicting surface normals and edges from images). The use of a single network architecture for our various synthesis problems reduces variability due to the regressor and lets us focus on the nearest neighbor stage. \\n\\n2. \\\"Since the paper used \\u201cthe pre-trained PixelNet to extract surface normal and edge maps\\u201d for ground-truth generation, it is not clear whether the approach will work as well when the input is a ground-truth semantic segmentation map.\\n\\nThis is an interesting question. We have initial results that synthesize faces from the Helen Face dataset (Smith et al, CVPR 2013) from ground-truth segmentation masks. We see qualitatively similar behaviour. In many cases we even see better performance because the input signal (i.e., the ground-truth segmentation labels) are of higher quality than the edges/normals we condition on. We will add such an analysis and discussion.\\n\\n3. \\\"Since the paper describes a computer-vision image synthesis system and not a new theoretical result, I believe reporting the actual run-time of the system will make the paper stronger. Can PixelNN run in real-time? How does the timing compare to Isola et al\\u2019s Conditional GAN?\\\" \\n\\nOur approximate neighbor neighbor search (described on Page 6) takes .2 fps. 
We did not optimize our approach for speed. Importantly, we make use of a single CPU to perform our nearest neighbor search, while Isola et al makes use of a GPU. We posit that GPU-based nearest-neighbor libraries (e.g., FAISS) will allow for real-time performance comparable to Isola\\u2019s. We will add a discussion.\"}"
]
} |
Hy3MvSlRW | Adversarial reading networks for machine comprehension | [
"Quentin Grail",
"Julien Perez"
] | Machine reading has recently shown remarkable progress thanks to differentiable
reasoning models. In this context, End-to-End trainable Memory Networks
(MemN2N) have demonstrated promising performance on simple natural language
based reasoning tasks such as factual reasoning and basic deduction. However,
the task of machine comprehension is currently bounded to a supervised setting
and available question answering dataset. In this paper we explore the paradigm
of adversarial learning and self-play for the task of machine reading comprehension.
Inspired by the successful propositions in the domain of game learning, we
present a novel approach of training for this task that is based on the definition
of a coupled attention-based memory model. On one hand, a reader network is
in charge of finding answers regarding a passage of text and a question. On the
other hand, a narrator network is in charge of obfuscating spans of text in order
to minimize the probability of success of the reader. We experimented the model
on several question-answering corpora. The proposed learning paradigm and associated
models present encouraging results. | [
"machine reading",
"adversarial training"
] | Reject | https://openreview.net/pdf?id=Hy3MvSlRW | https://openreview.net/forum?id=Hy3MvSlRW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"Sy0AiMnef",
"BJLWIyaBz",
"HynOT_5Jz",
"SJ5ajc8EG",
"BJggQbceG"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1511955414075,
1517250046254,
1510800756119,
1515789250145,
1511817960039
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper256/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper256/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper256/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper256/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"The root idea is interesting but the paper has significant issues.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The main idea of this paper is to automate the construction of adversarial reading comprehension problems in the spirit of Jia and Liang, EMNLP 2017. In that work a \\\"distractor sentence\\\" is manually added to a passage to superficially, but not logically, support an incorrect answer. It was shown that these distractor sentences largely fool existing reading comprehension systems although they do not fool human readers.\\n\\nThis paper replaces the manual addition of a distractor sentence with a single word replacement where a \\\"narrator\\\" is trained adversarially to select a replacement to fool the question answering system. This idea seems interesting but very difficult to evaluate. An adversarial word replacement my in fact destroy the factual information needed to answer the question and there is no control for this. The performance of the question answering system in the presence of this adversarial narrator is of unclear significance and the empirical results in the paper are very difficult to interpret. No comparisons with previous work are given (and perhaps cannot be given).\\n\\nA better model would be the addition of a distractor sentence as this preserves the information in the original passage. A language model could probably be used to generate a compelling distractor. But we want that the corrupted passage has the same correct answer as the uncorrupted passage and this difficult to guarantee. A trained \\\"narrator\\\" could learn to actually change the correct answer.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper presents an adversarial learning framework for reading comprehension. Although the idea is interesting and presents an approach that ideally would make reading comprehension approaches more robust, the results are not substantially solid (see reviewer 3's comments) compared to other baselines to warrant acceptance. Comments from reviewer 2 are also noteworthy where they mention that adversarial perturbations to a context around an answer can alter the facts in the context, thus destroying the actual information present there, and the rebuttal does not seem to satisfy the concern. Addressing these issues will strengthen the paper for a potential future venue.\"}",
"{\"title\": \"Interesting idea, unconvincing results\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper aims to improve the accuracy of reading model on question answering dataset by playing against an adversarial agent (which is called narrator by the authors) that \\\"obfuscates\\\" the document, i.e. changing words in the document. The authors mention that word dropout can be considered as its special case which randomly drops words without any prior. Then the authors claim that smartly choosing the words to drop can make a stronger adversarial agent, which in turn would improve the performance of the reader as well. Hence the adversarial agent is trained and is architecturally similar to the reader but just has a different last layer, which predicts the word that would make the reader fail if the word is obfuscated.\\n\\nI think the idea is interesting and novel. While there have been numerous GAN-like approaches for language understanding, very few, if any, have shown worthy results. So if this works, it could be an impactful achievement. \\n\\nHowever, I am concerned with the experimental results.\\n\\nFirst, CBT: NE and CN numbers are too low. Even a pure LSTM achieves (no attention, no memory) 44% and 45%, respectively (Yu et al., 2017). These are 9% and 6% higher than the reported numbers for adversarial GMemN2N. So it is very difficult to determine if the model is appropriate for the dataset in the first place, and whether the gain from the non-adversarial setting is due to the adversarial setup or not.\\n\\nSecond, Cambridge dialogs: the dataset's metric is not accuracy-based (while the paper reports accuracy), so I assume some preprocessing and altering have been done on the dataset. So there is no baseline to compare. Though I understand that the point of the paper is the improvement via the adversarial setting, it is hard to gauge how good the numbers are.\\n\\nThird, TripAdvisor: the dataset paper by Wang et al. 
(2010) is not evaluated on accuracy (rather on ranking, etc.). Did you also make changes to the dataset? Again, this makes the paper less strong because there is no baseline to compare.\\n\\nIn short, the only comparable dataset is CBT, which has too low accuracy compared to a very simple baseline.\\nIn order to improve the paper, I recommend the authors to evaluate on more common datasets and/or use more appropriate reading models.\\n\\n---\", \"typos\": \"\", \"page_1_first_para\": \"\\\"minimize to probability\\\" -> \\\"minimize the probability\\\"\", \"page_3_first_para\": \"\\\"compensate\\\" -> \\\"compensated\\\"\", \"page_3_last_para\": \"\\\"softmaxis\\\" -> \\\"softmax is\\\"\\npage 4 sec 2.4: \\\"similar to the reader\\\" -> \\\"similarly to the reader\\\"\\npage 4 sec 2.4: \\\"unknow\\\" -> \\\"unknown\\\"\", \"page_4_sec_3_first_para\": \"missing reference at \\\"a given dialog\\\"\", \"page_5_first_para\": \"\\\"Concretly\\\" -> \\\"Concretely\\\"\", \"table_1\": \"what is difference between \\\"mean\\\" and \\\"average\\\"?\", \"page_8_last_para\": \"missing reference at \\\"Iterative Attentive Reader\\\"\\npage 9 sec 6.2 last para: several citations missing, e.g. which paper is by \\\"Tesauro\\\"?\\n\\n\\n[Yu et al. 2017] Adams Wei Yu, Hongrae Kim, and Quoc V. Le. Learning to Skim Text. ACL 2017\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Read the rebuttal\", \"comment\": \"I have read authors' rebuttal and I am still keeping my scores same.\"}",
"{\"title\": \"Paper needs significant revision\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Summary:\\n\\nThis paper proposes an adversarial learning framework for machine comprehension task. Specifically, authors consider a reader network which learns to answer the question by reading the passage and a narrator network which learns to obfuscate the passage so that the reader can fail in its task. Authors report results in 3 different reading comprehension datasets and the proposed learning framework results in improving the performance of GMemN2N.\", \"my_comments\": \"This paper is a direct application of adversarial learning to the task of reading comprehension. It is a reasonable idea and authors indeed show that it works.\\n\\n1. The paper needs a lot of editing. Please check the minor comments.\\n\\n2. Why is the adversary called narrator network? It is bit confusing because the job of that network is to obfuscate the passage.\\n\\n3. Why do you motivate the learning method using self-play? This is just using the idea of adversarial learning (like GAN) and it is not related to self-play.\\n\\n4. In section 2, first paragraph, authors mention that the narrator prevents catastrophic forgetting. How is this happening? Can you elaborate more?\\n\\n5. The learning framework is not explained in a precise way. What do you mean by re-initializing and retraining the narrator? Isn\\u2019t it costly to reinitialize the network and retrain it for every turn? How many such epochs are done? You say that test set also contains obfuscated documents. Is it only for the validation set? Can you please explain if you use obfuscation when you report the final test performance too? It would be more clear if you can provide a complete pseudo-code of the learning procedure.\\n\\n6. How does the narrator choose which word to obfuscate? Do you run the narrator model with all possible obfuscations and pick the best choice?\\n\\n7. 
Why don\\u2019t you treat number of hops as a hyper-parameter and choose it based on validation set? I would like to see the results in Table 1 where you choose number of hops for each of the three models based on validation set.\\n\\n8. In figure 2, how are rounds constructed? Does the model sees the same document again and again for 100 times or each time it sees a random document and you sample documents with replacement? This will be clear if you provide the pseudo-code for learning.\\n\\n9. I do not understand author's\\u2019 justification for figure-3. Is it the case that the model learns to attend to last sentences for all the questions? Or where it attends varies across examples?\\n\\n10. Are you willing to release the code for reproducing the results?\", \"minor_comments\": \"Page 1, \\u201cexploit his own decision\\u201d should be \\u201cexploit its own decision\\u201d\\nIn page 2, section 2.1, sentence starting with \\u201cIndeed, a too low percentage \\u2026\\u201d needs to be fixed.\\nPage 3, \\u201cforgetting is compensate\\u201d should be \\u201cforgetting is compensated\\u201d.\\nPage 4, \\u201cfor one sentences\\u201d needs to be fixed.\\nPage 4, \\u201cunknow\\u201d should be \\u201cunknown\\u201d.\\nPage 4, \\u201c??\\u201d needs to be fixed.\\nPage 5, \\u201cfor the two first datasets\\u201d needs to be fixed.\\nTable 1, \\u201cGMenN2N\\u201d should be \\u201cGMemN2N\\u201d. In caption, is it mean accuracy or maximum accuracy?\\nPage 6, \\u201cdataset was achieves\\u201d needs to be fixed.\\nPage 7, \\u201cdocument by obfuscated this word\\u201d needs to be fixed.\\nPage 7, \\u201coverall aspect of the two first readers\\u201d needs to be fixed.\\nPage 8, last para, references needs to be fixed.\\nPage 9, first sentence, please check grammar.\\nSection 6.2, last sentence is irrelevant.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |