forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews
---|---|---|---|---|---|---|---|---|---|---
rJeIGkBKPS | Improving Confident-Classifiers For Out-of-distribution Detection | [
"Sachin Vernekar",
"Ashish Gaurav",
"Vahdat Abdelzad",
"Taylor Denouden",
"Rick Salay",
"Krzysztof Czarnecki"
] | Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important for avoiding classification errors. In the context of OOD detection for image classification, one recent approach proposes training a classifier, called a “confident-classifier”, by minimizing the standard cross-entropy loss on in-distribution samples and minimizing the KL divergence between the predictive distribution of OOD samples in the low-density “boundary” of the in-distribution and the uniform distribution (i.e., maximizing the entropy of the outputs). Samples can then be detected as OOD if they have low confidence or high entropy. In this paper, we analyze this setting both theoretically and experimentally. We also propose a novel algorithm to generate “boundary” OOD samples for training a classifier with an explicit “reject” class for OOD samples. We compare our approach against several recent classifier-based OOD detectors, including confident-classifiers, on the MNIST and Fashion-MNIST datasets. Overall, the proposed approach consistently performs better than the others across most of the experiments. | [
"Out-of-distribution detection",
"Manifold",
"Nullspace",
"Variational Auto-encoder",
"GAN",
"Confident-classifier"
] | Reject | https://openreview.net/pdf?id=rJeIGkBKPS | https://openreview.net/forum?id=rJeIGkBKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mZydQoihz5",
"HJeuOdXIiS",
"Sylbmv78jB",
"rklCnE7IsB",
"HJeHSCR-5r",
"SJlnCEcaFS",
"S1xdZMcnKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727082,
1573431408277,
1573431064812,
1573430454423,
1572101692855,
1571820756276,
1571754495525
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1580/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1580/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1580/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper improves the previous method for detecting out-of-distribution (OOD) samples.\\n\\nSome theoretical analysis/motivation is interesting as pointed out by a reviewer. I think the paper is well written in overall and has some potential.\\n\\nHowever, as all reviewers pointed out, I think experimental results are quite below the borderline to be accepted (considering the ICLR audience), i.e., the authors should consider non-MNIST-like and more realistic datasets. This indicates the limitation on the scalability of the proposed method. \\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Justifying why reject-classifier option is superior and comment about experiments on colored images.\", \"comment\": \"First of all thank you very much for the review and, of course, for your time.\\nHere are the answers to the questions.\\n\\n1. By introducing a reject class, our goal is not to have multiple k*\\u2019s. Our goal is to have a single k* such that k* = K+1 so that all the OOD samples are classified as belonging to the reject class. We have added more details in Section 4 to make it clearer.\\nAs far as the validity of theoretical justifications are concerned, while on page 3, the last paragraph we do mention that our analysis cannot be directly applied to bounded cases to obtain arbitrary confidence values as we cannot make $\\\\alpha_l \\\\rightarrow \\\\infty$, the technique in principle can be applied to obtain high confidence values far from the in-distribution. We have added additional details in section 3 and section 4 that makes it clear that the analysis is still applicable for the case of bounded input space. Even Hein\\u2019s et.al., (section 3 last paragraph) on which our analysis is based on also indicate how the analysis holds for the bounded domain evidenced by experiments in Table 2.\\n \\n2. We couldn't do these experiments in time as these take larger compute due to nullspace calculation. We have added a section in Appendix E to highlight the same. However, we would like to argue why our approach is still good.\\nWe focused mainly on following a principled approach for OOD detection unlike many state-of-the-art methods in the literature. We show clearly both with theoretical intuitions and toy experiments on how our approach gives the desired results. Our experiments on gray-scale images are quite comprehensive given that we use a large set of OOD datasets and also report results without tuning hyper-parameter per OOD dataset unlike Mahalanobis and ODIN approaches. Moreover, MNIST0-4 vs MNIST5-9 and F-MNIST0-4 vs F-MNIST5-9 are quite challenging tasks given that the in and out-of-distribution samples in these are quite close to each other. However, our approach does significantly better than the benchmark. Therefore we assume experiments on CIFAR or SVHN may not be necessary to justify our claim about the reject-classifier as other benchmarks that perform so well on these datasets do not perform as well in MNIST and Fashion MNIST experiments. We would like to remark that the OOD detection capability of the reject classifier depends on the quality of boundary OOD samples generated. Therefore if VAE cannot represent these large-scale datasets effectively, the generated OOD samples might also not be a good representation of boundary OOD samples. Therefore the OOD detection accuracy might suffer. In which case, a better boundary OOD sample generation method can improve upon our results. Therefore our work can be judged as a benchmark for OOD detection methods that are based on boundary OOD sample generation.\", \"questions\": \"1. Except for adding an extra node at the output, we are not increasing the number of hidden neurons, so overfitting is not an issue.\\n2. Ensemble-MD requires training a logistic regression classifier on top of the original classifier to learn the weights of different layers. 
For this, they use 10% OOD samples for training (Quoting from MD paper, section 3.2: \\u201cFollowing the similar strategies in [7, 22], we randomly choose 10% of original test samples for training the logistic regression detectors and the remaining test samples are used for evaluation.\\u201d), which in our opinion is not practical as in real-world scenarios we don\\u2019t have access to OOD samples. Moreover, most of the recent OOD detection papers (Ren \\u201819, Abdelzad '19) don\\u2019t compare against the ensemble version of MD.\\n \\n3. The goal of our approach is to use boundary OOD samples that can guide the decision boundary of the classifier to be bounded around the in-distribution regions as depicted in Figure. 1(b). For images, it is difficult to represent the entire OOD space with a small number of samples, therefore we don\\u2019t expect it to give good results. We have added some more information in section 4, paragraph 2 to explain the same. If one is able to use a diverse set of OOD samples that can well represent the OOD space, the reject classifier, and the confident-classifier may perform equally well as evidenced by the toy experiment in Appendix C and Figure. 6.\\nPlease let us know if anything else is not clear, we would be happy to provide more details. \\n\\n[Ren \\u201819]: Likelihood Ratios for Out-of-Distribution Detection, NeurIPS \\u201819\\n[Abdelzad \\u201819] Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an Early-Layer Output, arXiv preprint, arXiv 1910.10307, 2019\"}",
"{\"title\": \"Clarification on Generated OOD samples being Boundary OOD samples and Experiments on Colored images.\", \"comment\": \"First of all thank you very much for the review and, of course, for your time. Based on the review summary, we want to elaborate on a few things that might be unclear.\", \"response_to_the_summary\": \"While the auxiliary reject class option has been explored before, it hasn\\u2019t been done by generating OOD samples in the low-density boundary of in-distribution. We clearly mention in our paper at multiple places including the proposed method to generate OOD samples that these samples are the boundary OOD samples. Moreover, we mention in our related work (Appendix B) the following:\\n\\u201cHendrycks et al. (2019) propose to train a classifier with a confidence loss where OOD data is sampled from a large natural dataset. Hein et al. (2019) also follow a similar approach using a confidence loss and uniformly generated random OOD samples from the input space. In addition, they not only minimize the confidence at the generated OOD samples but also in the neighborhood of those samples. However, because both these approaches use the confidence-loss, they suffer from the problems explained in this paper. Moreover, such approaches are only feasible for input spaces where it is possible to represent the support of OOD with finite samples (assuming a uniform distribution over OOD space). This is not possible when the input space is $R^d$, whereas the method proposed in this paper is still applicable.\\u201d\", \"response_to_supporting_arguments\": \"1. While on page 3, the last paragraph we do mention that our analysis cannot be directly applied to bounded cases to obtain arbitrary confidence values as we cannot make $\\\\alpha_l \\\\rightarrow \\\\infty$, the technique in principle can be applied to obtain high confidence values far from the in-distribution. We have added additional details in section 3 and section 4 that makes it clear the analysis is still applicable for the case of bounded input space. Even Hein\\u2019s et.al. (section 3 last paragraph), on which our analysis is based on also indicate how the analysis holds for the bounded domain evidenced by their experiments in Table 2.\\n\\n2. With respect to OOD being distributed all over the data space, we have already answered this question previously (i.e, we only generated boundary OOD samples and we assume we can model the boundary OOD with few samples).\\nFor conventional approaches such as Mahalanobis distance-based method or ODIN, we have argued in the related work section (Appendix B), why they cannot be considered as general OOD detection approaches as they work in the discriminative feature space. In fact, we can easily show why these methods don\\u2019t work with simple toy experiments in low-dimensional space. Moreover, their results on some of the gray-scale experiments are significantly worse compared to ours.\\n\\n4. We couldn't do these experiments in time as these take larger compute due to nullspace calculation. We have added a section in Appendix E to highlight the same. However, we would like to argue why our approach is still good.\\nWe focused mainly on following a principled approach for OOD detection unlike many state-of-the-art methods in the literature. We show clearly both with theoretical intuitions and toy experiments on how our approach gives the desired results. 
Our experiments on gray-scale images are quite comprehensive given that we use a large set of OOD datasets and also report results without tuning hyper-parameter per OOD dataset unlike Mahalanobis and ODIN approaches. Moreover, MNIST0-4 vs MNIST5-9 and F-MNIST0-4 vs F-MNIST5-9 are quite challenging tasks given that the in and out-of-distribution samples in these are quite close to each other. However, our approach does significantly better than the benchmark. Therefore we assume experiments on CIFAR or SVHN may not be necessary to justify our claim about the reject-classifier as other benchmarks that perform so well on these datasets do not perform as well in MNIST and Fashion MNIST experiments. We would like to remark that the OOD detection capability of the reject classifier depends on the quality of boundary OOD samples generated. Therefore if VAE cannot represent these large-scale datasets effectively, the generated OOD samples might also not be a good representation of boundary OOD samples. Therefore the OOD detection accuracy might suffer. In which case, a better boundary OOD sample generation method can improve upon our results. Therefore our work can be judged as a benchmark for OOD detection methods that are based on boundary OOD sample generation.\", \"answers_to_comments\": \"1. Thanks for pointing out the mistake, we have fixed this in the paper. However, it is less likely that we have this problem in our approach as we use boundary OOD samples that can guide the decision boundary of the classifier to be bounded around the in-distribution regions as depicted in Figure. 1(b).\\nPlease let us know if anything else is not clear, we would be happy to provide more details.\"}",
"{\"title\": \"Reiterating our contributions and explaining the choices made for benchmark OOD approaches.\", \"comment\": \"First of all thank you very much for the review and, of course, for your time. Thanks again for supporting our paper. Going by your review comments it seems you have understood our paper quite well. However, we want to reiterate our major contributions again.\\n\\n1. While training with the boundary OOD samples, we propose to use a reject-classifier instead of a confident-classifier and we provide a theoretical argument for the same and also compare our approach against the state-of-the-art classifier-based methods for OOD detection in the literature on MNIST and Fashion MNIST datasets.\\n\\n2. We propose a novel method to generate boundary OOD samples that are more diverse and follow the in-distribution boundary better than the one proposed in Lee (confident classifier paper).\\n\\nMoreover, in our analysis of related work (Appendix B), we have argued why the current state-of-the-art methods in OOD detection such as Mahalanobis-distance based approach and ODIN cannot be considered as general OOD detection approaches as they work in the discriminative feature space. In fact, we can easily show why these methods don\\u2019t work with simple toy experiments in low-dimensional space. We, on the other hand, try to obtain the decision boundaries as shown in the Figure. 1b that clearly separates in and out-of-distribution regions.\\n\\nNow to answer your specific questions,\\n1. The methods to compare, such as Confident-Classifier and ODIN, are not so strong. Thus, I am not sure whether the performance of the proposed algorithm is dramatically better.\\n\\nWhile we agree that the confident-classifier results aren't that strong, our approach is however similar to the confident-classifier in that we also generate boundary OOD samples to train the classifier, therefore we compare against it. But Mahalanobis-distance based (state-of-the-art on most datasets) and ODIN are the top classifier-based OOD detection approaches in the literature that recent OOD detection methods ([Ren \\u201819], [Abdelzad \\u201819]) compare against. Our approach outperforms these methods in most of our experiments without having to fine-tune the hyper-parameters per OOD dataset as these methods do. The detection results are significantly better than Mahalanobis-distance based method, especially for MNIST experiments (compare FPR 95\\\\% values). Moreover, MNIST0-4 vs MNIST5-9 and F-MNIST0-4 vs F-MNIST5-9 are quite challenging tasks given that the in and out-of-distribution samples in these are quite close to each other. However, our approach does significantly better than the benchmark.\\n\\n2. I would like to see the sensitivity analysis of the proposed method because there are several hyper-parameters as mentioned in the paper.\\n\\nThe hyper-parameters that we have are $\\\\beta$ from Eq. 2, OOD class weight, and learning rate. The learning rate is a hyper-parameter like in any other classifier training. In our case, it mostly impacts the convergence rate. Since the OOD class has lots more samples than the other in-distribution classes, we used a class-weight to handle class-imbalance. However, an equal mix of classes in each batch of training data performs almost as well as the reported results. 
\\u03b2 that determines how close to the in-distribution boundary are the OOD samples is very important, as choosing a very large value can make OOD samples to be generated far away from the in-distribution boundaries and a smaller value could make OOD samples fall into the in-distribution regions. But we found that the stochastic $\\\\beta$ chosen in the range [0.1 1.0] works well for all our experiments.\\n\\nPlease let us know if anything else is not clear, we would be happy to provide more details. \\n\\n\\n[Ren \\u201819]: Likelihood Ratios for Out-of-Distribution Detection, NeurIPS \\u201819\\n\\n[Abdelzad \\u201819] Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an Early-Layer Output, arXiv preprint, arXiv 1910.10307, 2019\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an algorithm to generate boundary OOD positive/negative samples to train a classifier for OOD samples. The algorithm is based on the theoretical analysis on why confidence value could be high in unbounded polytopes. CVAE is used as a generative model to get new samples. Experiments are conducted on MNIST and Fashon-MNIST datasets, which are used as OOD and in-distribution, and vice versa. Other datasets are also used for OODs. Comparison are made with Confident-Classifier, ODIN, and Mahalanobis distance-based approach, and the proposed method outperforms the others.\\nOverall the paper is well-written and well-organized. The proposed method is based on the idea from theoretical analysis, and is reasonable and valid. There are only a couple of things to point out: First, the methods to compare, such as Confident-Classifier and ODIN, are not so strong. Thus, I am not sure whether the performance of the proposed algorithm is dramatically better. Second, I would like to see the sensitivity analysis of the proposed method, because there are several hyper-parameters as mentioned in the paper.\\nHowever, I like the method and could be accepted as an ICLR paper.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"** post rebuttal start **\\n\\nAfter reading reviews and authors' response, I decided not to change my score.\\nI recommend to strengthen their theoretical justification or make their method scalable to improve their work.\", \"detailed_comments\": \"2. \\\"Moreover, their results on some of the gray-scale experiments are significantly worse compared to ours.\\\"\\n-> If you are talking about the comparison in MNIST-variants, please note that experimental results on MNIST cannot be seriously taken unless there is a strong theoretical background; especially, MNIST-variants are too small to talk about the scalability of the method. It is hard to convince readers only with results in MNIST-variants, unless the method has a strong theoretical justification.\\nHowever, if your claim is true for general gray-scale images, e.g., preprocessing CIFAR to be in gray scale, then you may add supporting experiments about it.\\n\\n4. Again, if the method is only applicable to MNIST-variants due to its computational complexity while it has no strong theoretical justification, I can't find benefits from it.\\n\\n** post rebuttal end **\\n\\n\\n\\n- Summary:\\nThis paper proposes to improve confident-classifiers for OOD detection by introducing an explicit \\\"reject\\\" class. Although this auxiliary reject class strategy has been explored in the literature and empirically observed that it is not better than the conventional confidence-based detection, the authors provide both theoretical and empirical justification that introducing an auxiliary reject class is indeed more effective.\\n\\n\\n- Decision and supporting arguments:\\nWeak reject.\\n\\n1. Though the analysis is interesting, it is not applicable to both benchmark datasets and real-world cases. Including the benchmark datasets they experimented, the input to the model is in general bounded, e.g., natural images are in RGB format, which is typically normalized to be bounded in [0,1]. Therefore, the polytopes would not be stretched to the infinity in most cases.\\nOn the other hand, note that softmax classifiers produce a high confidence if the input vector and the weight vector of a certain class are in the same direction (of course feature/weight norm also matters, but let's skip it for simplicity). Therefore, if there is an auxiliary reject class, only data in the same direction will be detected as OOD; in other words, OOD is \\\"modeled\\\" to be in the same direction with the weight vector of the auxiliary reject class. However, the conventional confidence-based detection does not model OOD explicitly. Since OOD is widely distributed over the data space by definition, modeling such a wide distribution would be difficult. Thus, the conventional approach makes more sense to me.\\n\\n2. The experiment is conducted only on MNIST variations, so it is unclear whether their claim is true on large-scale datasets and real-world scenario.\\nWhy don't you provide some experimental results on other datasets commonly used in other OOD detection papers, such as CIFAR, SVHN, TinyImageNet, and so on?\\n\\n\\n- Comments:\\n1. In section 4, the authors conjectured the reason why the performance of reject class in Lee et al. 
(2018a) was worse is that the generated OOD samples do not follow the in-distribution boundaries well. I think Appendix E in the Lee et al.'s paper corresponds to this reasoning, but Lee et al. actually didn't generate OOD samples but simply optimized the confidence loss with a \\\"seen OOD.\\\" Lee et al. didn't experiment on MNIST variations but many natural image datasets. So, it is possible that the auxiliary reject class strategy is only effective in MNIST variations. I suggest the authors to do more experiments on larger datasets to avoid this criticism.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Comments on rebuttal\\n\\nI don\\u2019t think that the authors made a valid argument to address my concerns about theoretical justification and experiments. As I mentioned in the review, the assumption and statements in the paper are not clear to me. Moreover, I think the authors should evaluate their methods on more realistic cases. Because of that, I\\u2019d like to keep my score. \\n\\n====\\n\\n[Summary]\\n\\nTo detect out-of-distribution (OOD) samples, the authors proposed to add an explicit \\\"reject\\\" class instead of producing a uniform distribution and OOD sample generation method. They showed that the proposed method can perform better than several OOD detectors on MNIST and Fashion-MNIST datasets.\\n\\n[Detailed comments]\\n\\nI'd like to recommend a \\\"weak reject\\\" due to the following reasons:\\n\\n1. Justification is not clear: The authors argue that arbitrarily large confidence values cannot be obtained if there are multiple K*. However, how can we guarantee that there are multiple K* only by introducing the additional class? Could the authors elaborate this more? Also, I'm not sure that the theoretical justifications are really valid because we usually consider bounded input space. \\n\\n2. Experimental results are not convincing: in the paper, only grayscale datasets, such as MNIST and FMNIST, are considered to evaluate the proposed method and I think it is not enough. I would be appreciated if the authors can provide more evaluations on various datasets (e.g., CIFAR, SVHN, TinyImageNet) and deep architectures (e.g., DenseNet and ResNet) similar to [Hendrycks 19, Liang' 18, Lee' 18].\\n\\n[Questions]\\n\\n1. Introducing additional class increases the number of parameters and can suffer from overfitting. Could the authors comment on overfitting issues? \\n\\n2. Could the authors compare the performance of the proposed method with the ensemble version of MD? \\n\\n3. Instead of generated OOD samples, could the authors report the performance with explicit OOD samples similar to [Hendrycks' 19]? \\n\\n[Hendrycks' 19] Hendrycks, D., Mazeika, M. and Dietterich, T.G., Deep anomaly detection with outlier exposure. In ICLR, 2019.\\n\\n[Lee' 18] Lee, K., Lee, H., Lee, K. and Shin, J., Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR, 2018.\\n\\n[Liang' 18] Liang, S., Li, Y. and Srikant, R., Enhancing the reliability of out-of-distribution image detection in neural networks In ICLR, 2018.\"}"
]
} |
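
The confident-classifier setup described in the abstract above trains with a cross-entropy loss on in-distribution data plus a KL term that pushes predictions on boundary OOD samples toward the uniform distribution; since KL(p‖U) = log K − H(p) for the uniform distribution U over K classes, minimizing that KL is equivalent to maximizing output entropy. At test time, a sample is flagged as OOD when its confidence is low or its entropy is high. Below is a minimal numpy sketch of that scoring rule, not the authors' implementation; the `probs` array and the entropy threshold are hypothetical illustrations.

```python
import numpy as np

def ood_scores(probs):
    """Given an (N, K) array of softmax outputs, return per-sample
    predictive entropy (higher => more OOD-like) and max probability
    (lower => more OOD-like), matching the detection rule above."""
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    max_prob = probs.max(axis=1)
    return entropy, max_prob

# Hypothetical predictions over K=10 classes: one confident
# in-distribution sample and one near-uniform (OOD-like) sample.
probs = np.array([
    [0.91] + [0.01] * 9,
    [0.10] * 10,
])
entropy, max_prob = ood_scores(probs)
print(entropy, max_prob, entropy > 1.5)  # threshold chosen for illustration only
```
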
S1xSzyrYDB | Cyclic Graph Dynamic Multilayer Perceptron for Periodic Signals | [
"Mikio Furokawa",
"Erik Gest",
"Takayuki Hirano",
"Kamal Youcef-Toumi"
] | We propose a feature extraction method for periodic signals. Virtually every mechanized transportation vehicle, power generation system, industrial machine, and robotic system contains rotating shafts. It is possible to collect data about periodicity by measuring a shaft’s rotation. However, it is difficult to perfectly control the collection timing of the measurements. Imprecise timing creates phase shifts in the resulting data. Although a phase shift does not materially affect the measurement of any given data point collected, it does alter the order in which all of the points are collected. It is difficult for classical methods, like the multi-layer perceptron, to identify or quantify these alterations because they depend on the order of the input vectors’ components. This paper proposes a robust method for extracting features from phase-shifted data by adding a graph structure to each data point and constructing a suitable machine learning architecture for graph data with cyclic permutation. Simulation and experimental results illustrate its effectiveness. | [
"periodic signals",
"data",
"difficult",
"data point",
"order",
"periodic signals cyclic",
"dynamic multilayer perceptron",
"feature extraction",
"mechanized transportation vehicle"
] | Reject | https://openreview.net/pdf?id=S1xSzyrYDB | https://openreview.net/forum?id=S1xSzyrYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"GGOcIMkHa",
"SkeTY92ocH",
"HklFYLqOqr",
"S1g1uWIO9r",
"BkliLUPDqS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727032,
1572747908830,
1572542080854,
1572524391037,
1572464211137
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1579/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1579/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1579/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1579/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers all appreciated the area explored by this work but there was a consensus that it lacked a thorough presentation of existing works, as well as relevant baselines.\\n\\nI encourage the authors to better position their work with respect to the existing literature for what should be a stronger submission for a future conference.\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The goal of this work is to explore multi-sensor data modelling, with a focus on anomaly detection in machinery containing rotating shafts. Multi-sensor recordings from machinery can be phase-shifted, due to errors in the relative timing of sensors. The authors develop a method for modelling such phase-shift data. They augment phase-shift data with a graph structure which represents the relationship between sensors and they use a graph neural network with a cyclic permutation structure to enforce phase-shift invariance. Model performance is evaluated with real-world data from machinery containing a rotating shaft. Their model may be useful for other domains with phase-shifted data, such as multi-sensor medical data.\\n\\nAlthough this is an interesting application domain and model, I have selected weak reject. \\n\\nThe primary reason for this decision is that the authors do not provide sufficient comparisons to related work and models, either in the form of a literature review, or in the form of model benchmarking. This is especially problematic for a domain which will be unfamiliar to much of the machine learning community.\\n\\nNone of the six references in the paper address anomaly detection for temporal data (e.g. Ahrens et al. \\u201cA machine-learning phase classification scheme for anomaly detection in signals with periodic characteristics\\u201d 2019) or the extensive related literature of time series models (e.g Pope et al. Learning phase-invariant dictionaries 2013, Edwards and Lee, Using Convolutional Neural Networks to Extract Shift-Invariant Features from Unlabeled Data\\u201d, 2019), or more closely related work on shift invariant graph neural networks (e.g. Gama et al, Convolutional neural network architectures for signals supported on graphs, 2018). \\n\\nTo address this, I feel that the authors need to provide a related work discussion. I would also like to see some experiments comparing their model to benchmark models that have a greater chance of being competitive such as a variant of models introduced by Pope et al. 2013 or Edwards and Lee 2019 for example.\", \"minor_note\": \"there is a typo in the definition of v_i\\n\\nThank you for the submission.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"In the paper, the authors proposed a novel method adding graph architecture for collected data points to utilize not only features but also relative information. This helps reduce time and costs to collect a huge amount of data from industrial machines and improve accuracy. Although the paper idea is very interesting when presenting a new learning model for graph data, explanations and experiments are not convincing.\\n\\nFor explanations, the authors did not provide sufficient related works or references to prove that the problem the paper wants to solve is important. Also, for some approaches using deep learning mentioned there are no references.\\n\\nFor experiments, the results presented in Table 1 are good but there are no official baselines (e.g. from some prior works) to make the comparison more reliable.\\n\\nBase on the arguments mentioned above, the paper is not convincing and reliable.\", \"small_suggestion_revision\": \"More analysis of prior works to show that the problem is important and need-to-solve\\nThe introduction section should more references.\\nThe experiments should be rigorous cause it lacks reliable baselines for comparison.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a machine learning framework for periodic data. The authors note that representing input data in vector form does not encode input coordinates relationship to one. Capturing this structure can be especially important for periodic signals. The authors address this by adding graph structure to each data point to encode relative structure about each coordinate. They then apply a graph neural network to the resulting structured data. They evaluate the method on an anomaly detection in a low data setting.\\n\\nThis paper would be greatly improved by an addition of a related work section. It is unclear where the novelty comes in precisely in this work because it is not very well situated within previous work on (i) anomaly detection w/ periodic signals, (ii) temporal and periodic signal processing and (iii) graph neural network approaches. Contextualized this work within these related areas would improve the clarity and readability of the work and also help frame the results. \\n\\nThe method is evaluated on synthetic and real datasets, comparing a couple variants of the model (one that can deal with phase shifts). Results support the main claims of the paper.\\n\\nIt is hard for me to assess the significance of this work since this is a very specific application of known techniques. I think the application is an important one, and also one that requires some domain knowledge, so there does appear to be a useful contribution in terms of adapting graph-based methodologies here. However, the methodology and application is outside my area of expertise.\\n \\nDetailed suggestions / questions / comments:\\n- the ws-dimensional \\u00a0vector v_i is defined as v_i = (x_{(i\\u22121)\\u2217ss+1}, x_{i+1}, . . . , x_{(i\\u22121)\\u2217ss+ws}). Is there a typo here? It's not obvious to me how the second index relates to the sequence or how this sequence is specified? Perhaps it should say \\u00a0v_i = (x_{(i\\u22121)\\u2217ss}, x_{(i-1)*ss+1}, . . . , x_{(i\\u22121)\\u2217ss+ws})?\\n\\u00a0- The authors mention alternative methods of capturing structured temporal information in the input features. For example, they suggest concatenating the original signal with the cross correlated signal. They also suggest time-frequency analysis methods (such as a Fourier transform and the wavelet transform) and applying a CNN to the time-frequency signal. They authors mention that these methods would require much more data than their graph convolution approach. I agree this is probably the case, but this would still be a useful empirical result to show the degree of data required for these alternative. \\n- What are previous approaches to detecting the properties explored in this work? In addition to discussing previous approaches in a related work section, some empirical analysis comparison would help contextualize this work as well. \\u00a0\\n\\nOverall, I think this paper is a useful application of graph-based methods. The claims are verified empirically on real and synthetic data. I think it could be significantly improved with a discussion of related work and better situating of the methods / more comparisons in the results. 
As a result of these significant weaknesses I'm really on the fence with my recommendation -- the work is sensible but the paper has a lot of room for improvement and I'm not quite sure its ready for publication. However, it is possible underestimated the significance/impact of this work because I am not very familiar with the topic.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents a novel architecture for extracting features for periodic signals that is sample efficient and has superior performance than previous approaches. The proposed method is based on a graph architecture that takes into account the ordering of the vertices, contrary to standard GNNs. In order to extract periodic signals, they prove that if two data points are phased shifted, then there exists a subgraph for each data point such that one is a cyclic permutation of the other. To that end, the authors present an architecture that is cyclic permutation invariant through a pooling operation on all the cycles. The proposed method is evaluated on simulated and real data.\\n\\nThe paper is well written and well motivated. The authors however ignore most of previous related work, there is a huge bulk of work on graph neural networks and on modelling time-series. It is important and interesting that the authors compare how the presented architecture differs from previous work.\\n\\nRegarding the method, the entire approach is based on two assumptions: 1) the data points are phase shifted with a period that is a multiple of T, and 2) that you know that windows size and slide size such that makes one graph a cyclic permutation of the other. How often does (1) happen in practice? How sensitive it is to the failure of such assumptions? \\n\\nThe results section is the weakest part of this paper. The comparison between other approaches not presented by this method is essentially just the MLP, which is the most naive baseline. The author should compare to 1Dconvs, RNNs, MLPs with fourier features, and state-of-the-art approaches tackling time-series and/or periodic signals. As I mentioned previously, it would also be important to analyze the sensitivity of the method with respect to the assumptions build upon.\\n\\nAt this stage, I do not think that the paper is ready for acceptance.\"}"
]
} |
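
The reviews above describe the core mechanism of this row's paper as pooling over all cyclic permutations of a windowed signal, so that whole-sample phase shifts leave the extracted features unchanged. The numpy sketch below illustrates that invariance idea under stated assumptions: `phi` is a stand-in for a learned feature map (random weights here), and max-pooling over all cyclic shifts is one of several possible invariant pooling choices; this is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # stand-in weights for a learned map

def phi(v):
    """Stand-in feature map applied to one ordering of the window."""
    return np.tanh(W @ v)

def cyclic_invariant_features(x):
    """Pool phi over every cyclic shift of x, so the result is unchanged
    when x is cyclically permuted (a whole-sample phase shift)."""
    shifted = [np.roll(x, k) for k in range(len(x))]
    return np.max([phi(s) for s in shifted], axis=0)  # max-pool over cycles

x = np.sin(2 * np.pi * np.arange(16) / 16)     # one period of a test signal
f1 = cyclic_invariant_features(x)
f2 = cyclic_invariant_features(np.roll(x, 5))  # phase-shifted copy
print(np.allclose(f1, f2))                     # True: shift-invariant
```
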
HJlHzJBFwB | Accelerating Monte Carlo Bayesian Inference via Approximating Predictive Uncertainty over the Simplex | [
"Yufei Cui",
"Wuguannan Yao",
"Qiao Li",
"Antoni Chan",
"Chun Jason Xue"
] | Estimating the predictive uncertainty of a Bayesian learning model is critical in various decision-making problems, e.g., reinforcement learning, detecting adversarial attacks, and self-driving cars. As the model posterior is almost always intractable, most efforts have been made on finding an accurate approximation of the true posterior. Even when a decent estimate of the model posterior is obtained, another approximation is required to compute the predictive distribution over the desired output. A common, accurate solution is to use Monte Carlo (MC) integration. However, it needs to maintain a large number of samples, evaluate the model repeatedly, and average multiple model outputs. In many real-world cases, this is computationally prohibitive. In this work, assuming that the exact posterior or a decent approximation is obtained, we propose a generic framework to approximate the output probability distribution induced by the model posterior with a parameterized model, in an amortized fashion. The aim is to approximate the true uncertainty of a specific Bayesian model while alleviating the heavy workload of MC integration at test time. The proposed method is universally applicable to Bayesian classification models that allow for posterior sampling. Theoretically, we show that the idea of amortization incurs no additional cost in approximation performance. Empirical results validate the strong practical performance of our approach. | [
"predictive uncertainty",
"model posterior",
"simplex",
"bayesian learning model",
"critical",
"various",
"problems",
"reinforcement learning"
] | Reject | https://openreview.net/pdf?id=HJlHzJBFwB | https://openreview.net/forum?id=HJlHzJBFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"lZUKw3nCEG",
"BkxPM6usjr",
"S1xACUAQir",
"BylIHICmor",
"S1x-L40Xjr",
"HylknWCmiB",
"rylxzAp7jS",
"SJgNjT9W9B",
"SJx1BMq5FB",
"Skg2gUHcFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798727001,
1573780751316,
1573279445941,
1573279294412,
1573278793026,
1573278118924,
1573277191767,
1572085147764,
1571623479239,
1571603955738
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1578/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1578/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1578/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to speed up Bayesian deep learning at test time by training a student network to approximate the BNN's output distribution. The idea is certainly a reasonable thing to try, and the writing is mostly good (though as some reviewers point out, certain sections might not be necessary). The idea is fairly obvious, though, so the question is whether the experimental results are impressive enough by themselves to justify acceptance. The method is able to get close to the performance achieved by Monte Carlo estimators with much lower cost, although there is a nontrivial drop in accuracy. This is probably worth paying if it achieves 500x computation reduction as claimed in the paper, though the practical gains are probably much smaller since Monte Carlo methods are rarely used with 500 samples. Overall, this seems a bit below the bar for ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"A new revision is submitted.\", \"comment\": \"We thank all reviewers for the constructive feedback. We revised the paper according to the feedbacks and submitted a revision. The major changes are highlighted in blue, either in main body or in appendix. Some new experimental results and analysis are shown in appendix D.3, page 14-15.\"}",
"{\"title\": \"Response to Reviewer #4, part 2.\", \"comment\": \"Q3: One importance advantage of Bayesian classification models is that they can capture the covariance between predictions of different data points. By amortization this advantage no longer exists.\\n\\nThis disadvantage is true but common for all distillation methods [1,2]. How to extend the distillation for a single input to more simultaneous inputs to model covariance in the predictions an interesting topic for future work.\", \"q4\": \"In the paper the authors keep mentioning that the method can be applied to GPs but I don't see experiments or algorithms for it?\\n\\nDue to space constraints we put the algorithms and experiments related to GPs in the appendix. Appendix E.4 shows how to get particles from G, and Appendix D.2 present the experimental results for GP. The algorithms are obtained by replacing samples with GP particles in Algorithm 1 and Algorithm 2.\", \"q5\": \"The concentration model is parameterized using an exponential activation, how does this activation affect the performance?\\n\\nWe also tried softplus as the activation, and the performance was similar to using the exponential activation. Because the concentration value is non-negative, we require the output activation for the concentration model to be non-negative and monotonically increasing.\", \"q6\": \"The distilling process is done on a held-out dataset. Which may not be wanted because an advantage of Bayesian classification models (eg. GPs) is that all hyperparameters can be automatically selected by marginal likelihoods and don't need a held-out validation set.\\n\\nWe refer to the \\\"held-out\\\" dataset D\\u2019 for training OPU so as to distinguish it from the training set of the teacher. Actually, in all the experiments, for fairness (BDK, CompactApprox and SVGP are not trained on held-out set), the dataset used to train OPU is the same as the teacher's training set. See the first sentence of \\u201cData and Evaluation Metrics\\u201d on page 7. To clarify, the \\u201cheld-out\\u2019\\u2019 dataset doesn\\u2019t mean validation set. The aim of this dataset is not to facilitate selection of hyperparameters, but to better capture uncertainties on points that do not appear in training data. To avoid confusion, we will rename the D\\u2019 dataset as \\u201cthe OPU training dataset\\u201d.\", \"q7\": \"MMD/wasserstein distances are cool but they require also samples from the student, which adds more variance to the distillation process.\\n\\nThis is true that samples from the students are required. We use the reparameterization technique in our framework to reduce the variance (see \\u201cReparameterization\\u201d on page 5, and Appendix C for details).\", \"q8\": \"The experiment setup is extremely unclear to me. What is \\\"uncertainty measures\\\", are they used as metrics for detecting out-of-distribution data, how are AUROC/AUPR calculated using the uncertainty measures? I can guess the meaning but the paper should be more clear about this.\\n\\nWe use entropy (E) and max probability (P) of the particle mean, and differential entropy (D) of locally fitted student distribution as the measures for the teacher. We use entropy (E) and max probability (P) of the particle mean, and the scalar output of student concentration model (C) as the measures for the OPU student. For OOD detection, say q_x1 has higher E (or lower P or lower C) than q_x2, then q_x1 is more likely to be an OOD data. 
The value of E, P and C (D for the teacher) are computed for all testing data points (both in domain data and out-of-domain data), which rank the data points based on the numerical magnitude. The ROC curve is plotted by setting threshold on each magnitude and compute the True Positive Rate and False Positive Rate at each threshold. The PR curve is plotted similar by computing the Precision and Recall. Then the area under two curves (AUROC and AUPR) can be obtained. For misclassification detection, the similar calculation process is applied.\", \"q9\": \"I found most numbers convincing except that sometimes BDK-SGLD outperforms BDK-DIR-SGLD, if I understand it right, the predicted mean of BDK-DIR-SGLD should be as good as BDK-SGLD?\\n\\nThe reason might be that the parameterization of BDK-DIR without disentangling the mean and concentration is harder to learn. In comparison, BDK only approximates the mean of particles.\", \"q10\": \"On Page 7, \\\"To save space, we only present the best performing uncertainty measure (E, P or C)\\\". What is \\\"C\\\" here?\\n\\nC is the scalar output of the concentration model, which can be directly used as the uncertainty measure (See Para. 2, Page 4 and Para. 1, Page 7 ).\\n\\nWe solved the minor issues and submitted a revision.\\n\\n[1] Bayesian dark knowledge. Advances in Neural Information Processing Systems 28, pp. 3438\\u20133446.\\n[2] Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015.\"}",
"{\"title\": \"Response to Reviewer #4, part 1.\", \"comment\": \"Review #4\", \"we_thank_the_reviewer_for_the_comments_and_would_like_to_answer_the_questions_as_follows\": \"\", \"q1\": \"However, I do find some discussions in the paper unnecessary and would expect for more technical contributions. For example, I didn't see the argument for the whole section discussing amortization gap. Everything seems straightforward given the hypothesis F is of enough capacity, which obviously does not hold in practice.\\n\\nTo understand the amortized approximation problem better, we show the total approximation error can be decomposed into model error and amortization gap, and that the amortization gap can be reduced to zero given enough capacity. The analysis aims to show that the idea of \\u201camortization\\u201d is appropriate in our particular scenario. However, in our application, a strong model doesn\\u2019t always translate to small approximation error. For example, consider inference gap in VAE [1]. The local amortization gap is also useful as a general evaluation metric in the amortized knowledge distillation problems. The assumption of F having enough capacity is an assumption also used in the analyses of GAN and WGAN [2,3].\\nIn the experiment (Sec. 4.2 on page 7), we show that the amortized student model approximates each local distribution (that with only model error) in high-fidelity, indicating a low amortization loss. We add an experiment on the EMNIST dataset to verify the decomposition of total approximation error empirically.\", \"more_results_on_emnist_opu_mcdp_mmd\": \"Average approximation error [Eq. 7 on page 5]: $\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(q_{\\\\mathbf{x}_i}, p_{\\\\mathbf{x}_i})$. The averaged MMD between teacher\\u2019s particles (for each x) and the predicted Dirichlet by OPU: 6.5*10^(-2).\\nAverage model error [Eq. 7 on page 5]: $\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(p_{\\\\mathbf{x}_i}, \\\\bar{q}_{\\\\mathbf{x}_i}^\\\\ast)$. The averaged MMD between teacher\\u2019s particles and locally fitted Dirichlet: 6.01*10^(-2).\", \"average_local_amortization_error\": \"$\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(q_{\\\\mathbf{x}_i}, \\\\bar{q}_{\\\\mathbf{x}_i}^\\\\ast)$. The averaged MMD between locally fitted Dirichlet (for each x) and the predicted Dirichlet by OPU: 5.3*10^(-3). Note that the \\u201cerror\\u201d is different from \\u201camortization gap\\u201d \\u0394(x) defined in the paper [Sec. 2.4 on page 5]. The relationship between \\u201camortization error\\u201d defined here and \\u201camortization gap\\u201d is given by Eq. 8 [Sec 2.4 on page 5].\\n\\nIt can be observed that the local amortization error is low, even with a student model having limited capacity capacity. This effectively shows that the function space considered is enough to cover the essential target, which is in fact \\u201cof enough capacity\\u201d.\\nThe local amortization error 5.3*10^(-3) bounds the local amortization gap \\u0394(x) = Avg.Apx.Err \\u2013 Avg.Mdl.Err = 4.9*10^(-3), which is consistent with Eq. 8 [Sec 2.4 on page 5].\\n\\nThe approximation error is mainly determined by the model error, which turns out to be acceptably small. This is consistent with the analysis and also shows the effectiveness and suitableness of using Dirichlet family.\", \"q2\": \"What if the teacher predictive distribution is far unlike a Dirichlet? How much is the discrepancy between teacher and student predictive distribution? 
Theoretical or empirical evidence is needed for this modeling choice.\\n\\nUsing a Dirichlet for the student is modeling choice, similar to assuming Gaussian posteriors for variational approximations. Our framework is general and any student distribution can be adopted, e.g., generalized Dirichlet, mixture of Dirichlets. To do this requires: 1) a suitable parametrization of the model that can capture uncertainty; 2) deriving/computing the approximation loss (KL, MMD, EMD); 3) reparameterization trick of expectations for efficient gradient estimation. The algorithms of KL, EMD and MMD as well as the analysis still apply.\\n\\nEMNIST is a more challenging dataset compared with Pima, Spambase, MNIST and Cifar10 as the number of classes is larger (47). The empirical evidence for the choice of Dirichlet is in this experiment, where we obtain nearly the same performance (accuracy) as using particles, while obtaining better OOD performance. We also show that the approximation error is low (see Q1), and thus the Dirichlet is a good fit for the teacher\\u2019s predictive distribution in this case.\", \"references\": \"[1] Kingma, Diederik P., and Max Welling. \\\"Auto-encoding variational bayes.\\\" arXiv preprint arXiv:1312.6114 (2013).\\n[2] Goodfellow, Ian, et al. \\\"Generative adversarial nets.\\\" Advances in neural information processing systems. 2014.\\n[3] Arjovsky, Martin, Soumith Chintala, and L\\u00e9on Bottou. \\\"Wasserstein generative adversarial networks.\\\" International conference on machine learning. 2017.\"}",
"{\"title\": \"Response to Reviewer #1, part 2.\", \"comment\": \"Q3: More experimental results: 1) un-distilled MCDP and SGLD models. 2) BDK and DPN for the MCDP models. 3) MCDP and SGLD with fewer particles.\\n\\n1) The performance of un-distilled MCDP and SGLD model is already given in the experiment. \\nFor a clear illustration, we show the performance of un-distilled MCDP/SGLD here.\\n\\nMCDP\\nMisC. AUROC 97.3 (E), AUPR 43.0 (E). OOD1. AUROC 99.2 (P) 98.8 (P) OOD2. AUROC 86.8 (E) 53.8 (P)\\n\\nSGLD\\nMisC. AUROC 97.9 (E), AUPR 46.2 (E). OOD1. AUROC 99.2 (E) 99.6 (E) OOD2. AUROC 89.3 (E) 46.8 (E)\\n\\nTo avoid the model error, the numbers of MCDP are the same with MCDP-(KL/EMD/MMD) in terms of entropy (E) and maximum probability (P) of particle mean. The differential entropy (D) are from the fitted local distribution with KL/EMD/MMD where model error is involved. In table 1, we only show the best out of (E/P/D). Same for SGLD.\\n\\n2) BDK results for MCDP.\\nMisC. Detection: AUROC 86.9 (E) AUPR 41.1 (E)\", \"ood_omniglot\": \"AUROC 47.5 (P) AUPR 44.1 (P)\", \"ood_semeion\": \"AUROC 43.3 (P) AUPR 47.2 (P)\\nAs DPN is not designed to approximate a Bayes teacher, there is no DPN-MCDP and DPN-SGLD. The performance of DPN is already shown in Table 1.\\n\\n3) MCDP and SGLD with fewer particles.\", \"mcdp_half_samples\": \"Acc. 94.9 (-3.0%). MisC.AUROC: 96.1 (-1.2) OOD1.AUROC. 98.5 (-0.9) OOD2.AUROC. 82.7 (-4.1) Test time: 132.1 (s) (OPU has 298x speed up.)\", \"sgld_half_samples\": \"Acc. 98.0 (-0.4%). MisC.AUROC: 97.1 (-0.9) OOD1.AUROC. 91.2 (-8.0) OOD2.AUROC. 82.5 (-6.8) Test time: 141.9 (s) (OPU has 320x speed up.)\\n\\nWe note that using few particles would affect the performance of Bayesian classifier, especially the performance on OOD detection. In real-world cases like automatic driving car where the robustness is critical, it is not worth to trade safety for speed (see the fatality of assisted driving system [1]). Therefore, OPU solves this issue by providing accurate estimation of predictive uncertainty with short evaluation time. Besides the speedup, OPU approximation provides a full distribution to characterize the predictive distribution, which is not available with particle approximation. This allows for better uncertainty measures such as differential entropy.\", \"q4\": \"It would be interesting to see experiments other than out-of-distribution detection, such as calibration.\\n\\nWe will add the calibration experiments.\\n\\nWe solved the minor issues and submitted a revision.\", \"references\": \"[1] \\\"What uncertainties do we need in bayesian deep learning for computer vision?.\\\" Advances in neural information processing systems. 2017.\"}",
"{\"title\": \"Response to Reviewer #1, part 1.\", \"comment\": \"We thank the reviewer for the comments and would like to answer the questions as follows:\", \"q1\": \"Lacks of novelty and straight forward.\\n\\nThis paper proposes a framework that solves the practical problem of real-time evaluation of induced predictive uncertainty. Different from previous knowledge distillation [1,2], we provide a new view of induced distribution \\\\pi which isolates the dependence between y and x, as shown by the graphic model in Fig. 3(b) in the Appendix. The \\u201cisolation view\\u201d is meaningful not only in classification, but also in all applications where predictive uncertainty are required, e.g., image object detection and segmentation. The \\u201cisolation view\\u201d also enables a richer characterization of student model (all previous works use a simple categorical distribution). As this kind of distillation is an unexplored problem, different evaluation metrics are considered and adapted to our framework. We also propose use the unamortized version to study how amortization affects the approximation, which decomposes the total approximation error into model error and amortization gap (Eq.7, Page 5). (see the new experimental results on EMNIST below) The local amortization gap can be used as an evaluation metric for amortized approximation. This also appears to be novel to the literature.\\n\\nThe student distribution does not necessarily need to be a Dirichlet. The framework allows to use various choices of student distribution, and the algorithms based on KL, EMD and MMD as well as the analysis still apply.\", \"more_results_on_emnist_opu_mcdp_mmd\": \"Average approximation error [Eq. 7 on page 5]: $\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(q_{\\\\mathbf{x}_i}, p_{\\\\mathbf{x}_i})$. The averaged MMD between teacher\\u2019s particles (for each x) and the predicted Dirichlet by OPU: 6.5*10^(-2).\\nAverage model error [Eq. 7 on page 5]: $\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(p_{\\\\mathbf{x}_i}, \\\\bar{q}_{\\\\mathbf{x}_i}^\\\\ast)$. The averaged MMD between teacher\\u2019s particles and locally fitted Dirichlet: 6.01*10^(-2).\", \"average_local_amortization_error\": \"$\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(q_{\\\\mathbf{x}_i}, \\\\bar{q}_{\\\\mathbf{x}_i}^\\\\ast)$. The averaged MMD between locally fitted Dirichlet (for each x) and the predicted Dirichlet by OPU: 5.3*10^(-3). Note that the \\u201cerror\\u201d is different from \\u201camortization gap\\u201d \\u0394(x) defined in the paper [Sec. 2.4 on page 5]. The relationship between \\u201camortization error\\u201d defined here and \\u201camortization gap\\u201d is given by Eq. 8 [Sec 2.4 on page 5].\\n\\nIt can be observed that the local amortization error 5.3*10^(-3) is low and it bounds the local amortization gap \\u0394(x) = Avg.Apx.Err \\u2013 Avg.Mdl.Err = 4.9*10^(-3), which is consistent with Eq. 8 [Sec 2.4 on page 5].\\nThe approximation error is mainly determined by the model error, which turns out to be acceptably small. This is consistent with the analysis and also shows the effectiveness and suitableness of using Dirichlet family.\", \"q2\": \"The \\\"single-point\\\" baselines are strange.\\n\\nWe argue that this baseline is fair. 
Let the \\u2018single-point distribution\\u2019 be understood as the \\u2018local\\u2019 distribution for each input x we have defined, either local induced conditional distribution or local Dirichlet approximation.\\nThis baseline is consistent with the analysis, where the approximation loss equals the model loss and the amortization loss is zero, which should be the best performance OPU can achieve (within Dirichlet family) theoretically.\\nWhen training OPU, we first extract 700 posterior samples from the pretrained MCDP/ SGLD model. (for fairness, the baselines and OPU use the same set of posterior samples) For each input x, this induces 700 particles over the simplex for OPU to approximate.\\nWhen training each local distribution, for each x, a Dirichlet with a k-dim (k=10 for MNIST and Cifar10, k=47 for EMNIST) vector parameter is fitted on the 700 particles induced by the 700 posterior samples, which is enough particles to learn the Dirichlet well. The vector is also disentangled into a probability vector and a concentration scalar to be consistent, with no neural network parameterizing them (no amortization).\", \"references\": \"[1] Bayesian dark knowledge. Advances in Neural Information Processing Systems 28, pp. 3438\\u20133446.\\n[2] Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015.\"}",
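For readers who want to reproduce the local (unamortized) Dirichlet baseline described above, the sketch below gives a minimal moment-matching fit of a Dirichlet to a set of simplex particles. This is an illustrative method-of-moments estimator, not necessarily the exact fitting procedure used in the paper (which may use maximum likelihood).

```python
import numpy as np

def fit_dirichlet_moments(particles):
    """Fit Dirichlet(alpha) to `particles` of shape (n, k) on the simplex
    by matching first and second moments (illustrative sketch only)."""
    mean = particles.mean(axis=0)        # E[p_k] = alpha_k / alpha_0
    var = particles.var(axis=0)          # Var[p_k] = m_k (1 - m_k) / (alpha_0 + 1)
    alpha0 = np.mean(mean * (1.0 - mean) / (var + 1e-12) - 1.0)
    return alpha0 * mean                 # alpha_k = alpha_0 * m_k

# Fit to 700 particles over a 10-class simplex, mirroring the setup above.
rng = np.random.default_rng(0)
particles = rng.dirichlet(np.array([2.0, 5.0] + [1.0] * 8), size=700)
print(fit_dirichlet_moments(particles))  # roughly recovers the true alphas
```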
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the comments and would like to answer the questions as follows:\", \"q1\": \"Although the restriction of the student distribution to be tractable seems to limit the design of the student model significantly. And this restrictive distribution family may cause large amortization error, as suggested by Lemma 1 in the paper.\\n\\nThe restrictive distribution family causes \\u201clarge\\u201d total approximation error if the ground truth induced distribution is considerably different from a Dirichlet. However the amortization error is low and depends on capacity of neural network used to construct the approximation [point 2 in Para 1. Sec 2.2 on page 3]. We show the three types of errors with the experimental results on EMNIST.\", \"results_on_emnist_opu_mcdp_mmd\": \"Average approximation error [Eq. 7 on page 5]: $\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(q_{\\\\mathbf{x}_i}, p_{\\\\mathbf{x}_i})$. The averaged MMD between teacher\\u2019s particles (for each x) and the predicted Dirichlet by OPU: 6.5*10^(-2).\\nAverage model error [Eq. 7 on page 5]: $\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(p_{\\\\mathbf{x}_i}, \\\\bar{q}_{\\\\mathbf{x}_i}^\\\\ast)$. The averaged MMD between teacher\\u2019s particles and locally fitted Dirichlet: 6.01*10^(-2).\", \"average_local_amortization_error\": \"$\\\\frac{1}{N}\\\\sum_{i=1}^{N} MMD(q_{\\\\mathbf{x}_i}, \\\\bar{q}_{\\\\mathbf{x}_i}^\\\\ast)$. The averaged MMD between locally fitted Dirichlet (for each x) and the predicted Dirichlet by OPU: 5.3*10^(-3). Note that the \\u201cerror\\u201d is different from \\u201camortization gap\\u201d \\u0394(x) defined in the paper [Sec. 2.4 on page 5]. The relationship between \\u201camortization error\\u201d defined here and \\u201camortization gap\\u201d is given by Eq. 8 [Sec 2.4 on page 5].\\n\\nIt can be observed that the local amortization error 5.3*10^(-3) is low and it bounds the local amortization gap \\u0394(x) = Avg.Approx.Err \\u2013 Avg.Model.Err = 4.9*10^(-3), which is consistent with Eq. 8 [Sec 2.4 on page 5].\\nThe approximation error is mainly determined by the model error, which turns out to be acceptably small. This is consistent with the analysis and also shows the effectiveness and suitableness of using Dirichlet family.\\n\\nMore experiments on MCDP-Cifar10. (Code available via the code link)\\nMethod || MisC. AUROC|| MisC. AUPR || OOD. AUROC || OOD. AUPR || Acc\", \"mcdp_kl\": \"|| 92.2 (P) || 47.0 (P) || 90.5 (E) || 88.7 (E) || 92.4\", \"mcdp_emd\": \"|| 92.2 (P) || 47.0 (P) || 91.4 (D) || 89.1 (D) || 92.4\", \"mcdp_mmd\": \"|| 92.2 (P) || 47.0 (P) || 91.0 (D) || 89.3 (D) || 92.4\", \"opu_mcdp_kl\": \"|| 87.2 (P) || 45.9 (P) || 86.1 (E) || 85.5 (E) || 89.9\", \"opu_mcdp_emd\": \"|| 91.8 (E) || 46.9 (P) || 93.5 (C) || 92.0 (C) || 91.8\", \"opu_mcdp_mmd\": \"|| 91.3 (E) || 46.6 (P) || 92.9 (C) || 91.7 (C) || 91.8\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the problem of avoiding Monte Carlo (MC) estimate for the predictive distribution during the test for Bayesian methods. MC estimate will incur multiple passes where the number of passes depends on the number of samples and therefore the cost can be huge. The authors propose One-Pass Uncertainty (OPU) methods to approximate the predictive distribution through distillation. Experiments on Bayesian neural networks are conducted to demonstrate the proposed method.\", \"quality\": \"The proposed method appears to be technically sound. The view of approximating the predictive distribution over simplex is interesting and may inspire future studies under this formulation. Although the restriction of the student distribution to be tractable seems to limit the design of the student model significantly. And this restrictive distribution family may cause large amortization error, as suggested by Lemma 1 in the paper.\\n\\nThe experiments are well-conducted, and the proposed method is well-evaluated.\", \"significance\": \"This paper studies an important problem in Bayesian machine learning and the proposed method can be combined with many Bayesian methods to reduce the computational cost during the test.\", \"originality\": \"As far as I know, the method is novel. The related work is adequately cited.\", \"clarity\": \"This paper is well-written and easy to follow.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Thank the authors for your detailed rebuttal. I agree with the authors that the proposed method acts as a useful tool for \\\"real-time evaluation of induced predictive uncertainty\\\", and the experiments also validate that the method indeed achieves comparable performance with smaller computations. But for now, I am inclined to not change my score.\\n\\n###################\\n\\n\\nBayesian models maintain the posterior distribution for predictions, which might bring up big computational costs of multiple forwards or big memory costs of multiple particles. To resolve the computational and memory issues at predictions, this paper proposes to distill Bayesian models into an amortized prediction model, avoiding the original multiple forwards. Specifically, in classification, they distill the predictive probabilities into an amortized Dirichlet distribution. They evaluated different distillation metrics, including KL divergence, Earth moving distance, and Maximum mean discrepancy. Empirically, they evaluate the proposed method over out-of-distribution detection. They demonstrate that their method achieves comparable performance with much speedup.\\n\\nStrengths, \\n1, This paper is well-written and the ideas are well-presented. They evaluated the proposed method over different Bayesian models (MCDP & SGLD) as well different metrics (KL, EMD, MMD), and demonstrate the effectiveness of their method. Overall, this paper is very comprehensive.\\n2, As evaluated and validated in the experiments, the proposed method vastly reduces the inference time at test phase. \\n\\nWeakness,\\n1, The paper kind of lacks of novelty. Basically the proposed method distills a Bayesian models into an amortized Dirichlet distribution, which is straightforward. \\n2, The baselines such as MCDP-KL, MCDP-EMD are strange, it is wired why you would distill the predictive distribution of a single point to a Dirichlet distribution. And I think it is probably unfair, as distilling the single-point distribution to the Dirichlet under KL, EMD, MMD might require large amount of particles, which they don't have.\\n3, Related to (2), more baselines should be compared with to better demonstrate the method's effectiveness. 1) performance of the un-distilled MCDP and SGLD models. 2) BDK and DPN for the MCDP models. 3) MCDP and SGLD with fewer particles. The paper claims to achieve 500x speed up, while I reckon the performance of MCDP and SGLD won't deteriorate a lot if you use only fewer particles. \\n4, It would be interesting to see experiments other than out-of-distribution detection, such as calibration. \\n\\nMinor Issues,\\n1, The paper has several un-complied references, such as above eq(4) and appendix D.\\n2, The \\\\Tau(x | \\\\theta) in Figure 1 is a typo.\\n3, Assumption 1 should be put forward to the main articles for comprehensiveness of Lemma 1.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Overall I liked several results presented in this paper. The findings in Figure 2 gives clear illustration on how Bayesian classification models distinguish between in-distribution difficult-to-classify data and out-of-distribution data, namely uncertain predicted mean and large predicted variance. Though I believe this eventually depends on what kind of \\\"kernel\\\"s are used to correlate data points in the prior, throughout the paper I assume meaningful \\\"kernel\\\"s are used (for Bayesian NNs this is rooted in the inductive bias of neural networks).\\n\\nAnother result that I liked is in experiments we can clearly see the advantage of considering the bayesian predictive distribution over a single predictive mean. As demonstrated by BDK-SGLD vs. BDK-DIR-SGLD. \\n\\nThe proposed idea is a simple and meaningful improvement over previous works. Though the contribution is quite limited, the authors present it with great clarity, which I appreciated. However, I do find some discussions in the paper unnecessary and would expect for more technical contributions. For example, I didn't see the argument for the whole section discussing amortization gap. Everything seems straightforward given the hypothesis F is of enough capacity, which obviously does not hold in practice.\", \"many_other_concerns_are_summarized_below\": [\"What if the teacher predictive distribution is far unlike a Dirichlet? How much is the discrepancy between teacher and student predictive distribution? Theoretical or empirical evidence is needed for this modeling choice.\", \"One importance advantage of Bayesian classification models is that they can capture the covariance between predictions of different data points. By amortization this advantage no longer exists.\", \"In the paper the authors keep mentioning that the method can be applied to GPs but I don't see experiments or algorithms for it?\", \"The concentration model is parameterized using an exponential activation, how does this activation affect the performance?\", \"The distilling process is done on a held-out dataset. Which may not be wanted because an advantage of Bayesian classification models (eg. GPs) is that all hyperparameters can be automatically selected by marginal likelihoods and don't need a held-out validation set.\", \"MMD/wasserstein distances are cool but they require also samples from the student, which adds more variance to the distillation process.\", \"The experiment setup is extremely unclear to me. What is \\\"uncertainty measures\\\", are they used as metrics for detecting out-of-distribution data, how are AUROC/AUPR calculated using the uncertainty measures? I can guess the meaning but the paper should be more clear about this.\", \"I found most numbers convincing except that sometimes BDK-SGLD outperforms BDK-DIR-SGLD, if I understand it right, the predicted mean of BDK-DIR-SGLD should be as good as BDK-SGLD?\"], \"minor\": [\"On page 4, above Eq. (4) there is a broken figure link.\", \"On Page 7, \\\"To save space, we only present the best performing uncertainty measure (E, P or C)\\\". What is \\\"C\\\" here?\"]}"
]
} |
rkxVz1HKwB | Certifiably Robust Interpretation in Deep Learning | [
"Alexander Levine",
"Sahil Singla",
"Soheil Feizi"
] | Deep learning interpretation is essential to explain the reasoning behind model predictions. Understanding the robustness of interpretation methods is important especially in sensitive domains such as medical applications since interpretation results are often used in downstream tasks. Although gradient-based saliency maps are popular methods for deep learning interpretation, recent works show that they can be vulnerable to adversarial attacks. In this paper, we address this problem and provide a certifiable defense method for deep learning interpretation. We show that a sparsified version of the popular SmoothGrad method, which computes the average saliency maps over random perturbations of the input, is certifiably robust against adversarial perturbations. We obtain this result by extending recent bounds for certifiably robust smooth classifiers to the interpretation setting. Experiments on ImageNet samples validate our theory. | [
"deep learning interpretation",
"robustness certificates",
"adversarial examples"
] | Reject | https://openreview.net/pdf?id=rkxVz1HKwB | https://openreview.net/forum?id=rkxVz1HKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"hvhITUvaMK",
"LVUaRKvEg8",
"rJe_xgpssr",
"BkeotR3osr",
"H1e7_anior",
"HkeO62hooB",
"Hyenpjhior",
"H1guSchjjS",
"S1ezL4_05H",
"H1e5BxVC5r",
"HkgezgVA5H",
"SJlg24cQcr",
"r1epdz-RKr",
"Hkxsru03FB"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576888056227,
1576798726970,
1573797872227,
1573797506931,
1573797226694,
1573797056051,
1573796803764,
1573796416056,
1572926537957,
1572909121741,
1572909063542,
1572213927544,
1571848820790,
1571772483318
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1576/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1576/Authors"
],
[
"~Simon_Kornblith1"
],
[
"~Rigor_Police1"
],
[
"~Rigor_Police1"
],
[
"ICLR.cc/2020/Conference/Paper1576/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1576/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1576/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Author Response\", \"comment\": \"This review, unfortunately, mischaracterizes the main contribution of our paper. We propose a provable *defense* against adversarial attacks on saliency maps: such attacks were already previously proposed by other authors (Ghorbani et al. 2019). The existence of these attacks provides the motivation for provable defenses, e.g. our work.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper discusses new methods to perform adversarial attacks on salience maps.\\n\\nIn its current form, this paper in its current form has unfortunately has not convinced several of the reviewers/commenters of the motivation behind proposing such a method. I tend to share the same opinion. I would encourage the authors to re-think the motivation of the work, and if there are indeed solid use cases to express them explicitly in the next version of the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"still not a solid motivation\", \"comment\": \"In your response the motivation you present now is that there is some adversary which will corrupt the image.\\n\\nI don't agree and I echo the \\\"Rigor Police\\\" comment here that there there is no reasonable adversary here for medical images. How can a criminal profit? Who is the criminal? What do they gain? It is important that the work have a solid motivation which clearly translates to move us forward so we don't spend time solving problems that people don't have.\"}",
"{\"title\": \"Author Response\", \"comment\": \"We thank you for your comment. Your concerns here are equally applicable to the study of adversarial robustness in the classification case: in both instances, there is a desire to protect against adversarial attacks which may affect how a machine learning system makes decisions. The suggestion that users would \\u201cignore the decision\\u201d if a system returns an incorrect result assumes that the system is entirely redundant: that its output has no effect on the users\\u2019 behavior. This is true in the classification case as well: if we a priori assume that the users know the correct classification before looking at the output, adversarial examples cannot possibly cause any harm. In addition to the medical examples laid out in the paper, gradient-based methods are also used for automated image segmentation and object localization: Subramanya, et al. (https://arxiv.org/abs/1812.02843) recently introduced an adversarial attack against GradCAM, a variation of gradient-based saliency maps which is tailored for object localization specifically.\\nIn the classification case, the wide literature on adversarial robustness published in recent years indicates that the community considers adversarial attacks to be an issue worthy of concern: attacks against interpretation are just as plausible from a security standpoint as (non-physical) attacks against classification.\"}",
"{\"title\": \"Author Response\", \"comment\": \"We thank you for your comment.\", \"qualitative_evaluation_on_imagenet\": \"We have added additional qualitative comparisons on ImageNet, in Figure 3 and Appendix G.\\n\\t\\nGradients, SoftPlus and Transfer vs Whitebox Attacks: In order to use first-order methods to adversarially attack gradient-based interpretations, a network must have defined second derivatives with respect to the input image (because the saliency map itself consists of the first derivatives of the output with respect to the input image). ReLU networks thus cannot be attacked in this way. Therefore, we use a proxy network with SoftPlus activations to determine the direction of the attack.\", \"figure_2_y_axis\": \"This is the 60th percentile of the robustness certificate: 60 percent of images have robustness certificates at least this large.\", \"rank_based_certificates\": \"it is clear that an $L_p$ norm based metric would be inappropriate for the purpose of certifying similarity between saliency maps: in most works using gradient-based saliency maps (e.g., Sundararajan et al. (2017)), the top values are clipped for visualization purposes, so that the rest of the interpretation can be scaled to a reasonable color range without being dominated by a few large outlier pixels. This suggests that an $L_p$ norm approach may be meaningless, because an $L_p$ norm could be dominated by the behavior of outlier values. The fact that this clipping is accepted practice also indicates that it is the relative rank of the importance of features of an image, rather than the absolute ratios between salience measures, that is important for interpretation.\"}",
"{\"title\": \"Author Response\", \"comment\": \"We thank you for your feedback. We evaluate our robustness certificates on ImageNet samples in Figure 2. To address the concern about the gap between empirical and certified robustness, we show this gap on CIFAR samples in Appendix J. While the size of perturbations with certified robustness is small compared to the empirical robustness on these samples, our main contribution is to demonstrate that a minor variation on the commonly-used SmoothGrad technique does in fact have a robustness guarantee: furthermore, this is the first robustness certificate for interpretation that can be evaluated at the ImageNet scale. This minor modification to SmoothGrad has little effect on the visual output (Figure 3). Additionally, in testing the empirical attacks (Figure 4), we show that both quadratic SmoothGrad and our variant are empirically robust. Therefore, the variant (sparsified SmoothGrad) combines the visual quality and empirical robustness of Quadratic SmoothGrad with an additional theoretical guarantee of robustness.\"}",
"{\"title\": \"Author Response\", \"comment\": \"We respectfully disagree. We believe that you may have misunderstood the main point of the paper. You mention that:\\n\\n \\u201cI believe right now just the basic gradient is sufficient to indicate the region of interest.\\u201d\\n\\nThe central issue here is that basic gradient methods may NOT in fact indicate the region of interest in an image. A small adversarial noise can keep the label as is but change the basic gradient result significantly. This is the problem that we are addressing in this paper. \\n\\nAs you mention, gradient-based saliency maps represent only a local first order approximation to the true influence of each feature on the decision. This leads to two issues:\\n\\n* Low quality natural interpretations: as noted by Smilkov, et al. (2017) the gradient with respect to a particular pixel may \\u201cfluctuate sharply at small scales\\u201d and therefore be \\u201cless meaningful than a local average of gradient values.\\u201d This observation led to the development of SmoothGrad. To put this simply, a large gradient value over a (very) small range of input values of a feature represents in total a small influence on the class score by that feature. However, if the input image happens to be within this interval where the gradient is large, the feature will erroneously appear to be highly salient. In practice, this leads to simple gradient-based interpretations looking \\u201cnoisy,\\u201d as apparently random pixels appear to be highly salient.\\n\\n* Adversarial attacks on interpretation: as demonstrated by Ghorbani, et al. (2019), one can adversarially craft examples where the basic gradient interpretation is in fact very different from the true region of interest. This is a direct consequence of the saliency map being a \\u201cfirst order approximation\\u201d: it is therefore possible to make this approximation adversarially bad, by crafting a small perturbation to the input.\\n \\nAs detailed in the paper, saliency maps are used in a broad range of highly sensitive downstream applications, including in medical imaging and object localization. Because an adversarial attack has been proposed by Ghorbani et al. (2019) which can distort saliency maps, it is therefore a topic of interest to defend against this type of adversarial attack.\"}",
"{\"title\": \"Author Response\", \"comment\": \"We thank you for your constructive feedback. To address your comments:\\n\\n1 and 2. Note that we sparsify the saliency maps before smoothing: in other words, the final smoothed saliency map will be non-sparse, because pixels which are less salient overall may still occur in the top 10% in a minority of random samples. Empirically, we find that this sparsification prior to averaging has little effect on the final smoothed interpretation: in particular, the results are visually very similar to the quadratic SmoothGrad proposed by (Smilkov et al. 2017). This was shown in Figure 3 on an ImageNet sample, as well as on additional CIFAR samples in Appendix G. To address this comment, we have added additional ImageNet samples both in the body of the paper (Figure 3) and in the appendix (Appendix G).\", \"3\": \"We have added empirical tests using additional values of the sparsification parameter to Figure 4.\"}",
"{\"title\": \"Softplus networks are not horrible\", \"comment\": \"Dear Mr. Police,\\n\\nYou say \\\"Softplus networks are horrible.\\\" This statement is extremely unfair to softplus networks. The Swish paper [1] provides comprehensive results for 9 networks with different activation functions. Softplus often outperforms ReLU.\\n\\nI have not read this paper and this comment is not an endorsement of anything besides softplus networks.\\n\\n[1] Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Searching for activation functions. https://arxiv.org/abs/1710.05941\"}",
"{\"title\": \"method and empirical evaluation\", \"comment\": \"The paper claims that Scaled smoothgrad and Quadratic smoothgrad give vacuous bounds. So, they develop a new method \\u201cSparsified Smoothgrad\\u201d. And to qualitatively illlustrate this method, one example is provided on imagenet (the kind of datasets people really care about, nobody cares about MNIST) (Figure 1). With one example, how do we know if this newly developed method really performs well. Please see papers like Gradcam, where they show several examples to illustrate the interpretation methods.\\n\\n\\u201cWe test Relaxed Sparsified SmoothGrad (\\u03b3 = 0.01, \\u03c4 = 0.1), rather than Sparsified SmoothGrad because our attack is gradient-based and Sparsified SmoothGrad has no defined gradients\\u201d. If methods don\\u2019t have explicit gradients, there are several attack strategies. Refer \\u201cObfuscated gradients give a false sense of security\\u201d paper for more details.\\n\\n\\u201cWe tested on ResNet-18 with CIFAR-10 with the attacker using a separately-trained, fully differentiable version of ResNet-18, with SoftPlus activations in place of ReLU\\u201d. Why can\\u2019t you use ReLU networks? Softplus networks are horrible and give really poor performance when trained. So obviously, the attacks are going to be sub-optimal when these models are used for creating attacks. And is there a reason why transfer attacks are used and not white-box attacks?\\n\\nHonestly, its hard to understand the experimental evaluation. Lot of notations. Empirical section is dense, and hard to parse. In Figure 2, what is the robustness certificate in y-axis? Is it the rank certificate? \\u201cThe lines shown are for the 60th percentile guarantee, meaning that 60 percent of images had guarantees at least as tight as those shown\\u201d. What do you mean tight as those shown?\\n\\nFigure 2 is shown for the case K=0.2n, which is 20% of the entire image. Now, 20% is a big fraction of the image. Consider this case: An image has a small component contributing to a prediction, say 1% - this is not some example I made up for arguments\\u2019 sake. In several medical imaging and vision applications, this happens. Now, only 1% of the image is relevant in making prediction, and let us say gradient based saliency methods correct picked this top-1% overlap i.e., in the saliency map top 1% has high value, and others have very low value. By the bound you show in Figure 2, you guarantee that the prediction stays within 20%. For all you know, the method could highlight some noise, and push the 1% correct prediction to a low value (as the pixels other than 1% had low values in the original saliency map). In this case, the bound becomes useless. All I am saying is K is something that should not be picked before hand. And it is very important to analyze the certification rate as a function of K. 20% is still a very big number, and people really care about what happens for small K.\\n\\nThe previous paragraph clearly states some issues with rank certificate. May be a better metric to look at is L_p norm between predicted and perturbed saliency maps?\\n\\nIn my opinion, empirical evaluation is quite weak to access the importance of the approach. Attacks are created with Softplus network which are extremely weak in the first place. Whitebox setting is not considered. It\\u2019s hard to say the importance of provided bounds at high values of K. Very few qualitative results are presented at imagenet scale. 
Effect of certification as a function of K is not analyzed. MNIST and CIFAR-10 are simple classification tasks with small images, so the results obtained here are not reflective of what happens as the size of images increase to Imagenet or COCO scale. Rank certification by itself can lead to issues, so it\\u2019s not clear if this is even the right form of certification to look at.\"}",
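For concreteness, the quantity at issue in this comment -- the top-K overlap between the saliency maps of an original and a perturbed input -- can be computed as below, which also makes it easy to sweep K (e.g., K = 0.01n vs. K = 0.2n) as the comment requests. This is an illustrative sketch, not code from the paper.

```python
import numpy as np

def top_k_overlap(sal_a, sal_b, k_frac):
    """Fraction of the top-k most salient pixels shared by two saliency maps,
    the quantity that rank-style certificates bound (illustrative sketch)."""
    k = max(1, int(k_frac * sal_a.size))
    top_a = set(np.argsort(sal_a.ravel())[-k:])
    top_b = set(np.argsort(sal_b.ravel())[-k:])
    return len(top_a & top_b) / k

rng = np.random.default_rng(0)
sal, sal_pert = rng.random((32, 32)), rng.random((32, 32))
for k_frac in (0.01, 0.05, 0.2):   # small-K vs. the K = 0.2n regime of Figure 2
    print(k_frac, top_k_overlap(sal, sal_pert, k_frac))
```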
"{\"title\": \"motivation\", \"comment\": \"Recently, use of \\u201cmedical applications\\u201d to motivate an idea has become a trend in machine learning. This paper is no different. The paper claims that gradient based saliency maps are used in several medical applications [paragraph 1] and these interpretation methods can be attacked. Really? Do you really believe that if neural network interpretation methods are used by doctors, attackers will gain access to these models that easily? And do you really believe that medical data can be tampered so easily to create adversarial attacks? Please think about plausibility of this attack and the legal implications this would have before making comments like these.\\n\\nLet\\u2019s now say these interpretation methods used were somehow attacked. Do you really think doctors will blindly trust these systems and diagnose the patients? Doctors have had years of experience and these AI systems would merely be used to aid diagnosis. Whenever the system interprets something different, doctors would just ignore the suggestion. Before all this, there are millions of considerations on how AI should be used by doctors. Nobody knows this yet. \\n\\nAnother application where interpretation can be used is model debugging. But, this happens in development phase and adversarial attacks don\\u2019t make any sense here. So, I am not even convinced why addressing the problem of robust interpretation is important in the first place.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The work addresses an important problem of robustness of interpretation methods against adversarial perturbations. The problem is well motivated as several gradient-based interpretations are sensitive to small adversarial perturbations.\\n\\nThe authors present a framework to compute the robustness certificate (more precisely, a lower bound to the actual robustness) of any general saliency map over an input example. They further propose variants of SmoothGrad interpretation method which are claimed to be more robust. \\n\\nThe empirical validation of the underlying theory and use of the sparsified (and relaxed) SmoothGradient interpretation methods is unconvincing because of the following reasons:\\n\\n1. In the demonstrated experiment, the proposed alternative to SmoothGrad involves setting the lowest 90% of the saliency values to zero, and the top 10% (for sparsified SmoothGrad) or top 1% (in the case of relaxed sparsified SmoothGrad) to one. The problem with clamping most of the lower values to zero and the remainder (or most of the remainder) higher values to one is that it defeats the purpose of having a saliency map in the first place, which exist to characterize the relative importance of the input features. \\n\\n2. The paper claims that the proposed variant maintains the high visual quality of SmoothGrad, however, the claim is unsubstantiated. With the current setup, there is a clear trade-off between robustness and fidelity of interpretation, which the paper fails to acknowledge. In principle, one can always build extremely sparse or dense interpretation methods (close to all zeros or all ones), which would produce high robustness certificates but would be much less meaningful as they are not faithful to the underlying mechanism of prediction, and the characteristics of the input.\\n\\n3. The authors present empirical evidence on just one set of sparsification parameters and K. It would be more conclusive to evaluate the robustness of the proposed variations with different values of sparsification parameters, and K.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a way to testify how much a SmoothGrad saliency can vary from the true saliency attesting to the adversarial robustness but with the goal of interpretation.\\n\\nAt the premise of this work I do not think the paper motivates the value of such a robustness certificate. Using the gradient (with SmoothGrad), while providing a reasonable interpretation of the model, is just a linear approximation of the true explanation of the prediction. So saying we have the correct approximation is not so useful. I also am not sure we need such a method. For example imagine a doctor is looking at a saliency map and we are sure that it is correct first order approximation because of some method. What were the negative cases where this would fail? How would this method improve that? I believe right now just the basic gradient is sufficient to indicate the region of interest.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces an extension of Cohen et al. (2019)\\u2019s result that allows one to derive robustness certificates for interpretation methods, as well as a bound on the top-K overlap of saliency methods. These results motivate the introduction of Sparsified SmoothGrad and a relaxation of this method that has differentiable elements. These introduced approaches adapt previous methods so the derived bounds are applicable. The proposed methods are shown to perform as well as Quadratic SmoothGrad (Smilkov et al. 2017) in CIFAR-10 experiments.\\n\\nI\\u2019m not familiar with the field so it is hard for me to judge how novel the presented results are or whether the used baselines are the proper ones. That being said, the paper presents an interesting idea and it is relatively easy to read (I really appreciate the fact that for every theorem there is an interpretation, in words, for it). The only thing that sometimes makes the paper hard to read is when it starts to refer to too many constants without remind the reader what they are about. I have two complaints/questions about the relevance of the introduced bounds though. Right now, to me, it seems that the derived theoretical guarantees are not that relevant, hopefully the questions below will help clarify that.\\n\\nIn page 6, before introducing the \\u201cSparsified SmoothGrad and its Relaxations\\u201d, it is said that q is set to 2^13 because otherwise the gap would be too large in images from ImageNet, for example, when comparing to traditional values of q. However, ImageNet is never revisited in the paper. I was expecting to see ImageNet results in the experimental section but they are not there (or maybe some correlation between the gap and performance -- robustness). More than that, the Quadratic SmoothGrad, which doesn\\u2019t have any theoretical guarantee, seems to perform as well as the proposed methods. So where is the gap/theoretical result relevant? What are the settings in which having a method with the derived theoretical guarantees shine? What are the limitations of Quadratic SmoothGrad? Right now, it seems to me that the \\u201cSparsified SmoothGrad and its Relaxations\\u201d and its empirical analysis weaken the paper, because they take a big chunk of it when there is not enough evidence to claim them as an important contribution. Am I missing something? I gave this paper a relatively low score because I\\u2019m not certain about the relevance of its results, but if my questions are satisfactory answered, I\\u2019ll be happy to update my score.\\n\\n------\\n\\n\\n>>> Update after rebuttal: I stand by my score after the rebuttal. \\n\\nUnfortunately I'm not an expert in this area and I don't feel confident in having a very strong opinion about this paper. That being said, enough presentation issues were raised that make me uneasy about raising my score. I do agree with some of the concerns raised by other reviewers.\"}"
]
} |
r1e4MkSFDr | Continuous Convolutional Neural Network for Nonuniform Time Series | [
"Hui Shi",
"Yang Zhang",
"Hao Wu",
"Shiyu Chang",
"Kaizhi Qian",
"Mark Hasegawa-Johnson",
"Jishen Zhao"
] | Convolutional neural network (CNN) for time series data implicitly assumes that the data are uniformly sampled, whereas many event-based and multi-modal data are nonuniform or have heterogeneous sampling rates. Directly applying regular CNN to nonuniform time series is ungrounded, because it is unable to recognize and extract common patterns from the nonuniform input signals. Converting the nonuniform time series to uniform ones by interpolation preserves the pattern extraction capability of CNN, but the interpolation kernels are often preset and may be unsuitable for the data or tasks. In this paper, we propose the Continuous CNN (CCNN), which estimates the inherent continuous inputs by interpolation, and performs continuous convolution on the continuous input. The interpolation and convolution kernels are learned in an end-to-end manner, and are able to learn useful patterns despite the nonuniform sampling rate. Besides, CCNN is a strict generalization of CNN. Results of several experiments verify that CCNN achieves a better performance on nonuniform data, and learns meaningful continuous kernels. | [
"data",
"interpolation",
"cnn",
"ccnn",
"time series data",
"many",
"nonuniform",
"heterogeneous",
"rates"
] | Reject | https://openreview.net/pdf?id=r1e4MkSFDr | https://openreview.net/forum?id=r1e4MkSFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0v9B8i1Au7",
"rJxnj7B3iH",
"HylfyXS2or",
"BJxjNMrhjB",
"SylsLane9H",
"BkeIvRLRYH",
"r1lPKXh6KB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726939,
1573831587990,
1573831385631,
1573831219310,
1572027731246,
1571872349627,
1571828606622
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1575/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1575/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1575/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1575/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1575/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1575/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a continuous CNN model that can handle nonuniform time series data. It learns the interpolation kernel and convolutional architectures in an end-to-end manner, which is shown to achieve higher performance compared to na\\u00efve baselines.\\nAll reviewers scored Weak Reject and there was no strong opinion to support the paper during discussion. Although I felt some of the reviewers\\u2019 comments are missing the points, I generally agree that the novelty of the method is rather straightforward and incremental, and that the experimental evaluation is not convincing enough. Particularly, comparison with more recent state-of-the-art point process methods should be included. For example, [1-3] claim better performance than RMTPP. Considering that the contribution of the paper is more on empirical side and CCNN is not only the solution for handing nonuniform time series data, I think this point should be properly addressed and discussed. Based on these reasons, I\\u2019d like to recommend rejection. \\n\\n[1] Xiao et al., Modeling the Intensity Function of Point Process via Recurrent Neural Networkss, AAAI 2017.\\n[2] Li et al., Learning Temporal Point Processes via Reinforcement Learning, NIPS 2018.\\n[3] Turkmen et al, FastPoint: Scalable Deep Point Processes, ECML-PKDD 2019.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your feedback! Below is our response to your concern.\\n\\n1. Experimental v.s. theoretical \\nIt\\u2019s hard to agree with the claim of the reviewer that a paper is experimental. If the reviewer means particularly this paper is purely theoretical, I would apologize for not emphasizing enough to let the reviewer see the continuous convolution theory in section 3 and the representation power proof in appendix B.\\n \\n2. Experimental result and state-of-the-art\\nOn the one hand, we showed the performance of CCNN together with two other baselines in figure 5. Comparing to the marginal improvement of RMTPP from its baseline N-SM-MPP, the performance gain of CCNN is actually significant. On the other hand, we honestly believe that RMTPP and N-SM-MPP are current state-of-the-art of the task.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We appreciate the reviewer\\u2019s thorough and careful reading of the manuscript and the feedback. We have fixed typos in the revised submission, and below is our response to other concerns:\\n\\n1. The notation in the caption of figure 1 is a little confusing: is x(t_i\\u2019) the same as hat x(t) in the algorithm?\\nThanks for pointing this out. Yes, they are the same. We\\u2019ve changed the notation in the caption to make it consistent with the content. \\n\\n2. The upper plots in the figure may be not convincing enough to support the claims.\\nWe admit the prediction on the upper plots of figure 5, especially the NYSE, shows the gap between prediction and ground truth data. However, we should agree that the prediction on stock transaction in real-world is fairly challenging and there\\u2019s no evidence showing how predictable is the interval to the next transaction. We could have shown only pretty result on the remaining dataset, but we decided to show the result of NYSE, as it could be a good demonstration of improvement of CCNN on very challenging task. \\n\\n3. The advantage of two-hot encoding seems subtle in figure 5. Is there any reason for the significantly higher deviation of CCNN-th in StackOverflow?\\nOur experiment controls that all architectures have the same number of hyperparameters. Since the two hot encoding requires more hyperparameters at the input layers, we have reduced the filter size to match the number of hyperparameters to the other architectures. That is why the performance is sometimes compromised.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your feedback. We have noticed that there might be some misunderstandings of the concept of a non-uniform time series. We hope our response below will help clarify these misunderstandings.\\n\\n1. Motivation of CCNN\\nOur target applications are non-uniform time series, where the time intervals between any adjacent timestamps can be different. For example, a possible set of time stamps could be {0, 0.01, 5, 7.23, 11.1...}. For such non-uniformly-sampled signals, resampling requires first interpolating the signal. As we have shown in the experiment in section 5.1 and appendix D, the selection of interpolation kernels has a direct influence on the performance, and applying any existing preset interpolation kernels can lead to a compromise in the performance. Motivated by this observation, we proposed the end-to-end method to learn the interpolation kernel in a data-driven way, to eliminate the need of trying through preset interpolation kernels and to overcome the challenge when the underlying interpolation for some time-series is unknown. \\n\\nPlease note that we are not directly applying continuous convolution, but rather use continuous convolution, together with the sampling theory, to derive the basic CCNN operation, as shown in section 3. The reason why continuous convolution is indispensable in this derivation is that its convolution kernel is well-defined on any possible time stamps, and so it can align with the non-uniform timestamps in the input signal.\\n\\n2. Stacked CCNN v.s. CCNN + standard CNN\\nContinuous convolution layers can be stacked, but it is necessary only when the output timestamps in the previous layers are non-uniform. If the output time intervals of the intermediate CCNN layer are uniform, then applying standard CNN and CCNN in the subsequent layers are equivalent.\\n\\n3. Strict generalization to standard CNN\\nAppendix B provides a theoretical analysis of the relation between standard CNN and CCNN. In short, when the input timestamps and output timestamps are both uniformly sampled, CCNN would be equivalent to standard CNN. Please refer to Appendix B for the formal statement of the theorem and the proof.\\n\\n4. CCNN v.s. Dilated CNN\\nThese two methods have fundamental differences on assumption. Dilated CNN deals with multi-resolution situation, in which for each scale the signal should still be uniformly sample but of difference resolution. CCNN releases the uniform sampling assumption and requires neither the input signal being uniformly sampled nor the intervals sharing some common factor (which could be regarded as finest resolution). \\n\\n5. Two hot encoding seems another way to discretize, no?\\nYes\\n\\n6. Experiment Result\\nCCNN did have spikes in prediction in figure 5, which indeed confirmed its good performance on that task. The y axis of the line chart in figure 5 represents the time interval to the next event, and a spike indicates some events happen after a long time interval from the previous one. We can see the spike of prediction aligns with the spikes of ground truth data, which is the intuitive evidence of CCNN learned the pattern of the temporal point process. Meanwhile, the quantitative performance evaluation also shows its advantage over existing methods on all of the four datasets. \\n\\n7. Why two hot encoding does not perform that well\\nOur experiment controls that all architectures have the same number of hyperparameters. 
Since the two hot encoding requires more hyperparameters at the input layers, we have reduced the filter size to match the number of hyperparameters to the other architectures. That is why the performance is sometimes compromised.\"}",
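To make the end-to-end idea in this response concrete, here is a minimal sketch of a continuous convolution layer whose kernel is a small network over continuous time offsets, so it can be evaluated at arbitrary nonuniform timestamps. The names, sizes, and architecture are illustrative assumptions, not the authors' exact CCNN layer (which, per the response, is derived via sampling theory and also involves a learned interpolation and a bias network); a real implementation would additionally weight contributions by the local sample spacing and restrict the kernel support.

```python
import torch
import torch.nn as nn

class ContinuousConv1d(nn.Module):
    """Convolution whose kernel w(t) is an MLP over continuous time offsets,
    applied at nonuniform input timestamps (hedged sketch only)."""
    def __init__(self, in_ch, out_ch, hidden=32):
        super().__init__()
        self.kernel = nn.Sequential(  # w: R -> R^{out_ch x in_ch}
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, out_ch * in_ch))
        self.out_ch, self.in_ch = out_ch, in_ch

    def forward(self, x, t_in, t_out):
        # x: (N, in_ch) values at nonuniform times t_in: (N,); t_out: (M,)
        dt = t_out[:, None] - t_in[None, :]                   # (M, N) offsets
        w = self.kernel(dt.reshape(-1, 1))                    # kernel at each offset
        w = w.reshape(len(t_out), len(t_in), self.out_ch, self.in_ch)
        return torch.einsum('mnoi,ni->mo', w, x)              # (M, out_ch)

layer = ContinuousConv1d(in_ch=1, out_ch=4)
x = torch.randn(5, 1)                                         # 5 nonuniform samples
t_in = torch.tensor([0.0, 0.01, 5.0, 7.23, 11.1])             # timestamps from the response
t_out = torch.linspace(0.0, 11.0, 8)                          # uniform output grid
print(layer(x, t_in, t_out).shape)                            # torch.Size([8, 4])
```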
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"1.\\tThe motivation of continuous convolution is not very clear, can the authors please motivate? To my understanding this is just to handle inputs with unequal time steps, but that can be handled multiple ways, why not just naively resample?\\n2.\\tThe proposed network was defined as continuous convolution followed by the standard convolution. Why not just stack multiple continuous convolutions?\\n3.\\tContinuous convolution should be a general case for standard convolution, can authors explicitly show it?\\n4.\\tAnother way to handle unequal timesteps is by using dilated convolution, can authors please comment how they differ, pros and cons etc.?\\n5.\\tTwo hot encoding seems another way to discretize, no?\\n6.\\tThe experiments section is rather weak, CCNN seems to have a lot of spikes in prediction, e.g., in Fig. 5.\\n7.\\tIt\\u2019s very strange why two hot encoding does not perform that well, while reading the method section, it seems very obvious to take two ends of an interval, in that way two hot encoding seems logical.\\n\\nOverall it seems like an easy extension with a lot of parts not well-justified. Also I don't clearly have a well-grounded motivation for a continuous convolution.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a continuous CNN model to accommodate the nonuniform time series data. The model learns the interpolation and the convolution kernel functions in an end-to-end manner, so that it can capture the signal patterns and be flexible. A layer has three networks, which learn a kernel function to represent the combination of interpolation and convolution, a bias function to represent the error correction with convolution, and then produce the output based on them. The authors introduce two assumptions and a two-hot encoding scheme for the input to control the model complexity. The paper also introduces an application of the proposed CCNN by combing with temporal point process. Experiments on simulated data compares the proposed method with some degenerative baselines show the advantage of learning the interpolation and the two-hot encoding configuration. The authors compare the performance on time interval prediction task based on real world dataset to show the model produces a better history embedding for the task.\\nOverall, the paper has some incremental improvements on the existing methods that dealing with the nonuniform time series data. Instead of using preset interpolation kernels, the proposed model can learn it with the convolution in a data-driven manner.\\nThe paper includes clear explanation on module structure and detailed experiment settings.\\nThe experiments of signal value prediction support the claims of the advantages of the proposed model.\", \"the_notation_in_the_caption_of_figure_1_is_a_little_confusing\": \"is x(t_i\\u2019) the same as hat x(t) in the algorithm?\\nIt is good that the related works section mentioned the adapted RNNs that are used as baselines in the real-world dataset experiment, and the differences between the proposed model and the related SNNs are introduced.\\nHowever, this section and the introduction can be better organized to distinguish the novelty and the contribution of the work.\\nIn page 7, the purpose of the reference in the sentence \\u201cThe time information is either two-hot encoded (Adams et al., 2010) (CCNN-th), or not encoded (CCNN).\\u201d is not very clear.\\nIt will be better if there are a little more analysis of the experiment on predicting time intervals to next event.\\nThe upper plots in the figure may be not convincing enough to support the claims.\\nThe advantage of two-hot encoding seems subtle in the figure 5. Is there any reason for the significantly higher deviation of CCNN-th in StackOverflow?\\nSometimes the usage of \\u201cCCNN\\u201d is not clear, for example, the experiment on speech interpolation compares the \\u201cCCNN-th\\u201d method with baselines, but uses \\u201cCCNN\\u201d in the analysis. Also, it could be better to show the \\u201cCCNN-th\\u201d result in the upper plots of figure 5 instead of \\u201cCCNN\\u201d.\", \"minor_comment\": \"There are some typos in the paper, for example, missing the right parenthesis in page3 \\u201c(refer to Appendix A.1\\u201d, in page4 section 4.1 \\u201cAccording to Eq.(4), the input is \\u2026\\u201d.\\n\\u201cThe left plot shows\\u201d in the last line of the caption of figure 3 should be \\u201cright\\u201d.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"A method that was proposed by authors deals with a problem of non-uniform data in time series. One of ways to deal with this problem is interpolate input signal between data points. In signal processing a standard way to interpolate is to apply a convolution with a kernel. This operation, by itself, is a non-trivial, since we lack any information about signal's spectrum (and do not know optimal kernel). Thus, the authors propose to search for a kernel in a form neural network. To this term they also add bias term which is also a neural network.\\n\\nBasic idea of the paper seems promising, but reported results are only partial. Since a paper is experimental, i.e. no theory at all, then the main judgement should be based on experimental results. They are not convincing, as CCNN is compared with methods that can not be called state-of-the-art.\"}"
]
} |
SJeQGJrKwH | DS-VIC: Unsupervised Discovery of Decision States for Transfer in RL | [
"Nirbhay Modhe",
"Prithvijit Chattopadhyay",
"Mohit Sharma",
"Abhishek Das",
"Devi Parikh",
"Dhruv Batra",
"Ramakrishna Vedantam"
] | We learn to identify decision states, namely the parsimonious set of states where decisions meaningfully affect the future states an agent can reach in an environment. We utilize the VIC framework, which maximizes an agent’s `empowerment’, i.e., the ability to reliably reach a diverse set of states -- and formulate a sandwich bound on the empowerment objective that allows identification of decision states. Unlike previous work, our decision states are discovered without extrinsic rewards -- simply by interacting with the world. Our results show that our decision states 1) are often interpretable, and 2) lead to better exploration on downstream goal-driven tasks in partially observable environments. | [
"reinforcement learning",
"probabilistic inference",
"variational inference",
"intrinsic control",
"transfer learning"
] | Reject | https://openreview.net/pdf?id=SJeQGJrKwH | https://openreview.net/forum?id=SJeQGJrKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"yvnRFvxKkw",
"SJxf_1_9nr",
"B1lexwJ2jB",
"BkxEtLJnoH",
"rygx6BJ2sS",
"Skgk7BkhjB",
"Skxb3Ek3iH",
"BJlvPXJhsH",
"rkebgjk0Fr",
"SkxAp2npYr",
"BJgGuIGTKH"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726893,
1574760297919,
1573807848382,
1573807739702,
1573807543953,
1573807383352,
1573807272859,
1573806942925,
1571842792866,
1571830982295,
1571788393855
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1574/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1574/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1574/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1574/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This work is interesting because it's aim is to push the work in intrinsic motivation towards crisp definitions, and thus reads like an algorithmic paper rather than yet another reward heuristic and system building paper. There is some nice theory here, integration with options, and clear connections to existing work.\\n\\nHowever, the paper is not ready for publication. There were were several issues that could not be resolved in the reviewers minds (even after the author response and extensive discussion). The primary issues were: (1) There was significant confusion around the beta sensitivity---figs 6,7,8 appear misleading or at least contradictory to the message of the paper. (2) The need for x,y env states. (3) The several reviewers found the decision states unintuitive and confused the quantitative analysis focus if they given the authors primary focus is transfer performance. (4) All reviewers found the experiments lacking. Overall, the results generally don't support the claims of the paper, and there are too many missing details and odd empirical choices. \\n\\nAgain, there was extensive discussion because all agreed this is an interesting line of work. Taking the reviewers excellent suggestions on board will almost certainly result in an excellent paper. Keep going!\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a mechanism for identifying decision states, even on previously unseen tasks. Decision states are states from which the option taken has high mutual information with the final state of that option, but low mutual information with the action at a time-step, given the current state. An intrinsic reward based on an upper bound of the relevant mutual information speeds up learning in similar environments that the agent has not encountered.\\n\\nA key contribution of this work is extending the notion of goal-driven decision states to goal-independent decision states. The authors also introduce an interesting upper bound on the mutual information between options and final states.\\n\\nThe authors provide an empirical evaluation that supports their central claims. \\n\\nI recommend this paper be accepted because it contributes an interesting theoretical result, a definition of decision state that does not depend on extrinsic rewards, and an algorithm to find such decision states.\", \"further_suggestions_to_improve_clarity_that_did_not_influence_the_decision\": [\"The acronym VIC is used frequently throughout the introduction, but is not explained until section 2. Please introduce variational intrinsic control in the introduction.\", \"The partial observability claim is not substantiated by the experiments. From the general response to reviewers, \\\"Therefore, we make the assumption that the complete state is available, in order to study unsupervised decision states.\\\" It is not a problem to assume that the complete state is available, but claiming to generalize to partial observability is not entirely correct, even if your method handles the same semi-partially observable case as previous work like VIC or DIAYN.\", \"Please explicitly describe the motivation for using a bottleneck variable.\", \"In MDPs with fixed episode length, the probability of termination is only non-zero on the last time step. Therefore, information about the time step must be included in a Markov state. The options in this paper terminate based on a time horizon, but there is no mention of whether the intra-option time step is included in the state or bottleneck variables.\"]}",
"{\"title\": \"Response to AnonReviewer3 (Part 2/2)\", \"comment\": \"> Finally, as shown in the Appendix, the method seems rather brittle, requiring both a schedule on the size of the option layer and the strength of the regularization term, something which would be difficult to do in the absence of a downstream task.\\n\\nThe curriculum approach of gradually increasing the number of options has been drawn from prior work (VALOR) where the authors note that it is hard to train option discovery algorithms without such a schedule.\\nThe annealing of regularization strength $\\\\beta$ is a linear schedule common in KL-regularized objectives [2] to slowly ease in the regularization. We did not have to pick a complicated schedule for any of these. Moreover, these are design choices which are independent of the transfer performance, and are simply to maximize empowerment for a given \\\"unsupervised\\\" training loop. Thus, we do not find the approach to be generally brittle.\\n\\n> Similarly, I am not quite sure what to make of Figure 4 apart from the fact that different latent options yield different trajectories. See detailed comments below.\\n\\nIn the MountainCar experiments, we show that in a setting where the state space (and connectivity) is more complicated than a simple 2D grid world, we are still able to obtain decision states as a sparse set of states in the environment where the agent switches between options. We show one such example of a trained model where the discovered decision states were concentrated near the x-axis (velocity = 0 line).\\n\\n\\n> * It is rather disappointing that the reverse predictor uses privileged information, in the guise of x-y features. \\n\\nPlease refer to our general response about privileged (x, y) information.\\n\\n> I do not believe the sandwich bound explains why Eq. 6 helps uncover decision states. ...\\n\\nWe do not claim that Eq. 6 helps discover decision states. We wrote down Eq. 6 with goal of formalizing how one might identify such decision states without explicit goals, and then observed that the objective is in fact an upper bound, which might be of more general interest. We completely agree with the reviewer that not every upper bound one could formulate would result in decision states (thus the claim is only true for the particular upper bound we derive). We are unsure what further theoretical insights we could draw from this, but would be happy to explore any concrete directions the reviewer had in mind.\\n\\n> Decision states are never clearly defined to be those having high mutual information $I(\\\\Omega, A_t, S_t)$. It is also not clear from the main text was is being plotted in Figure 3...\\n\\nWe have updated the paper to clearly define decision states and the values plotted in the heatmaps of Figure 3.\\n\\n> * Footnote 7. High variance on $\\\\beta$=1e-4. Is it possible that too few seeds were used to estimate the standard error on the mean?\\n\\nWe used the same number of random seeds (10) for all experiments in Figure 5. We did not observe a significant reduction in variance beyond 10 seeds.\\n\\n> * \\u201cUpper bound is too tight\\u201d. 
I don\\u2019t think this is what you mean: tight would refer to how good the upper-bound approximation is, which is different from the constraint specifying an upper-bound which is too small.\\n\\nWe agree that the term tight upper bound does not refer to how close the approximation is to the true value, we have corrected this in the revision.\\n\\n> Section 4.1: Did not understand the sentence \\u201cwe noticed that if an intersection is a decision state [...] having already made the decision.\\u201d\\n\\nWe meant to say that not every state which looks like an intersection needs to be a decision state, if for example the part of the state space in question is only spanned by one option. We have clarified this in the revision.\\n\\n> * Notation: $p^J(s_f | \\\\omega, s_0)$ What does J refer to? J is not defined anywhere.\\n\\n$p^J(s_f | \\\\omega, s_t)$ is defined in Section 2.2 (VIC) as the terminal state distribution achieved when executing policy policy $\\\\pi(a_t | \\\\omega, s_t)$. We will clarify this better.\\n\\n> *\\u201cDecision states to be points where the cart has velocity=0\\u201d: wouldn\\u2019t this mostly be restricted to the initial state?\\n\\nThe cart may come to a halt at any point on the slopes of the mountain as well, instead of just the initial state. Figure 4(d) shows the concentration of decision states near the velocity=0 line but at varying x coordinates (i.e. not just the initial x-coordinate).\\n\\n> * Section 4.1: Did not understand what is meant by \\u201cwhere trajectories associated with different options intersect\\u201d? What does this mean concretely in MountainCar for trajectories to intersect?\\n\\nTrajectories in MountainCar are visualized in the position-velocity space and if trajectories corresponding to two different options reach the same position and velocity, then they are said to intersect.\\n\\n[2] Higgins, Irina, et al. \\\"beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.\\\" ICLR 2.5 (2017): 6.\"}",
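The linear ease-in of $\beta$ described in this response can be as simple as the following sketch (the warm-up fraction is a hypothetical knob, not a value taken from the discussion):

```python
def beta_schedule(step, total_steps, beta_max, warmup_frac=0.5):
    # Linear ease-in of the KL regularization strength, in the spirit of
    # beta-VAE-style schedules: beta grows linearly over the warm-up
    # phase and then stays at beta_max.
    warmup_steps = max(1, int(warmup_frac * total_steps))
    return beta_max * min(1.0, step / warmup_steps)
```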
"{\"title\": \"Response to AnonReviewer3 (Part 1/2)\", \"comment\": \"We thank the reviewer for their detailed comments and feedback.\\n\\n> Experiments are unfortunately limited to MiniGrid environments and MountainCar, whereas most recent work on empowerment (DIAYN, VALOR, etc.) has focused on more complex continuous control benchmarks\\n\\nThe focus in recent work (DIAYN, VALOR) on empowerment based objectives in RL is to identify / learn skills useful for downstream tasks. In contrast, as mentioned in the introduction, and in the general response, our focus is not on identifying transferable skills but rather using skills/options to identify \\\"decision-states\\\" in an environment (in an unsupervised manner) and study how well does providing an incentive to visit decision states aid transfer to novel environments.\\n\\nMoreover, the experiments in DIAYN and VALOR have an implicit assumption that skills have a common start state and due to their respective empowerment objectives, the skill-conditioned trajectories do not overlap after the first state. Hence, there is no obvious notion of what a \\u201cdecision state\\u201d would look like in any of the DIAYN+VALOR experimental settings e.g.: Figure 2 (a, b) in the DIAYN paper has a single state common to all trajectories in the 2D navigation task and the first set of states in the overlapping skills task \\u2014 where the skills do not overlap again after they leave the narrow region on the left.\\n\\n> the experiments themselves are limited in scope and rely mostly on evaluating whether the mutual information term between goal and actions can be used to craft an auxiliary reward for downstream tasks, a task first derived in the InfoBot paper\\n\\nWe would like to reiterate that our goal was to demonstrate that a notion of decision states defined in terms of the marginal over different options can be useful for transfer to downstream tasks. Our mathematical formulation uses mutual information to formalize this notion of decision states, building on top of previous work such as [1].\\n\\n> the results here are mixed, with the mutual information based reward statistically outperforming count-based bonus (which it builds on) only when the optimal hyper-parameter $\\\\beta$ (controlling the strength of the regularization term) is known\\n\\nThe choice of $\\\\beta$ is akin to model selection in any unsupervised learning algorithm e.g.: picking the number of clusters in K-means. We believe that using a particular supervised task to pick the value of \\\\beta is not unfair, as long as the value of \\\\beta thus selected transfers to other tasks. In response to this concern, we ran an experiment where we studied if the best value of \\\\beta from N6S25 experiments also generalizes to other environments. Experimental evidence suggests this is true, with $\\\\beta$ = 1e-2 yielding the transfer performance in MultiRoomN5S4 as well.\\n\\n> A much more compelling use case for the proposed regularization term would be improved data-efficiency on a downstreak task after pre-training with Eq. 6, following in the footsteps of DIAYN\\n\\nWe ran such an experiment in order to inspect the performance of the learnt skills on continuous control tasks in the DIAYN setting with and without our information regularization. We ran it on Hopper, BipedalWalker, InvertedDoublePendulum and MountainCar (continuous actions) and found no statistically significant change in the performance and sample complexity of the best skill fine-tuned on external reward. 
Thus, it seems that the benefit from our approach is more in line with the gains from InfoBot, than those from DIAYN (in terms of transfer).\\n\\n> There is also an important missing baseline which is glossed over...\\n> As this term is already present in Eq. 6 it would be interesting to repeat the experiments, dropping the second term (minimality) but instead sweeping over the strength of the entropy regularization term $\\\\alpha$.\\n\\nPlease refer to our general response, we have added this baseline with $\\\\beta=0$. We found that sweeping $\\\\alpha$ values generally does not lead to stable options learning for a broad range of alpha values. Thus, it is tougher to impose different strengths of the information regularization using this suggested parameterization and we instead fix a value of $\\\\alpha$ and follow the InfoBot machinery to impose the bottleneck leveraging the DPI.\\n\\n[1] van Dijk, Sander G., and Daniel Polani. \\\"Grounding subgoals in information transitions.\\\" 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2011.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"> the resulting implementation takes too many unmotivated modifications to make work, and the results aren't terribly convincing despite this ...\\n> ... Specifically, the usage of privileged information (x,y coordinates in what is described as a partially observed domain) ...\\n\\nConstraining the empowerment should be thing that handles spurious diversity, so the need to use x,y coordinates is concerning.\\n\\nPlease see our general response on the use of privileged (x, y) information.\\n\\n> Regarding the empirical results, do all of the baseline make similar use of domain knowledge / privileged information? For example, does your implementation of DIAYN utilize x,y coordinates in the option predictor?\\n\\nYes, to ensure appropriate comparison, the relevant baselines also have access to privileged X-Y information during pre-training.\\n\\n> ... and the ad hoc choice of which networks had memory (i.e. an LSTM) don't fit the narrative that motivates the work\\n\\nThe particular inductive biases on networks in deep learning (ex. convolution) generally have as big an impact as the exact objective and formulation that is used. We argue that our choice of an LSTM is also in the same spirit, and does not make the work any less principled. The difference between the recurrence used in InfoBot versus our model is that InfoBot uses recurrence over partially observed states for both the encoder $p(Z_t|S_t, G_t)$ and decoder $\\\\pi(A_t|S_t, Z_t)$. Our model uses recurrence over just the encoder $p(Z_t|S_t, \\\\Omega)$ and keeps the decoder \\\\pi(A_t|S_t, Z_t)$ reactive. This choice was made because we do not have a time-varying goal vector that InfoBot had at every step, instead we have an episodic option which may be easily inferred from the past by the decoder.\\n\\n> Is the Beta=0 case considered?\\n\\nPlease refer to the VIC + Max-Entropy baseline in our general response.\\n\\n> The empirical evidence isn't terribly convincing. On two of the three exploration setups, the random network is as performant, and does need a Beta hyper-parameter to tune\\n\\nThe two smaller environments being pointed to \\u2014 N5S4 and N3S4 \\u2014 were used by InfoBot to report transfer performance. However, our point w.r.t. reporting the random network baseline was to show that these environments themselves are not complicated enough for any sophisticated approach to obtain significant (or meaningful) gains over a randomly initialized network. Therefore, in addition to these two, we also report performance on a larger environment \\u2014 N6S25 \\u2014 where there actually is some room for improvement beyond random baselines and heuristic based exploration strategies. Thus, we claim that our work actually establishes stronger baselines for the line of work that ours and InfoBot represents, and the random baseline is something that future works should also compare against to make meaningful progress.\\n\\n> Though, to be fair, the connection between decision state identification and a good count-based exploration bonus is loose\\n\\nAs highlighted in Goyal et. al. (InfoBot), decision states represent a sparse set of sub-goals in the environment. A naive count-based exploration will encourage exhaustive visitation of the state space, whereas decision-states based exploration will narrow this to encourage visitation of a sparser set of decision states which would improve sample efficiency in hard exploration tasks. 
Informally, count-based methods perform \\u201cexhaustive exploration\\u201d whereas decision-state methods perform \\u201ctargeted exploration\\u201d where the targets are all decision states.\\n\\n> The qualitative results are also a bit lacking. I was expecting the doorways to \\\"pop out\\\" more; the relatively muddled decision state activations made me wonder if they were really better than DIAYN's.\\n\\nPlease refer to our general response about decision states aligning with human intuition.\\n\\n> This work would really benefit from a quantitative measure of decision state identification accuracy. Some prior work (e.g. \\\"Grounding Subgoals in Information Transitions\\\") ...\\n\\nWe agree that Dijk & Polani [1] studied a simple 6-room environment where they were able to compute the Relevant Goal Information (RGI) explicitly using an optimal policy obtained by value iteration on the MDP. However, we note that all of our environments (including the Four Room and Maze in Fig. 3) are partially observed MDPs. If we were to compute the true quantities of interest assuming fully observed states, we would not expect alignment with what a partially observed agent ends up learning.\\n\\nReferences\\n[1] van Dijk, Sander G., and Daniel Polani. \\\"Grounding subgoals in information transitions.\\\" 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 2011.\"}",
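To illustrate the 'exhaustive' versus 'targeted' distinction drawn in this response, a transfer-time shaping reward combining the two terms might look like the sketch below (the function and variable names are hypothetical, and the inverse-square-root count bonus is one common choice rather than the paper's exact form):

```python
import math
from collections import defaultdict

visit_counts = defaultdict(int)

def exploration_bonus(obs_key, info_term, beta):
    """Hypothetical transfer-time shaping reward: a count-based term,
    which decays everywhere as states are revisited (exhaustive), plus
    beta times info_term, a per-step estimate of I(Omega; A_t | S_t)
    from the pre-trained encoder, which concentrates the bonus on
    decision states (targeted)."""
    visit_counts[obs_key] += 1
    return 1.0 / math.sqrt(visit_counts[obs_key]) + beta * info_term
```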
"{\"title\": \"Response to AnonReviewer1 (Part 2/2)\", \"comment\": \"> 3. Can you show similar plots for the MultiRoom environment? Perhaps those will shed more light into the learned behavior.\\n\\nUnfortunately, we did not find the decision states in this case to be human interpretable. However, these states are still \\\"useful\\\" as they lead to better transfer (which is ultimately our goal).\\n\\n> Including all possible ablations to the objective in equation 6 would be helpful to tease apart the contribution of each term: variational control, information bottleneck, and entropy.\\n\\nPlease see our general response about the VIC + Maximum Entropy ablation. We do not drop the entropy term in our objective as we found it necessary for stable training of all option-based models.\\n\\n> 5. The results in Table 1 and Figures 6 do not show a significant gain in performance. Moreover, I suspect the other methods will converge to similar values soon after 8M steps. For a fair comparison, it would be good to show how the curves look after all (or at least . more of the models) converge. From Table 1, it actually seems to me that InfoBot encounters less penalty across all 3 models, even tho DS-VIC overperforms on the more challenging one... ... Plus, the comparison does not seem fair given that the the numbers are reported before the baselines converged.\\n\\nWe trained all models for the same number of time-steps for a fair comparison and reported numbers after the best model converged. As a result, some of the baselines did not converge before the time-step limit. This demonstrated that DS-VIC had higher sample efficiency than baselines. Furthermore, in the MultiRoom N6S25 (6 rooms of max-size 25) environment a count-based baseline does converge asymptotically and no other baseline was able to beat its sample complexity except DS-VIC. While Table 1 evaluates all models trained for 8M steps (which may seem like an unfair comparison), Figure 6 demonstrates the sample efficiency of DS-VIC over baselines and justifies our choice of 8M steps.\\n\\n> ... Moreover, in all 3 cases, at least one of the other methods seems to at least be close to the performance of DS-VIC so I am concerned that these may not be very challenging tasks for well-tuned SOTA methods.\\n\\nWe disagree, Fig. 6 clearly shows statistically significant improvements over baselines for DS-VIC.\\n\\n> 6. Did you pretrain the baselines for the same number of steps as DS-VIC? Please include more details about this stage and how you ensure the comparison was fair.\\n\\nYes, we did train all baselines for equal number of steps but we picked best checkpoint over training for the transfer task (similar to Infobot). We have added these details in the revision.\\n\\n> It might be useful to include other baselines such as the curiosity-based exploration method from Pathak et al, 2017 . or universal value functions (Schaul et al. 2015)\\n\\nIn general, curiosity based approaches are useful for high dimensional observation spaces such as images, and thus do not form a natural baseline for exploration in gridworlds. Moreover, InfoBot is a more natural baseline for our work than UVF. However, for completeness we plan to add both these baselines in the next revision.\"}",
"{\"title\": \"Response to AnonReviewer1 (Part 1/2)\", \"comment\": \"> While this is an interesting paper, I did not find the experimental section to be convincing enough for publication at this stage. Moreover, I am concerned by the novelty of the proposed approach, which seems very similar to InfoBot, the main difference between them being the replacement of the goals with options thus moving towards less supervision / use of prior-knowledge. However, if I understand correctly, this method still requires to specify a prior over the options, so it is not clear why DS-VIC would be preferable to InfoBot\\n\\nSimilar to modern approaches in deep variational inference (such as a variational auto-encoder [1]), the \\\"prior\\\" is something that is parameterized as a distribution in some abstract latent space and does not necessarily imply hardcoding some knowledge. In such models the prior generally gets \\\"meaning\\\" because of what the decoder (the policy in our case) ends up learning. We follow the same parameterization as DIAYN, setting the prior as a uniform categorical distribution. We have added this to the paper.\\n\\nAs mentioned in introduction of the submission, DS-VIC is preferable to InfoBot in cases where specifying goals is difficult. For e.g. when: 1) rewards are sparse or absent (Pathak et al., 2017), 2) for an agent to learn meaningful behavior, proxy goals and rewards need to be hand engineered (making it hard to scale), and 3) the notion of a goal might not even be obvious in some cases.\\n\\n> If the empirical results showed a more robust and significant gain in performance on more diverse or complex tasks, I would be willing to reconsider my judgement regarding the significance of this work.\\n\\nWe stand by our initial response to all reviewers that given the outlook of our paper to demonstrate utility of our notion of decision states as the marginal over options, we compared directly to the experimental setup of InfoBot. The tasks considered in the paper are goal-driven tasks with really sparse reward and are fairly complex in that sense, requiring appropriate exploration (Section 4.2 Transfer to Goal-Driven Tasks).\\n\\n> 1. How do you define the final state $S_f$? Do you only consider the episodic RL setting? Do you consider $S_f$ to always be after a fixed number of steps or whenever the termination function is triggered?\\n\\nYes, we consider the episodic RL setting where each option is terminated after a fixed number of steps, which is a hyperparameter. We have added these details in the experiments section.\\n\\n> 2. Please include more information about what is represented in Figure 3 and the color scale.\\n\\nWe have added the missing color scale in the revision. The darker shades of red represent higher values of $I(A_t, \\\\Omega | S_t)$ and are supposed to represent decision states.\\n\\n> ... (1) it seems like the model does not detect \\\"all decision states\\\" (e.g. intersections) . that a human may consider while including others (e.g. corners, for which I do not agree that the agent should be incentivized to go even after learning from the reward structure that there isn't much to gain), ...\\n> ... and (3) the model doesn't seem to be very consistent about what it considers a to be a \\\"decision state\\\".\\n\\nPlease refer to our general response about alignment of decision states with human intuition. Moreover, subject to optimization and initialization the model identifies different options which reach different parts of the state space. 
The decision states are a function of the options learnt (in the manner we define them). Thus, each intersection need not be a decision state (if there is only one option that leads to that part of the space).\\nIn general, we do not claim that our method necessarily identifies all states that humans would agree as decision states but we find that the decisions state that do emerge have some non-trivial alignment with what humans would expect. Regardless, we show empirically that identifying decision states via the pre-trained encoder leads to better transfer performance in novel environments.\\n\\n\\n> ... (2) why is it that the for example the top-left figure has a rather nonuniform distribution across the rooms (is it influenced by the initial position of the agent?) ...\\n\\nYes, the decision states are influenced by the initial state of the agent and for the Four Room and Maze environments in Figure 3, the initial state was chosen uniformly at random from the set of all states.\\n\\nReferences\\n[1]: Kingma, Diederik P., and Max Welling. 2013. \\u201cAuto-Encoding Variational Bayes.\\u201d http://arxiv.org/abs/1312.6114v10.\"}",
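The uniform categorical prior over options mentioned in this response is the simplest possible choice; as a brief sketch (the number of options is an illustrative value):

```python
import torch

K = 8  # number of options, an illustrative value
prior = torch.distributions.Categorical(logits=torch.zeros(K))
omega = prior.sample()  # option sampled once and held fixed for the episode
```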
"{\"title\": \"General Response to All Reviewers\", \"comment\": \"We thank the reviewers for the detailed feedback and are encouraged that they thought the paper provides interesting theoretical results [R3], and is well motivated and interesting [R2, R1].\\n\\nWe have also uploaded a revision addressing several suggestions. Below, we address general concerns and individually reply to each reviewer.\\n\\nWe would like to reiterate that our key contribution is in providing a principled mathematical framework to identify decision states in goal-free settings. Towards this, we derive an upper bound on the VIC objective [R3] and demonstrate that the resulting objective is useful for identifying decision states. Next, we show that incentivizing decision state visitation leads to better transfer performance. Our paper focuses on replicating the exact settings and experiments used by InfoBot [R3]. Furthermore, while methodologically related, we believe direct comparisons to experiments in DIAYN and VALOR (which focus more on skill learning as opposed to decision states) are not central to the goal of the paper.\\n\\n[R2, R1] Alignment to human intuition: Although our decision states align occasionally with human intuition, we are ultimately interested in transfer to goal-driven tasks. While interpretable decision states do emerge in InfoBot with goal conditioning, our formulation chooses decision states which have high option-information (emerging naturally from our mutual information computation in Eq. 8 and Appendix A5), as opposed to states which have high goal-information (for a particular goal, as in InfoBot). Because of this averaging, our unsupervised decision states are not expected to be as interpretable as InfoBot.\\n\\nHowever, if we condition on a single option (a loose proxy for a goal in our setting), the decision states become more interpretable and evident. However, we refrain from making strong claims about them being exactly human interpretable. Moreover, not every state which looks like an intersection needs to be a decision state, if for example the part of the state space in question is only spanned by one option.\\n\\nUltimately, we take the view that a useful decision state is one that is useful for a downstream task. Thus, we don't seek a binary threshold for decision states, but view them as part of a continuum.\\n\\n[R3] Optimal value of $\\\\beta$: The choice of $\\\\beta$ is inherent in unsupervised representation learning is no different from say, choosing the number of clusters in K-means. Picking $\\\\beta$ does not make our method supervised, and should instead be thought of as model selection. Further, our experiments provide initial evidence that the same value of $\\\\beta$ works well across multiple downstream transfer environments.\\n\\n[R2, R3] We provide further justifications for why the use of privileged (x, y) information in the MultiRoom environments does not render our experimental results less meaningful:\\n\\n1. The (x, y) coordinate information is available to all baselines, including ones which infer options in during training [R2] (DIAYN) or condition on a goal vector (InfoBot \\u2014 uses relative coordinates of the goal w.r.t. the current state). Considering choices made in prior work, our usage of (x, y) coordinates is fair. As explained in 3), to the best of our knowledge, it is an open (and orthogonal) problem to understand how to use VIC/DIAYN line of work with only partial state observations and learn explicit options.\\n2. 
No global (x, y) coordinates are used when providing the bonus at transfer time (which is what we ultimately care about). Transfer only uses the state encoder and the policy (and not the option inference network) \\u2014 in a novel environment, one does not need global (x, y) coordinates; only the trained encoder is used to provide an exploration bonus from partial observations.\\n3. Appendix (A6): all option-based methods (DS-VIC, DIAYN) use the MultiRoom environment in which the layout of the rooms and doors is fixed for discovering options in the pre-training phase. In some sense, it is reasonable to assume that the agent (over multiple episodes of training) is building an internal map and estimating its state in the map using a black box SLAM module. Integrating state estimation and mapping into a deep learning pipeline is an active area of research that is orthogonal to our primary contribution [1]. Therefore, we make the assumption that the complete state is available, in order to study unsupervised decision states. \\n\\nWe also discuss in Section A6 about our attempts to discover options without the X-Y coordinates in the pre-training phase.\\n\\n[R1, R2, R3] Missing baseline of VIC + Max-Entropy, $\\\\beta = 0$: We ran an additional experiment with $\\\\beta=0$, and found performance close to $\\\\beta=10^-6$. We have updated Fig. 5 with this. Mathematically, this baseline is identical to running VIC with max-entropy.\\n\\nReferences\\n[1]: Gupta, Saurabh et. al. 2017. \\u201cCognitive Mapping and Planning for Visual Navigation.\\u201d\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an algorithm for discovering decision states in MDPs, in a task agnostic manner. The proposed method essentially generalizes the information bottleneck approach used in InfoBot, to an unsupervised setting. Whereas InfoBot recovers decision states in goal-conditioned policies by minimizing mutual information between goals and actions, the proposed approach (DS-VIC) does the same using implicit goals discovered by variational intrinsic control. Concretely, the authors propose adding a regularization term which constrains the mutual information between an episodic latent variable (having high mutual information to a final state) and each action along the (option conditional) trajectory. This objective is shown to be equivalent to a constraint optimization problem, which fits both a lower and upper bound on the VIC mutual information term. On MiniGrid environments, the approach is shown to yield somewhat interpretable decision states. In the footsteps of InfoBot, the authors then show that the resulting regularizer (the \\u201clatent-action\\u201d mutual information term) can serve as a useful auxiliary reward for transfer tasks with a hard exploration problem (transfer from goal navigation in small to large rooms).\\n\\nThe paper is interesting and provides some interesting theoretical results which adds to the body of work on variational intrinsic control, and unsupervised reinforcement learning. To the best of my knowledge, the derived upper-bound to the mutual information term used in VIC is novel and would be of interest to the community. The extension of InfoBot to the unsupervised regime, swapping extrinsic goals for inferred options is also intuitive and generalizes published work. \\n\\nThat being said, I do not think the paper is ready for publication at this point in time.\\n\\nOn the experimental side, the results are limited and not entirely convincing. Experiments are unfortunately limited to MiniGrid environments and MountainCar, whereas most recent work on empowerment (DIAYN, VALOR, etc.) has focused on more complex continuous control benchmarks. In addition, the experiments themselves are limited in scope and rely mostly on evaluating whether the mutual information term between goal and actions can be used to craft an auxiliary reward for downstream tasks, a task first derived in the InfoBot paper. Unfortunately, the results here are mixed, with the mutual information based reward statistically outperforming count-based bonus (which it builds on) only when the optimal hyper-parameter $\\\\beta$ (controlling the strength of the regularization term) is known. This somewhat breaks the narrative of unsupervised decision states. A much more compelling use case for the proposed regularization term would be improved data-efficiency on a downstreak task after pre-training with Eq. 6, following in the footsteps of DIAYN. Finally, as shown in the Appendix, the method seems rather brittle, requiring both a schedule on the size of the option layer and the strength of the regularization term, something which would be difficult to do in the absence of a downstream task.\\n\\nThere is also an important missing baseline which is glossed over. 
Instead of regularizing the mutual information between goals and actions $I(\\\\Omega, A_t \\\\mid S_t, S_0)$ one could simply encourage the low-level goal-conditioned policy to have high entropy. Indeed, one can show that $KL[\\\\pi(a_t \\\\mid s_t, w_t) \\\\| \\\\pi_0(a_t)]$, with a learnt or fixed prior $\\\\pi_0$, is an upper-bound to $I({s_t, w_t} ; a_t)$: hence minimizing this KL (equivalent to maximizing entropy) would naturally prevent high mutual information between options and individual actions. As this term is already present in Eq. 6 it would be interesting to repeat the experiments, dropping the second term (minimality) but instead sweeping over the strength of the entropy regularization term $\\\\alpha$.\\n\\nWith respect to clarity, the paper could also be greatly improved. Decision states are never clearly defined to be those having high mutual information $I(\\\\Omega, A_t \\\\mid S_t)$. It is also not clear from the main text was is being plotted in Figure 3, requiring the reader to go through Appendix 4 to understand the visualization (without any references to this appendix in the main text). Similarly, I am not quite sure what to make of Figure 4 apart from the fact that different latent options yield different trajectories. See detailed comments below.\", \"detailed_comments\": [\"(method)\", \"It is rather disappointing that the reverse predictor uses privileged information, in the guise of x-y features. This represents quite a lot of prior knowledge about what we wish the options to encode. How does the method perform from the raw state?\", \"Although mathematically elegant, I do not believe the sandwich bound explains why Eq. 6 helps uncover decision states. If we had an unbiased estimate of the mutual information between option and last state, then this would imply that an equality constraint on the VIC mutual information term would similarly yield decision states. This seems unlikely. An alternative hypothesis is that Eq. 6 works by injecting a soft prior, both via the temporal decomposition which aims to minimize $I(\\\\Omega, A_t |...)$ and its upper-bound $I(\\\\Omega, Z_t | \\u2026)$ which bottlenecks state information. Testing this theory could help strengthen the paper.\", \"It would be nice to spell-out that the standard \\u201creverse bound\\u201d employed by VIC cannot be used to estimate $I(\\\\omega, A_t \\\\mid S_t, S_0)$ as this would yield a lower-bound whereas we aim to minimize this term. I was almost tricked into thinking this was a simpler and valid strategy before realizing my mistake.\", \"Footnote 7. High variance on $\\\\beta=1e-4$. Is it possible that too few seeds were used to estimate the standard error on the mean?\", \"(clarity)\", \"*\\u201cDecision states to be points where the cart has velocity=0\\u201d: wouldn\\u2019t this mostly be restricted to the initial state?\", \"What exactly was done for DIAYN in relation to Eq. 8? The text from S4-Baselines seems at odds with the caption.\", \"\\u201cUpper bound is too tight\\u201d. I don\\u2019t think this is what you mean: tight would refer to how good the upper-bound approximation is, which is different from the constraint specifying an upper-bound which is too small.\", \"Notation: Section 2.1 states that upper-case denotes random variables and lower-case denotes samples. Following this notation, equation should read e.g. $w \\\\sim p(\\\\Omega)$ and not $\\\\Omega \\\\sim p(w)$.\", \"Notation: $p^J(s_f \\\\mid w, s_0)$. What does J refer to? 
J is not defined anywhere.\", \"Section 4.1: Did not understand the sentence \\u201cwe noticed that if an intersection is a decision state [...] having already made the decision.\\u201d\", \"Section 4.1: Did not understand what is meant by \\u201cwhere trajectories associated with different options intersect\\u201d? What does this mean concretely in MountainCar for trajectories to intersect? Furthermore aren\\u2019t states with velocity=0 (mostly) restricted to the initial state?\"]}",
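For reference, the entropy-bound claim in this review follows from a standard identity; a short derivation (writing $\pi(a_t)$ for the marginal of the option-conditioned policy over states and options) is sketched below:

```latex
% Since KL divergences are nonnegative, for any fixed prior \pi_0:
\mathbb{E}_{s_t, w_t}\,\mathrm{KL}\big[\pi(a_t \mid s_t, w_t) \,\|\, \pi_0(a_t)\big]
  = I(\{S_t, W_t\};\, A_t) + \mathrm{KL}\big[\pi(a_t) \,\|\, \pi_0(a_t)\big]
  \;\ge\; I(\{S_t, W_t\};\, A_t).
```

Minimizing the left-hand side (equivalently, maximizing entropy when $\pi_0$ is uniform) therefore also suppresses the mutual information term, which is the baseline the review asks for.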
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors introduce a novel decision point discovery method, wherein the VIC objective is constrained to minimize the amount of information between the option and the actions taken along the trajectory. After relaxing the constraint and introducing an upper bound to I(a; o), a tractable algorithm is produced. An implementation is then tested empirically on several partially observed grid worlds and a simple continuous control task on both qualitative bottleneck identification and quantitative benefits as an exploration bonus in a transfer learning setup.\\n\\nOverall I think the approach is well motivated and interesting, but the resulting implementation takes too many unmotivated modifications to make work, and the results aren't terribly convincing despite this; as such I currently vote for it's rejection. Specifically, the usage of privileged information (x,y coordinates in what is described as a partially observed domain) and the ad hoc choice of which networks had memory (i.e. an LSTM) don't fit the narrative that motivates the work. Constraining the empowerment should be thing that handles spurious diversity, so the need to use x,y coordinates is concerning.\\n\\nRegarding the empirical results, do all of the baseline make similar use of domain knowledge / privileged information? For example, does your implementation of DIAYN utilize x,y coordinates in the option predictor? Is the Beta=0 case considered? It isn't mentioned, but perhaps it amounts to one of your other baselines?\\n\\nThe empirical evidence isn't terribly convincing. On two of the three exploration setups, the random network is as performant, and does need a Beta hyper-parameter to tune. Though, to be fair, the connection between decision state identification and a good count-based exploration bonus is loose. The qualitative results are also a bit lacking. I was expecting the doorways to \\\"pop out\\\" more; the relatively muddled decision state activations made me wonder if they were really better than DIAYN's.\\n\\nThis work would really benefit from a quantitative measure of decision state identification accuracy. Some prior work (e.g. \\\"Grounding Subgoals in Information Transitions\\\") were able to do this by choosing environments where the quantities of interest were tractable to calculate exactly. This would at least allow us to see if the discovered decision states correspond to those that are optimal under your metric.\", \"rebuttal_edit\": \"Thank you for the thoughtful rebuttal. If this were an option, I'd raise my score to a 5. But as my vote is to 'revise and resubmit' (unfortunately translated to 'reject' as per the conference system), I'll leave it in the 'reject' score bucket.\\n\\nYour rebuttal lessened my concerns about the using (x,y) and only using an LSTM for the policy. I agree these are largely orthogonal issues, and since they were consistent with their baselines, that is fine.\\n\\nHowever, the response to the unintuitive nature of the \\\"decision states\\\" is less convincing. If all you care about is the downstream task performance, why even show the qualitative results or impose the semantics of \\\"decision states\\\" on the learned representations? 
The sandwich bound is novel in and of itself; I understand the need to relate to prior work, but I actually think dropping the language around \\\"decision states\\\" (maybe outside of the algorithm's motivation) and talking purely in information theoretic terms would improve the paper.\\n\\nYour response to [R3] on the setting of Beta hyper-parameter seems to not be supported by the results. You claim that values work well across multiple tasks, but the best reported value for the \\\"hard\\\" task (1e-2) is worse than the random baseline on the \\\"easy\\\" tasks.\\n\\nPerhaps only the \\\"hard\\\" task matters and the \\\"easy\\\" tasks are only of significance due to their usage in InfoBot. But I'd argue that unless your method is dominating existing methods on both without changing hyper-parameters, switching to a more complex (and commonly used) benchmark would be more convincing. The Atari Suite or the control tasks used in related work (e.g. DIAYN) would be my suggestion.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThis paper proposes an unsupervised method for discovering \\\"decision states\\\", defined as states where decisions affect the future states an agent can reach in the environment, based on the variational intrinsic control (VIC) framework that maximizes an agent's empowerment. This paper draws connections to many prior works such as VIC, diversity is all you need (DIAYN), and InfoBot and shows results on MiniGrid and MountainCar.\", \"main_comments\": \"While this is an interesting paper, I did not find the experimental section to be convincing enough for publication at this stage. Moreover, I am concerned by the novelty of the proposed approach, which seems very similar to InfoBot, the main difference between them being the replacement of the goals with options thus moving towards less supervision / use of prior-knowledge. However, if I understand correctly, this method still requires to specify a prior over the options, so it is not clear why DS-VIC would be preferable to InfoBot. If the empirical results showed a more robust and significant gain in performance on more diverse or complex tasks, I would be willing to reconsider my judgement regarding the significance of this work. \\n\\nMinor Questions / Comments:\\n\\n1. How do you define the final state S_f? Do you only consider the episodic RL setting? Do you consider S_f to always be after a fixed number of steps or whenever the termination function is triggered?\\n\\n2. Please include more information about what is represented in Figure 3 and the color scale. I am slightly confused by the interpretation of that plot because (1) it seems like the model does not detect \\\"all decision states\\\" (e.g. intersections) . that a human may consider while including others (e.g. corners, for which I do not agree that the agent should be incentivized to go even after learning from the reward structure that there isn't much to gain), (2) why is it that the for example the top-left figure has a rather nonuniform distribution across the rooms (is it influenced by the initial position of the agent?) and (3) the model doesn't seem to be very consistent about what it considers a to be a \\\"decision state\\\".\\n\\n3. Can you show similar plots for the MultiRoom environment? Perhaps those will shed more light into the learned behavior. \\n\\n4. Including all possible ablations to the objective in equation 6 would be helpful to tease apart the contribution of each term: variational control, information bottleneck, and entropy.\\n\\n5. The results in Table 1 and Figures 6 do not show a significant gain in performance. Moreover, I suspect the other methods will converge to similar values soon after 8M steps. For a fair comparison, it would be good to show how the curves look after all (or at least . more of the models) converge. From Table 1, it actually seems to me that InfoBot encounters less penalty across all 3 models, even tho DS-VIC overperforms on the more challenging one. Moreover, in all 3 cases, at least one of the other methods seems to at least be close to the performance of DS-VIC so I am concerned that these may not be very challenging tasks for well-tuned SOTA methods. 
Plus, the comparison does not seem fair given that the the numbers are reported before the baselines converged. \\n \\n6. Did you pretrain the baselines for the same number of steps as DS-VIC? Please include more details about this stage and how you ensure the comparison was fair.\\n\\n7. It might be useful to include other baselines such as the curiosity-based exploration method from Pathak et al, 2017 . or universal value functions (Schaul et al. 2015)\"}"
]
} |
BJgQfkSYDS | Neural Policy Gradient Methods: Global Optimality and Rates of Convergence | [
"Lingxiao Wang",
"Qi Cai",
"Zhuoran Yang",
"Zhaoran Wang"
] | Policy gradient methods with actor-critic schemes demonstrate tremendous empirical successes, especially when the actors and critics are parameterized by neural networks. However, it remains less clear whether such "neural" policy gradient methods converge to globally optimal policies and whether they even converge at all. We answer both the questions affirmatively in the overparameterized regime. In detail, we prove that neural natural policy gradient converges to a globally optimal policy at a sublinear rate. Also, we show that neural vanilla policy gradient converges sublinearly to a stationary point. Meanwhile, by relating the suboptimality of the stationary points to the representation power of neural actor and critic classes, we prove the global optimality of all stationary points under mild regularity conditions. Particularly, we show that a key to the global optimality and convergence is the "compatibility" between the actor and critic, which is ensured by sharing neural architectures and random initializations across the actor and critic. To the best of our knowledge, our analysis establishes the first global optimality and convergence guarantees for neural policy gradient methods. | [
"global optimality",
"rates",
"stationary points",
"actor",
"critic",
"schemes",
"tremendous empirical successes"
] | Accept (Poster) | https://openreview.net/pdf?id=BJgQfkSYDS | https://openreview.net/forum?id=BJgQfkSYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fJ34mCeZBO",
"r1x5sJ_uiS",
"SklqKJuOjS",
"Skexj6DusB",
"SyepUav_oB",
"HJen2hwdor",
"HygSQAMAYr",
"ByxGBAtpKS",
"ByerOV_aFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726853,
1573580705855,
1573580673751,
1573580184326,
1573580116526,
1573579955712,
1571855901046,
1571819065921,
1571812460624
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1573/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1573/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1573/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper makes a solid contribution to understanding the convergence properties of policy gradient methods with over-parameterized neural network function approximators. This work is concurrent with and not subsumed by other strong work by Agarwal et al. on the same topic. There is sufficient novelty in this contribution to merit acceptance. The authors should nevertheless clarify the relationship between their work and the related work noted by AnonReviewer2, in addition to addressing the other comments of the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer 1.\", \"comment\": \"We appreciate your review of our work. We have addressed the issues raised by the other reviewers and revised our work accordingly.\"}",
"{\"title\": \"Reply to Reviewer 3.\", \"comment\": \"We appreciate the valuable review and suggestions. We have revised our work accordingly. In what follows, we address your concerns in detail.\\n\\n1. On two-layer neural network parameterization. Thank you for the suggestion. We revise our abstract and introduction and highlight the two-layer neural network setting. See P.1 and P.2 of the revised work. Besides, our work can be readily extended to the multi-layer settings based on the recent progress in the generalization of overparameterized deep neural networks (See, e.g., [Cao et al., 2019], [Zou et al., 2019a], [Zou et al., 2019b], [Frei et al., 2019]).\\n\\n2. On independent sampling. We revise our abstract and introduction and highlight the independent sampling setting. See P.1 and P.2 of the revised work. Such an assumption is imposed for the ease of analysis, which can be relaxed to a weakly dependent setting with $\\\\beta$-mixing Markov chains. Specifically, drawing from the Markov Chain until it mixes leads to weakly dependent data that converges to the stationary distribution, which can be handled by standard techniques ([Bhandari et al., 2018]).\\n\\n3. On radius $R$ in the projection-free setting. The projection-free version of vanilla gradient descent requires the neural TD algorithm for the critic update. The radius $R$ comes from the neural TD algorithm, which is the Algorithm 2 (P. 27) in our work.\\n\\n4. Keeping $b_r$ fixed in training. Such an approach is the standard procedure in the recent analysis of overparameterized neural networks (see, e.g., [Allen-Zhu et al., 2018], [Arora et al., 2019]). According to [Allen-Zhu et al., 2018], the joint training may cause confusion in some scenario, where the joint training is equivalent to the optimization solely over the last layer ([Daniely, 2017]). Besides, based on the recent progress in the generalization of deep neural networks ([Cao et al., 2019]), our work can be readily extended to the joint training regime.\\n\\n5. Existence of stationary point. The existence of a stationary point is guaranteed in our setting since the parameter set is bounded and closed, and the objective is assumed to be smooth. A smooth function has a minimum on the closed and bounded domain, which is a stationary point.\\n\\n[Cao et al., 2019]: Yuan Cao, Quanquan Gu. (2019) Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks.\\n\\n[Zou et al., 2019a]: Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu. (2019) Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks.\\n\\n[Frei et al., 2019]: Spencer Frei, Yuan Cao, Quanquan Gu. (2019) Algorithm-Dependent Generalization Bounds for Overparameterized Deep Residual Networks.\\n\\n[Zou et al., 2019b]: Difan Zou, Quanquan Gu. (2019) An Improved Analysis of Training Over-parameterized Deep Neural Networks.\\n\\n[Bhandari et al., 2018]: Jalaj Bhandari, Daniel Russo, Raghav Singal. (2018) A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation.\\n\\n[Allen-Zhu et al., 2018]: Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. (2018) A Convergence Theory for Deep Learning via Over-Parameterization.\\n\\n[Arora et al., 2019]: Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang. (2019) Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks.\"}",
"{\"title\": \"Reply to Reviewer 2. (continued)\", \"comment\": \"[Agarwal et al., 2019]: Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan. (2019) Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes.\\n\\n[Sutton et al., 2000]: Richard S. Sutton, David McAllester, Satinder Singh, Yishay Mansour. (2000) Policy Gradient Methods for Reinforcement Learning with Function Approximation.\\n\\n[Cai et al., 2019]: Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang. (2019) Neural Temporal-Difference Learning Converges to Global Optima.\\n\\n[Martens, J. et al., 2015]: Martens, J. and Grosse, R. (2015) Optimizing neural networks with kronecker-factored approximate curvature.\\n\\n[Farahmand et al., 2016]: Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesv\\\\'ari, Shie Mannor. (2016) Regularized Policy Iteration with Nonparametric Function Spaces.\\n\\n[Bhandari et al., 2018]: Jalaj Bhandari, Daniel Russo, Raghav Singal. (2018) A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation.\\n\\n[Allen-Zhu et al., 2018]: Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. (2018) A Convergence Theory for Deep Learning via Over-Parameterization.\\n\\n[Cao et al., 2019]: Yuan Cao, Quanquan Gu. (2019) Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks.\\n\\n[Zou et al., 2019a]: Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu. (2019) Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks.\\n\\n[Frei et al., 2019]: Spencer Frei, Yuan Cao, Quanquan Gu. (2019) Algorithm-Dependent Generalization Bounds for Overparameterized Deep Residual Networks.\\n\\n[Zou et al., 2019b]: Difan Zou, Quanquan Gu. (2019) An Improved Analysis of Training Over-parameterized Deep Neural Networks.\\n\\n[Chizat et al., 2018]: Lenaic Chizat, Francis Bach. (2018) On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport.\\n\\n [Jacot et al., 2018]: Arthur Jacot, Franck Gabriel, Cl\\\\'ement Hongler. (2018) Neural Tangent Kernel: Convergence and Generalization in Neural Networks.\\n\\n[Munos et al., 2008]: R\\\\'emi Munos, Csaba Szepesv\\\\'ari. (2008) Finite-Time Bounds for Fitted Value Iteration.\\n\\n[Chen et al., 2019]: Jinglin Chen, Nan Jiang. (2019) Information-Theoretic Considerations in Batch Reinforcement Learning.\\n\\n[Arora et al., 2019]: Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang. (2019) Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks.\"}",
"{\"title\": \"Reply to Reviewer 2. (continued)\", \"comment\": \"4. Inadequate notation of $\\\\hat\\\\nabla_\\\\theta J(\\\\pi_{\\\\theta_i})$. Thank you for the suggestion on notation. We keep the notation for simplicity, and add remarks to clarify that $\\\\hat\\\\nabla_\\\\theta J(\\\\pi_{\\\\theta_i})$ depends on $\\\\omega_i$. See P.6 of the revised version.\\n\\n5. On the temperature parameter. Our work abuses the term \\\"temperature parameter.\\\" Recall that our parameterization takes the form of $\\\\pi\\\\propto \\\\exp(\\\\tau\\\\cdot f)$. Under standard terminology, the temperature parameter under such a parameterization should be $1/\\\\tau$. When $\\\\tau$ grows, the policy will become a deterministic policy eventually, which is the optimal policy. We clarify the term abuse in the definition of policy parameterization (see P.5 of the revised work) and we thank the reviewer for point it out.\\n\\n6. Solving (3.8) with a limited budget. Though it is not our primary concern, up to minor modifications, our analysis allows for approximately solving (3.8), which is the common practice of approximate second-order optimization ([Martens, J., 2015]). Meanwhile, the count $T$ in our work is the number of policy updates (which require solving (3.8) in each update), which is not the runtime of the algorithm.\\n\\n7. What is the function $\\\\iota(\\\\omega)$ ? The function $\\\\iota(\\\\omega)$ is arbitrary, which parameterizes the element of function class $\\\\mathcal F_{R, \\\\infty}$ together with $f_0$. Such a function characterizes the size of the reproducing kernel Hilbert space (RKHS) ball. See the revised version.\\n\\n8. On Assumption 4.1. In Assumption 4.1, we assume that the function class falls in an RKHS, which is known to be rich. Many previous works have similar assumptions. See, for e.g., Assumption A.6 of [Farahmand et al., 2016]. Such an assumption is standard in the literature of nonparametric analysis. %{\\\\color{red}Remove the following? which is realistic and is satisfied for rich function spaces such as RKHS.}\\n\\n9. On Assumption 4.2. Such an assumption holds for certain behavior policy $\\\\mu$ that is sufficiently explorative. If it holds in addition that for any policy $\\\\pi$, the density ratios of the corresponding stationary distributions and visitation measures over that of $\\\\mu$ has an upper bounded $L_2$-norm, then Assumption 4.2 holds by following the Cauchy-Schwartz inequality. Hence, Assumption 4.2 is in the same flavor as the concentrability assumptions, which is standard and required ([Munos et al., 2008], [Chen et al., 2019]), and is indeed an assumption over the transition kernel.\\n\\n10. Sampling the stationary distribution of Markov Chain. The analysis of TD algorithms allows for weakly dependent data. Specifically, drawing from the Markov Chain until it mixes leads to weakly dependent data that converges to the stationary distribution, which can be handled by standard techniques ([Bhandari et al., 2018]).\\n\\n11. Equation after (D.14). The equations after (D.14) shows that (D.14) leads to (D.15), which follows from the direct calculation. See P.22 for the revised presentation.\\n\\n12. Understanding $u_{\\\\hat \\\\theta}$. The optimality of policy gradient hinges on two facts, which are the representation power of the function parameterization and the mismatch between the given policy and the optimal policy. The mismatch is characterized by the density ratios and is the major effect among the components of $u_{\\\\hat \\\\theta}$. 
In contrast, the extra term $\\\\phi_{\\\\hat\\\\theta}^\\\\top \\\\hat\\\\theta$ in $u_{\\\\hat \\\\theta}$ is a remainder that appears due to our analysis technique.\"}",
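For reference, the softmax policy parameterization discussed in point 5, written out from the reply's $\pi \propto \exp(\tau \cdot f)$; the explicit normalization over actions below is the standard form and is an assumption here, not quoted from the paper:

$$\pi_\theta(a \mid s) = \frac{\exp\big(\tau\, f_\theta(s,a)\big)}{\sum_{a'} \exp\big(\tau\, f_\theta(s,a')\big)}.$$

As $\tau \to \infty$, the policy concentrates on $\arg\max_a f_\theta(s,a)$ and becomes deterministic, consistent with the reply's remark that the conventional temperature under this parameterization is $1/\tau$.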
"{\"title\": \"Reply to Reviewer 2 and clarification for a few misclaims in the review.\", \"comment\": \"We appreciate the valuable review and suggestions. We have revised our work accordingly.\\n\\nFirst, we would like to point out that the reviewer seems to have made a mistake by stating that:\\n\\n\\\"In the related work, the authors distinguish their paper from that of Agarwal et al. (2019) by claiming that they are studying the non-tabular setting and they use the actor-critic scheme. However, the first claim is incorrect because Agarwal et al. (2019) also studied the non-tabular setting (see Section 6 in Agarwal et al. (2019)) and proved an $O(1/\\\\sqrt{T})$ convergence rate.\\\"\\n\\nSuch a statement is unsubstantiated. In fact, on page 3 of our submission file, we have acknowledged the results in [Agarwal et al., 2019] on the non-tabular setting. Specifically, we wrote:\\n\\n\\\" In independent work, Agarwal et al. (2019) prove that vanilla policy gradient and natural policy gradient converge to globally optimal policies at $1/\\\\sqrt{T} $-rates in the tabular and linear setting.\\\"\\n\\nHere, by \\\"linear setting\\\", we mean that they parametrize the value function using linear functions of the score function $\\\\nabla_{\\\\theta} \\\\log \\\\pi_{\\\\theta}$, which is known as the compatible features [Sutton et al., 2000]. Thus, their policy evaluation problem is only for linear value functions, and results such as [Bhandari et al, 2018] are readily applicable. In contrast, we parametrize the value function using neural networks, which would incur an unremovable bias in the policy gradient due to incompatibility. \\n\\nIn what follows, we address your concerns in detail.\\n\\n1. A comparison between our work and [Agarwal et al., 2019]: *simultaneous works*. According to our personal communication record with one of the authors of [Agarwal et al., 2019] (available upon request), our work and [Agarwal et al, 2019] are simultaneous works. Indeed, the first version of [Agarwal et al., 2019] is released on August 1st, 2019, the current version is released on August 29th, 2019, whereas our work is first released on August 29th, 2019 (arXiv link omitted here for double-blind review), and the deadline of ICLR 2020 submission is on September 27th, 2019. \\n \\nBesides, our work is different from [Agarwal et al., 2019] in many aspects. First, they require policy smoothness (Assumption 6.2), which does not cover our case, since the score function ($\\\\nabla_\\\\theta \\\\log\\\\pi_\\\\theta$) has indicators when parameterized using the ReLU activation function. \\n \\nSecond, when nonlinear functions are used to represent the value functions, we are faced with the compatibility issue, where the parametrization of value functions leads to an unremovable bias of the policy gradient. We directly tackle this challenge by proving that shared neural network architecture and random initialization of the weights lead to approximately compatible function approximations under the overparameterized regime. \\n \\nMoreover, we also explicitly quantify the effect of such approximate compatibility on the convergence and global optimality of policy gradient algorithms. In contrast with our work, [Agarwal et al., 2019] studies the restricted function class, where the value functions are parameterized by linear compatible functions (which is also suggested by [Sutton et al., 2000]).\\n\\n2. 
A comparison between our work, [Cai et al., 2019], and other literature on overparameterized neural networks: different objectives. Our work has different objectives from [Cai et al., 2019]. The main objective of our work is to understand the policy gradient and natural policy gradient under the overparameterized neural-network regime. In contrast, [Cai et al., 2019] analyze the neural TD algorithm, which serves as the policy evaluation step of the algorithms we analyze. Both the analysis in [Cai et al., 2019] and the analysis in overparameterized neural networks do not carry over directly to our work, as the analysis of neural TD and stochastic gradient descent is different from that of policy gradient algorithms. \\n\\nBesides, combining the analysis in [Agarwal et al., 2019] and [Cai et al., 2019] does not directly lead to our results, as [Agarwal et al., 2019] does not cover the neural policy class with the ReLU activation function and they require the value functions to be linear in the score function to achieve compatibility.\\n\\n3. Extremely large requirement on the network width $m$. Even in the supervised learning setting, most of the analysis ( [Jacot et al., 2018], [Allen-Zhu et al., 2018], [Du et al., 2018], [Zou et al., 2019a,b], [Chizat et al., 2018], [Arora et al., 2019]) need large network width for the corresponding generalization error guarantees. Our analysis hinges on such a regime. Therefore, our convergence and optimality guarantees also need a large network width $m$. Meanwhile, the radius $R$ here can be treated as a constant, which corresponds to the capacity of the neural network class.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies policy gradient where the policy is parameterized by an extremely wide neural network. The authors assume that the number of nodes ($m$) in the network is extremely large ($T^{8}R^{18}$, $T$ is the total runtime and $R$ is the radius of the function class that the network falls into), and they restrict their convergence analysis of the policy gradient algorithm in a particular function class and claims that the approximation error between the true network and the function class goes to zero when $m$ is large. The paper is written in a rigorous way and the presentation is mostly clear. I have some concerns about many of the assumptions made across the paper that are not explained or verified. This may potentially further decrease the usefulness of the analysis in this paper though the theoretical result with $m=T^8$ is already very impractical. Another major issue with this paper is that the theoretical analysis is not novel in terms of bringing new insights and results to the field given many other papers including global convergence of policy gradient (Agarwal et al., 2019;), convergence of neural TD learning (Cai et el., 2019), theory for overparameterized neural networks (Jacot et al., 2018; Allen-Zhu et al., 2018a;b; Du et al., 2018a;b; Zou et al., 2018; Chizat and Bach, 2018; Jacot et al., 2018, etc.) and other very similar papers.\\n\\nThere is a prior work by Agarwal et al. (2019) that proves the global convergence of both vanilla policy gradient and natural policy gradient methods. In the related work, the authors distinguish their paper from that of Agarwal et al. (2019) by claiming that they are studying the non-tabular setting and they use the actor-critic scheme. However, the first claim is incorrect because Agarwal et al. (2019) also studied the non-tabular setting (see Section 6 in Agarwal et al. (2019)) and proved an O(1/\\\\sqrt{T}) convergence rate. Moreover, the actor-critic scheme in this paper is just a trivial modification of the nonlinear policy gradient method by calling existing result for TD learning in Cai et al. (2019). Therefore, the contribution of this paper is not so clear given existing papers.\\n\\nIn Algorithm 1, the policy gradient estimator $\\\\widehat{\\\\nabla} J(\\\\pi_{\\\\theta_i})$ also depends on the critic parameter $\\\\omega_i$. It is better to show this dependency in the notation as well.\\n\\nIn Algorithm 1, the temperature parameter $\\\\tau_i$ is updated in natural policy gradient but not in vanilla policy gradient. It seems that $\\\\tau_i$ increases linearly with $i$, which makes the policy defined in eq (3.1) close to a uniform distribution when the time horizon goes to infinity. This seems to offset the update of parameter $\\\\theta_i$.\\n\\nIn the update of natural policy gradient, solving eq (3.8) is really expensive in computation, especially in the setting of this paper where $m$ is chosen as $T^{8}$. It seems impossible to obtain a reasonable solution within the claimed $O(1/T^{1/4})$ runtime.\\n\\nWhat is the function $\\\\iota(w)$ in Assumption 4.1?\\n\\nIt would be better for the authors to discuss more about Assumption 4.1. 
It is unknown why the action-value function $Q^{\\\\pi}$ for all policy can fall into this class. \\n\\nThe equation in Assumption 4.2 is exceeding the paper margin. Please make sure the paper follows the format guidelines.\\n\\nAssumption 4.2 seems to be very strong. The remark after the assumption says that this condition is made on the Markov transition kernel. However, this may not be true since the assumption needs to hold for any two arbitrary policies. It is not known what kind of transition kernel $\\\\mathcal{P}$ will satisfy this.\\n\\nIn each step of the neural policy gradient (Algorithm 1), the authors need to call a TD learning (Algorithm 2) to approximate the unknown action-value function $Q_{\\\\omega_i}$ associated with the policy $\\\\pi_{\\\\theta_i}$ at the $i$-th step. It seems that in the learning process of ALgorithm 2, at each iteration, it samples independent data from the stationary state-action distribution which is unknown. \\n\\nIn the proof of Theorem 4.8, it seems that eq (D.14) and (D.15) are the same. Why it needs to be proved twice? In addition, why the equation after (D.14) holds?\\n\\nThe authors should provide more details about the function $u_{\\\\hat \\\\theta}$ defined in eq (4.4), which seems to approximate the critic function. Specifically, why are there the derivative terms instead of just the inner product term in eq (4.4).\", \"other_comments\": \"In the last sentence of Section 3.1, \\u201c... approximate aompatible function approximation ...\\u201d\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"[Summary]\\nThis paper studies the convergence of actor-critic algorithms with two-layer neural networks under iid assumption. Theoretical results show that, in the aforementioned setting, policy gradient and natural policy gradient converge to a stationary point at a sublinear rate and natural policy gradient's solution is globally optimal. \\n\\n[Decision]\\nI recommend accepting this paper. While these results may not have immediate practical interest, the analysis is an important step in understanding the behavior of actor-critic algorithms with neural networks. The final revision needs to be more clear on the limiting assumptions and include a conclusion section that assembles the results.\\n\\n[Comments]\\nThe first important assumption is the architecture of the neural network. The results in the paper consider two-layer neural networks but the abstract implies that the analysis applies to general neural networks.\\n\\nThe second assumption is that the state-action pairs are sampled iid from the policy's stationary distribution. In reality, these samples are either gathered online (and are therefore temporally correlated), or from a buffer that is also affected by previous policies. The description of results in the abstract and introduction should clarify this setting.\\n\\nSection G in the appendix shows the analysis for the projection-free method. The projection radius (R) does not seem to play a role in the new algorithm, but the convergence rate still depends on R. Does R have a different definition in this context?\\n\\nSubsection 3.1 says \\\"without loss of generality, we keep b_r fixed at the initial parameter throughout training and only update W.\\\" Whether this modification affects the optimization of the neural network and its convergence rate is not obvious to me.\\n\\nThe paragraph above Theorem 4.7 defines the stationary point. Is this point guaranteed to, or assumed to, exist?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper provides theoretical studies for neural policy gradient descents for reinforcement learning problems. The authors prove global optimality and rates of convergence of neural natural/vanilla policy gradient. Their results rely on the key factor for \\\"compatibility\\\" between the actor and critic. This is ensured by sharing neural architectures and random initializations across the actor and critic.\\n\\nThe paper is well written with clear derivations. I suggest the publication of this paper.\"}"
]
} |
rJgffkSFPS | Multi-objective Neural Architecture Search via Predictive Network Performance Optimization | [
"Han Shi",
"Renjie Pi",
"Hang Xu",
"Zhenguo Li",
"James T. Kwok",
"Tong Zhang"
] | Neural Architecture Search (NAS) has shown great potential in finding a better neural network design than human design. Sample-based NAS is the most fundamental method, aiming at exploring the search space and evaluating the most promising architecture. However, few works have focused on improving the sampling efficiency for multi-objective NAS. Inspired by the nature of the graph structure of a neural network, we propose BOGCN-NAS, a NAS algorithm using Bayesian Optimization with a Graph Convolutional Network (GCN) predictor. Specifically, we apply GCN as a surrogate model to adaptively discover and incorporate node structure to approximate the performance of the architecture. For NAS-oriented tasks, we also design a weighted loss focusing on architectures with high performance. Our method further considers an efficient multi-objective search which can be flexibly injected into any sample-based NAS pipeline to efficiently find the best speed/accuracy trade-off. Extensive experiments are conducted to verify the effectiveness of our method over many competing methods, e.g. 128.4x more efficient than Random Search and 7.8x more efficient than the previous SOTA LaNAS for finding the best architecture on the largest NAS dataset NasBench-101. | [
"nas",
"efficient",
"neural architecture search",
"gcn",
"great potentials",
"human design",
"fundamental"
] | Reject | https://openreview.net/pdf?id=rJgffkSFPS | https://openreview.net/forum?id=rJgffkSFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fQF0GfyaJ",
"B1xCIu0jjr",
"HkexsmCsoH",
"SylorXRjor",
"SJxkCG0siS",
"BJxK3oT85B",
"H1xAiuEB5H",
"ByxRlhQrcr",
"S1eGfMLzcH",
"HklMcrr0YB",
"Hyxav_e0FB",
"r1l5Hf5ptH",
"BJlWsqrpKS",
"SkeABfSpYr",
"SylLqFmiYB",
"rJgzqjHn_S",
"HklXcUF7dr",
"rkgJFBgZuS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798726822,
1573804118132,
1573802903720,
1573802819264,
1573802694521,
1572424625211,
1572321445761,
1572318197642,
1572131338109,
1571865994512,
1571846244705,
1571820097606,
1571801752812,
1571799621793,
1571662222036,
1570687882392,
1570113162918,
1569944950997
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"~Linnan_Wang1"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1572/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1572/AnonReviewer3"
],
[
"~Linnan_Wang1"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"~Linnan_Wang1"
],
[
"ICLR.cc/2020/Conference/Paper1572/AnonReviewer2"
],
[
"~Scarlett_Li1"
],
[
"ICLR.cc/2020/Conference/Paper1572/Authors"
],
[
"~Linnan_Wang1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to use Graph Convolutional Networks (GCNs) in Bayesian optimization for neural architecture search. While the paper title includes multi-objective, this component appears to only be a posthoc evaluation of the Pareto front of networks evaluated using a single-objective search -- this could be performed for any method that evaluates more than one network. Performance on NAS-Bench-101 appears to be very good.\\n\\nIn the private discussion of reviewers and AC, several issues were raised, including whether the approach is compared fairly to LaNAS and whether the GCN will predict well for large search spaces. Also, unfortunately, no code is provided, making it unclear whether the work is reproducible. The reviewers unanimously agreed on a weak rejection score.\\n\\nI concur with this assessment and therefore recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to review #3\", \"comment\": \"Thanks for your interests and we appreciate your constructive comments. We hope our following clarifications can address your concerns.\\n\\n1. \\\"whether the next model is chosen given the current prediction model, how do you pick the next candidate models in open domain setting?\\\"\\n- As mentioned in section 3.4, we randomly sample a subspace as candidate pool if the search space is huge. For NASBench, since the search space only contains 420K models, which can be inferred easily with very little cost (less than 0.01 seconds), we take the whole search space as our candidate pool. To simulate the situation in a very large search space, we also test our algorithm with different pool sampling ratio (1, 0.1, 0.01, 0.001) and the result is shown as follows. Even though we select a subspace for prediction, the improvement is still significant.\\n---------------------------------\\nratio NASBench LSTM\\n1 1465.4 558.8\\n0.1 1564.6 1483.2\\n0.01 2078.8 1952.4\\n0.001 4004.4 2984.0 \\n----------------------------------\\n\\n2. \\\"It looks like Eqn. 9 biases the training heavily towards high accuracy region, which is a hack\\\"\\n- The main purpose of NAS is to search an architecture with high accuracy. Similar with MCTS in LaNAS, Our proposed loss is another way that focuses the search on the important regions. Even the weighted loss is removed, our method is still outperform than previous SOTA - LaNAS (shown in appendices section A.2).\\n\\n3. \\\"I wonder whether there is a more problem-independent way of doing it\\\"\\n- Our proposed weighted loss is a possible way to solve searching problem with attention, which is problem-independent. Our idea is inspired by Focal Loss [1], while Focal Loss is used for classification and our weighted loss is used for regression.\\n\\n4. \\\"Do you have open domain search research in single-objective search?\\\"\\n- Yes, the model we find can achieve 0.783 accuracy on ImageNet dataset. The model is the same as M3, but we can achieve it within less samples (400 samples) for single-objective search. We add experiment in DARTS search space, where each cell contains 4 blocks (11 nodes) and 8 possible operations. After sampling 200 models, we picked out 2 models with best test accuracy 97.26%, 97.39% on cifar10 respectively. The experiment details are appended in the updated paper\\n\\n5. \\\"Why not use NasNet architecture for a fair comparison with other NAS papers?\\\"\\n- We add experiment in DARTS search space, where each cell contains 4 blocks (11 nodes) and 8 possible operations. We picked out 2 models (V1 & V2) with best test accuracy on cifar10 after sampling 200 and 400 models respectively. The experiment details are appended in the updated paper.\\n--------------------------------------------------------------------------------------------------------\\nModel \\tParams\\tTop-1 err\\tNo. 
of samples truly evaluated\\n-------------------------------------------------------------------------------------------------------\\nNASNet-A+cutout\\t 3.3 M\\t 2.65\\t 20000\\nAmoebaNet-B+cutout\\t 2.8 M\\t 2.55\\t 27000\\nPNASNet-5\\t 3.2 M\\t 3.41\\t 1160\\nNAO\\t 10.6 M \\t 3.18\\t 1000\\nENAS+cutout \\t 4.6 M\\t 2.89\\t -\\nDARTS+cutout\\t 3.3 M\\t 2.76\\t -\\nBayesNAS+cutout\\t 3.4 M\\t 2.81\\t -\\nASNG-NAS+cutout\\t 3.9 M\\t 2.83\\t -\\n--------------------------------------------------------------------------------------------------------\\nBOGCN+cutout (V1)\\t 3.1M \\t 2.74\\t 200\\nBOGCN+cutout (V2)\\t 3.5M \\t 2.61\\t 400\\n---------------------------------------------------------------------------------------------------------\\n\\n6. \\\"I wonder how much roles the proposed methods (BO + GCN) play during search?\\\"\\n- We indeed performed ablation study in appendices section (A.1 & A.2) and the number of samples evaluated is shown below. As can be seen, the improvement of BO and GCN is significant. \\n----------------------------------------------\\n MLP BOMLP GCN BOGCN\\n4527.0 4042.25 2860.6 1465.4\\n----------------------------------------------\\nOn NasBench-101, compared to using only the GCN predictor, BOGCN finds global optimal with 1395.2 fewer samples; compared to using BOMLP, BOGCN finds global optimal with 2576.85 fewer samples, which proves the importance of BO and GCN respectively.\\n\\n7. \\\"Also what do you mean by \\u201cwithout BO\\u201d? Do you only predict the mean and assume all variance is constant?\\\"\\n- Yes, we remove BO part and select architectures only based on predictors. In this scenario, we use point estimation to predict the accuracy of each model, thus not taking variance into account. We make it clear in the updated version.\\n\\n[1] Focal Loss for Dense Object Detection. 2017\"}",
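To make points 2-3 above concrete, here is a minimal numpy sketch of one plausible form of such a Focal-Loss-inspired weighted regression loss; the exponential weighting scheme and the temperature value tau are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def weighted_regression_loss(pred, target, tau=0.1):
        # Hypothetical exponentially weighted MSE: samples whose ground-truth
        # accuracy is higher get exponentially larger weights, so the predictor
        # focuses on the high-accuracy region that NAS cares about. The exact
        # weighting in the paper may differ; tau is an assumed temperature.
        weights = np.exp(target / tau)
        weights = weights / weights.sum()  # normalize the weights to sum to 1
        return float(np.sum(weights * (pred - target) ** 2))

    # toy usage: the error on the most accurate model dominates the loss
    pred = np.array([0.90, 0.70, 0.50])
    target = np.array([0.94, 0.72, 0.48])
    print(weighted_regression_loss(pred, target))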
"{\"title\": \"Reply to review #2\", \"comment\": \"Thank you for your comments. We address your main concerns as follows.\\n\\n1. \\\"the key point of the paper has nothing to do with GCN\\\"\\n- We believe GCN predictor is one of the key points of our paper. As we mentioned, GCN predictor is proposed as the surrogate model for Bayesian Optimization. Different from popular Gaussian Processes, GCN predictor can obtain the architecture embeddings better as shown in the experiments in Figure 2.\\n\\n2. \\\"the key point of the paper has nothing to do with multi-objective\\\"\\n- Multi-objective search is not discussed in some well-known NAS papers such as DARTS, ENAS, NAO, etc. We believe it is not fully explored in the NAS literatures. We have shown that our method is much better than other multi-objective searching algorithms by adding more baseline comparisons in Section 4.3.\", \"https\": \"//drive.google.com/file/d/1IOr511FIjCIfIxeqLn1JIZmV0FKuog-K/view?usp=sharing\\n\\n10. \\\"Algorithm 1 uses Pareto front, which does not exist when doing experiments on single-objective search.\\\"\\n- Single-objective problem is compatible with our definition of multi-objective problem (Section 2.1) where the Pareto front is only constituted by one architecture. We describe single-objective case specifically in the updated version.\\n\\n[1] A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. 2010\\n[2] Semi-Supervised Classification with Graph Convolutional Networks. 2017\\n[3] Sample-Efficient Neural Architecture Search by Learning Action Space. 2019\\n[4] AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search. 2019\"}",
"{\"title\": \"Reply to Review #1 [1/2]\", \"comment\": \"Thank you for spending time to read our paper, we find your comments very insightful and constructive. We improved our paper according to your reviews.\\n\\n1. \\\"It's not clear how much the GCN generalizes in being able to encode arbitrarily large architectures\\\"\\n- GCN can handle a large number of nodes. For instance, the Pubmed citation dataset contains 19,717 nodes and GCN can encode it pretty well [1]. One of the most notable features of GCN is its ability to encode graphs with varying sizes, as long as the possible operations are the same. We also added experiment in DARTS search space, where each cell contains 11 nodes and 8 possible operations. After sampling 200 models, we picked out 2 models with best test accuracy 97.26%, 97.39% respectively. The experiment details are appended in the updated paper.\\n\\n2. \\\"For large discrete combinatorial search spaces, this approach will not scale.\\\"\\n- During the open-domain experiment on DARTS search space, which contains over $10^{12}$ architectures, we re-sample 1M architectures randomly after fully training every 5 models, we found that the time used to predict the architectures is negligible(less than 0.01 seconds). We also performed experiments on NASBench-101, sampling 10%, 1% and 0.1% of the search space respectively, the results are still much better than other methods. It shows that the predictive power of BOGCN is quite reliable, it can pick out good candidates as well as exploring new ones with only a few fully-trained models.\\n\\n3. \\u201cmulti-objective opt only seems like a minor/tangential contribution.\\\"\\n- Multi-objective search is not mentioned in some well-known NAS paper like DARTS, ENAS, NAO, etc. We believed it is not fully explored in the NAS literatures. We have shown that our method is better than other multi-objective searching algorthim by adding more comparision in Section 4.3. For visualization: https://drive.google.com/file/d/1xs4SNva5hdd7Rhaok15cPP7UXohVi5QR/view?usp=sharing\\n\\n4. \\u201cThere\\u2019s a pretty strong correlation between the parameter and accuracy\\u201d\\n- We believe model accuracy and number of parameters don't necessarily have a strong correlation. It can be seen in Figure 4, many models share the same number of parameters while their accuracies are quite different. The correlation between accuracy and No.of parameters in Figure 4 is only 0.145, which is a pretty weak correlation.\"}",
"{\"title\": \"Reply to Review #1 [2/2]\", \"comment\": \"Other Concerns:\\n5. \\\"Table 1 has correlations using 1000 training architectures. Why 1000? Why not 50 (that's how sec 4.2 is initialized).\\\"\\n- In table 1, our purpose was to show that GCN's prediction accuracy is better comparing to other methods given few data. \\n\\nWe have done experiments with 50, 550, 1050 ... 7050 training samples to simulate the actual search process. It can be found that our methods also works well in these situations.\\n\\n-------------------------------------------\\ntraining archs GCN MLP LSTM\\n50 0.385 0.082 0.209\\n550 0.570 0.329 0.352\\n1050 0.597 0.414 0.472\\n...\\n7050 0.692 0.573 0.504\\n--------------------------------------------\\n\\n6. \\\"Table 1 lists the number of params that the predictor uses. Why is this important? How about comparing with a linear regressor?\\\"\\n\\n- We think the number of parameters of predictor is important, and predictors with fewer parameters are more efficient due to the following reasons:\\n\\n- Given fewer training data, predictors with more parameters tend to under-fit. In practice, we can only have very few trained models in the beginning.\\n\\n- The latency caused during prediction is shorter, which allows BOGCN to predict more models each time.\\n\\n- We have followed your advice and done the experiment with a linear regressor. The correlation is only 0.34, which is uncompetitive becasue it cannot handle non-linear features.\\n\\n-------------------------------------------\\nGCN MLP LSTM Linear_Regressor\\n0.61 0.40 0.46 0.34\\n--------------------------------------------\\n\\n7. \\\"The results in Sec 4.3 are using random as the only baseline. This is a pretty weak baseline.\\\"\\n- As pointed out by [2, 3, 4], random search may not be a weak baseline in NAS problem. However, we agree with you about the lack of comparisons in multi-objective tasks. We have performed experiments using other methods and appended them in the paper.\\n\\n8. \\\"In sec 4.4, the authors pick models M1, M2 and M3 as candidate examples.. How were these chosen?\\\"\\n- We fully train every model on the estimated Pareto front and compare them with ResNets. Then we can pick three models (M1, M2, M3), which can dominate ResNets. We have elaborated more in detail in the paper. \\n\\n9. \\\"Sec 4.5 transfer learning results are pretty weak. Transfer across datasets is much more interesting e.g. between ImageNet and Cifar-10.\\\"\\n- The motivation of this experiment is to provide evidence for expanding search space dynamically. We want to show that GCN can adapt to architecture cells with a varying number of nodes. Using this feature, we could progressively search architectures: start with a small number of nodes and gradually grow to larger architectures. We will consider doing transfer learning across different datasets, but this kind of experiment is very time consuming, thus can only be further explored in future works.\\n\\n[1] Semi-Supervised Classification with Graph Convolutional Networks. 2017\\n[2] Evaluating The Search Phase of Neural Architecture Search. 2019\\n[3] Random Search and Reproducibility for Neural Architecture Search. 2019\\n[4] Exploring Randomly Wired Neural Networks for Image Recognition. 2019\"}",
"{\"title\": \"Your MLP code can be improved but still inferior to BOGCN\", \"comment\": \"Thank you very much for your interest, we share our findings as follows.\\n\\nWe have performed the experiment using MLP with your setting, and the result is around 6000 samples to obtain the optimal architecture. We also tried stricter condition (random sample ratio equals 0.1%), and BOGCN can still reach the optimal model with around 4000 samples.\\n\\nIf we decrease the random sample ratio (like from 1 to 1%), we can select less proposed sampling models (variable \\\"n\\\" in \\\"propose_location\\\" function in your code). Since BOGCN is accurate enough such that the predicted accuracy of the optimal architecture always belongs to front-rank positions, the shrinking search space means reduced proposed samples every iteration.\\n\\nWe hope this answers your question.\"}",
"{\"title\": \"Thank you for your reply\", \"comment\": \"Thank you for the detailed explanation, and it is truly impressive to see BOGCN still keeps a similar performance by only predicting 1% random samples. Could you please clarify the performance (i.e. samples to global optimum) of the case using MLP to predict 1% random samples? I really want to know whether this is due to the advantages of GCN representation plus BO or just because there is a discrepancy in the implementation which leads to ~ 10^4 sample complexity in MLP on my side. Thank you.\", \"my_implementation\": \"https://github.com/linnanwang/MLP-NASBench-101\"}",
"{\"title\": \"The performance of our method is still far better than baselines with your setting\", \"comment\": \"We are happy you could replicate our result after modifying your code, we answer your questions in the following:\\n\\nDifferent from one-shot methods, we only store two small matrices (adjacent matrix and feature matrix) instead of the whole architecture with corresponding weights. Thus, we don't have the similar storage problem as one-shot methods.\\n\\nFor NASBench dataset, the inference time of predictor for all architectures (420K) is negligible (less than 0.01 seconds). Furthermore, it only occupies 8GB of GPU memory. Thus, it is feasible and logical to fully utilize the prediction power of BOGCN.\\n\\nFor larger search space, one solution is using sampling methods as we said in algorithm section. Following your suggestion, we randomly sample 1% architectures from the search space for performance prediction. The overall performance of our BOGCN can still find the optimal model with around 2000 samples. After fully training a few hundred architectures, BOGCN could pick out the best architecture as long as it is included in the 1% sample. This experiment further shows the importance of the superior accuracy of BOGCN, other than that, the frequency of sampling also plays an important part in our algorithm. Even though random sampling is good enough, an alternative sampling method can be evolutionary algorithm.\\n\\nAnother solution is to perform prediction in mini-batches. We have tested our predictor on even larger search space (around half a billion models), the total inference time is around 60 seconds and it consumes only 22GB GPU memory.\\n\\nThanks for your comments, we hope this addresses your concern.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"--Summary--\\nThe authors present an algorithm BOGCN-NAS which combines bayesian optimization and GCNs for searching over NN architectures. The authors emphasize that this method can be used for multi-objective optimization and run experiments over NAS-Bench, LSTM-12K and ResNet models. \\n\\n--Method--\\n\\nMethodologically, the contribution is somewhat weak. The main technical contribution is to use a GCN to get a global representation of a graph, which can then be used for downstream regression tasks such as predicting accuracy. It\\u2019s not clear how much the GCN generalizes in being able to encode arbitrarily large architectures. The two main examples offered are NAS-Bench and LSTM-12K focus on optimizing cell architectures which contain a handful of nodes e.g. (5 in the case of NAS-101).\", \"graph_embeddings\": \"The authors have not considered other graph embeddings to use in their Bayesian regression setup. E.g. see https://arxiv.org/pdf/1903.11835.pdf for a list.\", \"bayes_opt\": \"In Algo1, step 5. The authors randomly sample a number of architectures in order to calculate EI scores on them. For large discrete combinatorial search spaces, this approach will not scale.\", \"multi_objective_optimization\": \"It\\u2019s not clear why GCNs or BO is required for this. Any predictor that generates multiple metrics could substitute in order to create a pareto-curve. Even multi-objective RL based approaches could suffice. Thus multi-objective opt only seems like a minor/tangential contribution.\\n\\n--Experiments--\\nThe main claim of the paper is that this approach works well for the multi-objective case. However, the results only look at two objectives #params vs accuracy. There\\u2019s a pretty strong correlation between the two. It\\u2019s unclear how the method generalizes when objectives are not correlated. The authors need to thoroughly demonstrate other objectives/find suitable benchmarks for the same as clearly NAS-101 will not suffice.\", \"other_concerns\": [\"Table 1 has correlations using 1000 training architectures. Why 1000? Why not 50 (that\\u2019s how sec 4.2 is initialized). Also, the correlation results are less impressive in Figure 9.\", \"Table 1 lists the number of params that the predictor uses. Why is this important? How about comparing with a linear regressor?\", \"The results in Sec 4.3 are using random as the only baseline. This is a pretty weak baseline.\", \"In sec 4.4, the authors pick models M1, M2 and M3 as candidate examples.. How were these chosen ?\", \"Sec 4.5 transfer learning results are pretty weak. Transfer across datasets is much more interesting e.g. between ImageNet and Cifar-10.\", \"Overall, this paper has some interesting results, which show that GCNs can be useful models to encode graph structured inputs. However, the methodological and experimental results can definitely be strengthened. The authors may consider the following:\", \"Address how GCNs can model and scale to general architecture spaces than a small number of nodes in a cell.\", \"Address how to sample better over combinatorial search spaces than random in the inner loop of BO.\", \"Strengthen MO-opt results. 
Use better baselines than random and different objectives than accuracy vs #params.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposed BOGCN-NAS that encodes current architecture with Graph convolutional network (GCN) and uses the feature extracted from GCN as the input to perform a Bayesian regression (predicting bias and variance, See Eqn. 5-6). They use Bayesian Optimization to pick the most promising next model with Expected Improvement, train it and take its resulting accuracy/latency as an additional training sample, and repeat.\\n\\nThey tested the framework on both single-objective and multi-objective tasks. On the single-objective (accuracy task). They tested it on NasBench and LSTM-12K, two NAS datasets with pre-trained models and their performance. They obtained very good performance on both, beating LaNAS (previous SoTA) by 7.8x higher sample efficiency. On multiple-objective, they show higher efficiency in finding Pareto frontier models, compared to random search. \\n\\nOne main question I have is whether the next model is chosen given the current prediction model? For NasBench, did you run your predictor for all (420k minus explored) models and pick the one that maximizes Expected Improvement? Note that LaNAS is more efficient in that manner by sampling directly on polytopes formed by linear constraints. If so, how do you pick the next candidate models in open domain setting?\\n\\nIt looks like Eqn. 9 biases the training heavily towards high accuracy region, which is a hack. Although in the Appendix (Fig. 7) the authors have already perform some analysis on the effect of different weight terms, I wonder whether there is a more problem-independent way of doing it. The MCTS in LaNAS is one way that automatically focuses the search on the important regions. Currently, the proposed approach might limit the usability of the proposed method to other situations when accuracy is no longer that important. \\n\\nThe performance is really impressive in the NasBench and LSTM dataset. The paper mentioned that \\u201cBO can really predict the precious model to explore next\\u201d but didn\\u2019t provide an examples in the paper. I would whether the author could visualize it in the next revision, which would be very cool. \\n\\nDo you have open domain search research in single-objective search? \\n\\nWhy not use NasNet architecture for a fair comparison with other NAS papers?\\n\\nIn Appendix, Fig. 6 shows that even without GCN and BO, a single MLP already achieves global optimal solution in LSTM-12K dataset with ~850 samples, already beating all the previous methods (Random, Reg Evolution, MCTS and LaNAS). If that\\u2019s the case, I wonder how much roles the proposed methods (BO + GCN) play during search? Also what do you mean by \\u201cwithout BO\\u201d? Do you only predict the mean and assume all variance is constant? \\n\\n=====Post Rebuttal======\\nI have read other reviewers' comments and the rebuttal. \\n\\nOne of the main problems in this paper is an unfair comparison against LaNAS. LaNAS only uses a single sample at each leaf, while they sample 100% to 0.1% of the models, evaluate them with the current BO model and find the best. For NasBench-101 with 420K models, even sampling 0.1% each time means ~400 samples and the performance (from the rebuttal) seems to degrade substantially from 100% case (1464.4->4004.4). 
This means that almost 3x more samples are needed, compared to what they claimed. \\n\\nI agree with the authors that calling BO function is super fast so maybe this is fine. However, on the open domain experiments, their performance is also not better than LaNAS+c/o, which they didn't list in the rebuttal. I listed it here:\\n\\nModel \\tParams \\tTop-1 err\\tNo. of samples truly evaluated\\n-----------------------------------------------------------------------------------------------------------------\\nBOGCN+cutout (V1)\\t 3.1M \\t 2.74\\t 200\\nBOGCN+cutout (V2)\\t 3.5M \\t 2.61\\t 400\\nLaNAS+c/o 3.2M 2.53\\u00b10.05 803\\n\\nOverall this paper is on the borderline. I don't mind if the paper gets rejected. For now I lower the score to 3.\"}",
"{\"title\": \"Thank you\", \"comment\": \"\\\"In the real setting, enumerating all models in the search space for prediction seems to be impractical. The previous approaches like Regularized Evolution, MCTS, Random Search, LaNAS, doesn't predict performance on all unseen architectures in the search space.\\n\\nI also did one experiments that show that predicting the performance of every sample is critical for your performance. If you draw 1% architectures from NASBench 101 to predict their performance, the overall performance of MLP quickly deteriorates to 2*10^4 samples to the global optimum. This means that #samples of prediction might contribute substantially to final performance. \\n\\nThat being said, more prediction can be very important in reducing the #architectures that are actually being trained, which is a very interesting discovery \\\"\", \"see_code_is_in_github\": \"https://github.com/linnanwang/MLP-NASBench-101\"}",
"{\"title\": \"You didn\\u2019t add activation functions in your MLP and didn't set gradients to zero every iteration\", \"comment\": \"We checked your code and made the following changes to your code:\\n1. We added Relu activation after the first layer and sigmoid after the second layer.\\n2. We have set optimizer gradient to 0 in every iteration during training.\\n3. We train the mlp predictor from scratch every time after sampling new data, number of epochs is set to be 150 and lr is 0.001.\\n\\nAfter making the changes to your code, we can find the optimal architecture within 5000 samples on Nasbench 101 without manually selecting random seed.\\n\\nThere is no motivation for us to pick random seeds for MLP since it is only a baseline, and we have not manually picked seeds throughout all of our experiments.\\n\\nWe are happy to give you the modified version of your code.\"}",
"{\"comment\": [\"Thanks for your comments.\", \"Following your idea using GBDT predictor with basic features for NAS, we have performed the experiment on NASbench [1] and compared it with using only GCN predictor (described in Appendices A.2). However, the result of GBDT predictor is poor. The model struggles to find the optimal architecture given 20000 samples while GCN can find the optimal just given 2500 samples.\", \"Intuitively, only given the number of each operation is not sufficient to uniquely determine a model, which causes many models to have the same predicted accuracy. Furthermore, the connections between those operations are also essential. The reason why GBDT is superior in your experiment may be that the search space is too small, and there aren\\u2019t many architectures with the same number of each operation.\", \"We have compared our proposed model with SOTA algorithm [2] described in section 4.2. Note that in the introduction section, we make it clear that one-shot models (like ENAS/DARTS/ONE-SHOT/PROXYLESSNAS) cannot find the optimal models because of weight sharing or continuous relaxation. And the motivation of our method is to find the optimal model with less cost. Also, following the evaluation method in previous papers [2, 3], it excludes one-shot methods because they just find the competitive model rather than optimal model.\", \"We totally agreed with your concern about high accuracy model. Therefore, we proposed Exponential Weighted Loss in section 3.5, which focuses more on high accuracy model (the ablation study is shown in Appendices A.3).\", \"We have described the practical NAS is an online learning progress in section 3.4. Thus in experiment, we started with 50 initialized data (available easily) and the final result is good enough. As you said, the predictor is not good initially, but it can predict quite well after a small number of samples.\", \"For section 4.1, we select three datasets (train/validation/test) randomly. Here we just want to prove the performance of predictors with few training data (1K) rather than total dataset (420K).\", \"[1] NAS-Bench-101: Towards Reproducible Neural Architecture Search. ICML 2019\", \"[2] Sample-Efficient Neural Architecture Search by Learning Action Space. arXiv preprint\", \"[3] AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search. arXiv preprint\"], \"title\": \"Our GCN predictor performs better than GBDT predictor\"}",
"{\"comment\": \"Hello there,\\n\\nI have implemented a MLP on NASBench, and the source code is available at: https://github.com/linnanwang/MLP-NASBench-101\\n\\nAfter running the code 100 times, I get an average number of 50000 samples to get the global optimum. This is far larger than your result ( 3000 ~ 4000 avg samples to the global optimum in Fig.6(a), MLP). It is possible that the discrepancy results from the hyper-parameters in my codes. If you find something can significantly improve my code, please advise.\\n\\nOtherwise, I suspect your results are not from different random seeds. Picking a good random seed that gives fewer samples to the global optimum makes the entire results unfair for comparison. Please clarify. Thank you.\", \"title\": \"Fail to replicate the Fig.6\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper provide a NAS algorithm using Bayesian Optimization with Graph Convolutional Network predictor. The method apply GCN as a surrogate model to adaptively discover and incorporate nodes structure to approximate the performance of the architecture. The method further considers an efficient multi-objective search which can be flexibly injected into any sample-based NAS pipelines to efficiently find the best speed/accuracy trade-off.\\n\\nThe paper is well-written. The experiments are abundant. However, the paper has following drawbacks that need to be further concerned:\\n\\n1.\\tIn my opinion, the key point of the paper has nothing to do with GCN or multi-objective. The important part is to use BO and EI to sample new architecture. However, no theoretical proof is provided to guarantee that the performance is getting better during while loop in Algorithm 1.\\n2.\\tEq.(9) focuses more on models with higher accuracy. However, those models with bad performance will be predicted inaccurately and may have a higher score than good models. For model with ground-truth near 0, arbitrary predicted score results in the same loss. Eq.(9) seems cannot prevent this situation from happening.\\n3.\\tTable 1 compare different architectures with GCN. However, the LSTM is the worst architecture used among the 3 different architectures in the original paper (Alpha-X), which makes the comparison unfair.\\n4.\\tTable 2 shows the number of architectures trained. However, the proposed method need to update GCN multiple times during searching, which makes the comparison unfair.\\n5.\\tTable 2 shows the number of training models before finding the best model. It is meaningless when used in reality, which often contains more than 10^10 different architectures and the best architecture is unknown. In my opinion, the performance of the top1 architecture predicted by the proposed method is much important.\\n6.\\tAlgorithm 1 uses Pareto front, which does not exist when doing experiments on single-objective search. More details should be clarified.\"}",
"{\"comment\": \"I have done some similar experiments to build a performance predictor. But I found if we using some basic feature of models(like the number of Conv, the number of operations), and use the GBDT to predict the performance of the model, it could beat the models that based on Neural Network(whatever GCN/LSTM/MLP) in limited data.\\n\\nSo could you do the simple experiments to prove the performance of your model? Could you share with us?\\n\\nAnd have you tried to compare your method with the STOA NAS methods(like ENAS/DARTS/ONE-SHOT/PROXYLESSNAS)? \\nBecause random search is not a strong baseline.\\nCould you share with us?\", \"another_question_is\": \"As we all know, the more data(arch) you have the better performance of preditor. \\nBut the practical NAS is a search progress, which means the predictor may collect many the training samples with low accuracy but fewer samples with high accuracy(or other metrics). Which means that the predictor may perform poorly on high accuracy model predictions(But this is the part we really care about). \\nAs this paper mentioned, \\\"We used 1000 architectures\\nin NASBench for training, 100 architectures for validation, and 10000 architectures for testing\\\". I wonder how to select the test sample? \\n\\nThank you.\", \"title\": \"Question about performance of predictors\"}",
"{\"comment\": \"Thanks for your comments.\\n\\nTable 1 is indeed consistent with Figure 2. For the LSTM plot, most points (lightest part) are below the line \\\"y=x\\\", while for the GCN plot, most points are close to this line. For better illustration, we will add the line \\\"y=x\\\" to Figure 2 in the final version.\", \"title\": \"Table 1 is consistent with Figure 2\"}",
"{\"comment\": \"The results are truly impressive, but I have a question, it will be great if the authors can clarify.\\n\\nIn Fig.2, it looks that LSTM actually is the best prediction model since most points are aligned with y = x. However, in your table 1. GCN (corr=0.819) greatly improved w.r.t MLP(0.522) and LSTM (0.4). Could you please explain why?\", \"title\": \"Interesting work\"}"
]
} |
rJgzzJHtDB | Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference | [
"Ting-Kuei Hu",
"Tianlong Chen",
"Haotao Wang",
"Zhangyang Wang"
] | Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019). Such a dilemma is shown to be rooted in the inherently higher sample complexity (Schmidt et al., 2018) and/or model capacity (Nakkiran, 2019) required for learning a high-accuracy and robust classifier. In view of that, given a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications. Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins? This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a “sweet point” in co-optimizing model accuracy, robustness, and efficiency. Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction. That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematic investigation. We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models. | [
"adversarial robustness",
"efficient inference"
] | Accept (Poster) | https://openreview.net/pdf?id=rJgzzJHtDB | https://openreview.net/forum?id=rJgzzJHtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"HMvkpVdyBY",
"BylE8C1osH",
"H1lCVUpwir",
"Skgy0H6woH",
"Bke0kZ6vjB",
"HJgVmFG6qH",
"ryeFuGK1qH",
"HyxN_BoaFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726791,
1573744204076,
1573537333658,
1573537222958,
1573535974284,
1572837659675,
1571947120648,
1571825004349
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1571/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1571/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1571/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1571/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1571/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1571/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1571/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors develop a novel technique to train networks to be robust and accurate while still being efficient to train and evaluate. The authors propose \\\"Robust Dynamic Inference Networks\\\" that allows inputs to be adaptively routed to one of several output channels and thereby adjust the inference time used for any given input. They show\\n\\nThe line of investigation initiated by authors is very interesting and should open up a new set of research questions in the adversarial training literature.\\n\\nThe reviewers were in consensus on the quality of the paper and voted in favor of acceptance. One of the reviewers had concerns about the evaluation in the paper, in particular about whether carefully crafted attacks could break the networks studied by the authors. However, the authors performed additional experiments and revised the paper to address this concern to the satisfaction of the reviewer.\\n\\nOverall, the paper contains interesting contributions and should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your careful response.\", \"comment\": \"I have read the responses and I will consider the revised manuscript. I appreciate the reporting of the new results.\"}",
"{\"title\": \"Response to Reviewer#1\", \"comment\": \"Q1. About highlighting our contribution\\n\\nYou are precisely correct. We reduce the average computation loads by input-adaptive routing to achieve \\\"triple wins\\\", rather than proposing a specific design of robust light-weight models. We have highlighted the difference in the revised paper (Section 3 beginning).\", \"q2\": \"More diverse attack forms.\\nWe appreciate your insightful suggestion. As you suggested, we create the new \\\"randomized attack\\\": that attack will randomly combine the multi-exit losses, where the weight coefficients are i.i.d. sampled from Gaussians and then normalized to have their sum equal one. On our strongest defended model of RDI-Resnet38 (using Max-Average), it achieves TA = 83.79%, ATA = 44.86%, with 28.88% MFlops saving.\\n\\nWe also tried a direct attack on the decision function and report the results in the response to Reviewer #3.\", \"q3\": \"Closer comparison with ATMC\\nWe communicated with the ATMC authors and obtained their original implementation, to train a new model whose number of flops is much closer (to the extent possible) to our RDI-ResNet38. The results below demonstrate that ours outperforms ATMC in this setting. We have confirmed the results with ATMC authors.\\n------------------------------------\\nModel ATA TA MFlops\\nATMC 42.66% 83.51% 58.03\\nOurs 43.32% 83.79% 57.81 \\n------------------------------------\", \"q4\": \"Missing citation.\\nWe appreciate your suggestion and have cited it.\"}",
"{\"title\": \"Response to Reviewer#2\", \"comment\": \"We sincerely appreciate your positive opinion and insightful suggestions about our work. \\n\\nRegarding the role of \\\"increasing capacity\\\", we draw the conclusions from the theoretical analysis in (Tsipras et al. (2019), Nakkiran (2019), which are aligned with our current observations in experiments. That said, those works present more a high-level motivation than any actual algorithmic foundation, for our work.\"}",
"{\"title\": \"Response to Reviewer#3\", \"comment\": \"\", \"q1\": \"Paper organization and writing style.\\nThank you very much for your suggestion. We have revised the paper in an effort to improve its clarity and reader friendliness:\\n\\n-\\tWe have collected the two discussion paragraphs: \\\"Intuition: Multi-Output Networks as Special Ensembles\\\" and \\\"Do Triple Wins Go Against the Model Capacity Needs?\\\", into one dedicated section \\\"Discussion and Analysis\\\" after Section 4. We hope that resolves the current impression \\\"the entire paper reads constantly like a literature review\\\".\\n-\\tWe also re-organized Section 3, to first provide an overview of RDI-Nets, followed by discussing concrete attack and defense forms. \\n\\nWe are more than happy to take any further suggestions to revise this manuscript.\", \"q2\": \"Directly attack the decision function.\\nWe appreciate this insightful and important comment. We conduct the requested evaluation and see our strongest defense (Max-Average) still perform effectively under the new attack. \\nThe decision function of the multi-exit network for an input example is the single exit loss function through which that specific example is actually routed. In view of that, we implemented this \\\"direct decision attack\\\", by every time computing the adversarial perturbations w.r.t. the actual single exit. The resulting new attack thus can be viewed as an input-adaptive selection version of single attacks. On RDI-ResNet38 with Max-Average defense, the result is:\\n------------------------------------\\nATA TA MFlops\\n43.70% 83.31% 64.82\\n------------------------------------\\nClearly, this new form of attack is indeed considered more challenging by RDI-Nets, as more examples are now routed to higher-level exits for more scrutiny. It leads to the improved averaged inference cost (~10 MFlops higher than the max-average attack in Table 3), a model behavior aligned with our expectation. \\nDespite so, for this new attack, the ATA of our RDI-Net is 1.41% higher than the original ResNet-38 (Table 1) and the TA is comparable (0.30% lower), with 18.3% computational savings on average. Thus it still achieves our aimed \\\"triple wins\\\", though understandably with fewer margins, under this new stronger attack. \\nWe would like to thank you again for bringing up this new attack possibility. We would be more than happy to include the above discussions, and possibly more results into our paper if you are in favor of so. We also tried another suggested randomized attack and report the results in the response to Reviewer #1.\", \"q3\": \"About minor comments.\\nWe appreciate your careful proofreading and have revised the paper accordingly.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a framework coined as \\u2018Robust Dynamic Inference Networks (RDI-Nets)\\u2019. The goal is concurrently achieving accuracy, robustness and efficiency via \\u2018input-adaptive inference\\u2019 and \\u2018multi-loss flexibility\\u2019 on a multi-output architecture. The observation is that\\nin a deep architecture, the representations in earlier layers can also be used for solving a specific downstream classification task. So by attaching several final classification stages at the intermediate layers and by using the uncertainty of the softmax output as a decision criteria as when to use the current output as the final decision, the authors aim to achieve a triple win.\\nThe paper then proposes some attack criteria for a multi-output network.\\n\\nThe paper is not very well written. The paper has a designated related work section but the entire paper reads from its abstract to conclusions constantly like a literature review. This makes it hard to focus and identify the original contribution. The proposed architecture is only introduced later in detail in 3.3 after the attacks. I found the organization and writing style not very reader friendly.\\n\\nThe authors provide a large experimental section, however the key problem with the paper is that it blurs the evaluation issue. While the observation of using uncertainty of estimates at intermediate levels has some intuitive appeal, the decision criteria that the authors propose requires careful selection of thresholds and a good calibration. But given the thresholds the final decision is just a function of the entire network - as it should be. So a natural attack here is just attacking this decision function (or an approximate differentiable proxy) to see if this model provides extra robustness. Is such an evaluation available? Otherwise the proposed approach provides a false sense of robustness as the proposed attacks are not geared towards the actual underlying model.\", \"minor\": \"The definition of entropy in (6) is missing a minus sign. \\n\\nThe notation f(\\\\theta| x) for theta as parameters and x as input for a function is in conflict with probability notation of conditional probabilities.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper considers the following problem: in a classification setting, it appears that by increasing the model capacity, the model accuracy and robustness seem to be improved, at the expense of model size and latency. Thus, the authors want to design an approach where at the same time accuracy, robustness and efficiency are improved at the same time.\", \"Their idea is \\\"multi-exit networks\\\" with inference that adapts based on the input. Particularly, their proposed \\\"Robust Dynamic Inference Networks\\\" allows each input -- clean or adversarial -- to choose adaptively one of the multiple output layers to output its prediction. This way, they can do an investigation to new variations of adversarial attacks and adversarial defenses. Their experiments show that indeed via this approach, they can achieve the triple wins of accuracy, robustness, and efficiency.\", \"novel idea, promising results,\", \"Although I like the discussion of accuracy-robustness tradeoff in par 2 of Introduction, I am not sure about the statement that increasing model capacity both robustness and accuracy are improved, as used in the abstract, is always true.\", \"First time adversarial attacks and defenses are studied in a multi-output model.\", \"Interesting connection of multi-output networks with ensemble models.\", \"Overall, I believe that this is an interesting, novel paper, which could be of high interest in the ICLR community, and I would vote for its acceptance.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper exploited input-adaptive multiple early-exits, an idea drawn from the efficient CNN inference, to the field of adversarial attack and defense. It is well-motivated by the dilemma between the large model capacity required by accurate and robust classification, and the resulting model complexity as well as inference latency.\\n\\nOverall, this paper presents an interesting perspective, with strong results. The usage of input-adaptive inference reduces the average inference complexity, without conflicting the \\\"larger capacity\\\" assumption for co-winning robustness and accuracy. \\n\\nSince no literature has discussed the attacks for a multi-exit network, the authors constructed three attack forms, and then utilized adversarial training to defend correspondingly. The design of Max-Average Attack is particularly smart - to balance between \\\"benefiting all\\\" and \\\"maximally boosting one\\\" (its result is also convincingly good).\\n\\nThe authors presented three groups of experiments, from relatively heavy networks (ResNet38), to very compact ones (MobileNet-V2). It is especially meaningful to see their strategy work on MobileNet too (though the computational saving is a bit less, no surprise). The authors also did due diligence in ablation study and comparing with recent alternatives.\", \"several_points_that_could_be_addressed_to_potentially_improve_the_paper\": [\"The authors want to make it clearer that: their \\\"triple win\\\" is not about constructing a light-weight model that is both accurate and robust. It's instead about given an accurate + robust, yet heavy-weight model, how to reduce its AVERAGE computational load per sample inference, by routing \\\"easier\\\" examples to earlier exits.\", \"Can the authors think of and construct more diverse and stronger attacks for RDI-Nets? For example, it would be interesting to attacking RDI-Nets (e.g., defended by Max-Average) with randomized weighted combinations of single attacks?\", \"Note that, at inference time, the same \\\"randomized combination\\\" cannot be also adopted as defense, because an input always wants to exit the earliest possible for efficiency gains.\", \"The advantage over ATMC is not obvious: slightly lower TA, slightly higher ATA, and slightly more parameters. Could the authors try to align their parameters more closely (to the extent possible)?\", \"A missing related work: \\\"Shallow-Deep Networks: Understanding and Mitigating Network Overthinking\\\", ICML 2019. It also discussed how to append early exits to pre-trained backbones.\"]}"
]
} |
SJlbGJrtDB | Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers | [
"Junjie LIU",
"Zhe XU",
"Runbin SHI",
"Ray C. C. Cheung",
"Hayden K.H. So"
] | We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can have fine-grained layer-wise adjustments dynamically via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same training epochs as dense models. Dynamic Sparse Training achieves prior art performance compared with other sparse training algorithms on various network architectures. Additionally, we have several surprising observations that provide strong evidence to the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance provided by our algorithm to the design of more compact network architectures. | [
"neural network pruning",
"sparse learning",
"network compression",
"architecture search"
] | Accept (Poster) | https://openreview.net/pdf?id=SJlbGJrtDB | https://openreview.net/forum?id=SJlbGJrtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"K8NTvvn5aYv",
"lFLO6HkLiph",
"jbk602lLWk2",
"hWadPLd9bR",
"9PLgo4CwAxV",
"HiBkPJsoJcT",
"-EqbxMoa73",
"QKf-ZlqnW",
"rkxycJcqsS",
"rylmPotfjB",
"ByxiKSw-or",
"B1xLNtN-sB",
"BJgGbhJ-iB",
"ryeb52WxiB",
"SJg4p8keoS",
"BkxVwrQM5S",
"Bkl9Cpy0Yr"
],
"note_type": [
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1588630405270,
1588323861865,
1588322045910,
1588143011782,
1587423900427,
1587144833667,
1587113968296,
1576798726763,
1573719943262,
1573194587274,
1573119362894,
1573108013528,
1573088250153,
1573031049510,
1573021371599,
1572119899928,
1571843537975
],
"note_signatures": [
[
"~Aditya_Kusupati1"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"~Aditya_Kusupati1"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"~Aditya_Kusupati1"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"~Aditya_Kusupati1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1570/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1570/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1570/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Reply\", \"comment\": \"Thanks for getting back to me. I have checked with the authors and it turns out the 1x expts are run for the same epochs as dense baselines and it is the tensorflow google TPU implementation that they used. 1.5x has 50% more epochs than the dense baselines.\\n\\nI have used GMP and it does get good accuracies in the same number of epochs as normal dense training. 128000 iterations ~ 90-100 epochs for 1024 batch size which is the typical training time for ResNet50 on ImageNet.\"}",
"{\"title\": \"Reply\", \"comment\": \"I have carefully read the survey paper, especially section 5 for several times. The author only mentions that \\\"Each model was trained for 128000 iterations with a batch size of 1024 images\\\". Referring to table 2, there is a sparsity range from 50% to 98%.\\n\\nFrom my understanding, this means that all the sparse models are trained and pruned with the same configuration. I do not find any clear statement about how the dense baseline is trained and whether the dense baseline is trained with the same training steps. This point is somehow vague. Maybe you can contact the authors of the survey paper for more detailed information. \\n\\nMaybe we need to first figure out whether the GMP could get good accuracies even for sparse networks in the same number of epochs as normal dense training.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for the clarification. I agree with 2nd point.\\n\\nI am not sure about the first point though. What do you mean by \\\"authors do not clearly indicate that GMP can get good accuracies even for sparse networks in the same number of epochs as normal dense training.\\\"? I don't understand that statement. The survey paper on GMP (the survey paper does a much better hyperparam sweep for GMP improving results significantly. The original paper's experiments are limited when compared to the survey paper) runs the dense networks and GMP for the same number of epochs. Please check Section 5 of https://arxiv.org/pdf/1902.09574.pdf. They run both dense and sparse training for the same number of epochs. \\n\\nIMP and GMP are conceptually similar, but their inference costs change vastly due to the allocation of sparsity.\", \"https\": \"//github.com/google-research/google-research/tree/master/state_of_sparsity has the models and numbers of all the sparse networks from GMP and they are very comparable to dense training and drops are similar to that of your results.\\n\\nAny clarification on this would be great. Sorry I missed your session in ICLR.\"}",
"{\"title\": \"Reply\", \"comment\": \"1) I think IMP and GMP are basically the same pruning methods. Meanwhile, referring to the original GMP paper (https://openreview.net/pdf?id=Sy1iIDkPM) and the survey paper, it seems that the authors do not clearly indicate that GMP can get good accuracies even for sparse networks in the same number of epochs as normal dense training. Could you please indicate more clearly about this part.\\n\\n2) Considering the additional overhead for a layer with parameter matrix $W$, Sparse Momentum needs to store and compute the momentum for the parameter of each layer at each training step. Our method only adds a vector threshold for each layer. Meanwhile, I believe that an implicit mask will be used to prevent updating the pruned weights in practical implementation. So the overall computation overhead of our method is very likely to be less than Spare Momentum.\"}",
"{\"title\": \"Minor follow-up\", \"comment\": \"I didn't get a notification about the reply, I just saw it now. Thanks for the clarification. A couple of follow-up questions:\\n\\n1) I agree we need fine-tuning for three state pruning techniques like IMP (Iterative Magnitude Pruning), however, GMP (Gradual Magnitude Pruning) - https://arxiv.org/abs/1902.09574 (a good survey paper) shows that GMP can get good accuracies even for sparse networks in the same number of epochs as normal dense training. Can you please let me know if there is something else here?\\n\\n2) Yes, Sparse Momentum uses dense gradients (more recent works built on SM also do the same), but it is occasional and periodic which leads to a minimal overhead during training. These methods are not completely sparse-sparse in spirit but are very close to that. I would also like to point you to Discovering Neural Wirings - https://arxiv.org/abs/1906.00586 which uses STE for pruning as well and runs for the same number of epochs as the dense training.\\n\\nTo be clear none of them are training the masked layers but rather still rely on sorting. However, their underlying pruning ideology is still the same.\\n\\nLet me know,\\nAditya\"}",
"{\"title\": \"Reply about the comparison problems\", \"comment\": \"Hi, Aditya.\\n\\nThank you for your attention to our work. \\n\\nThe traditional three-stage pruning algorithms you mentioned usually require many additional fine-tuning epochs compared with normal dense training. One advantage of DST is avoiding the expensive pruning and fine-tuning iterations. DST only requires the same number of training epochs as normal dense training to obtain a sparse model.\\n\\nTo compare fairly with those traditional pruning algorithms, we may need to train the same number of epochs with DST. However, this kind of comparison is not fair either cause increasing the number of training epochs usually won't lead to better performance. The model accuracy tends to saturate after a certain number of training epochs. Those additional fine-tuning epochs are only required for the three-stage pruning algorithms to regain the model performance loss due to pruning operation.\\n\\nMeanwhile, Sparse Momentum also utilizes the dense gradients to revive the pruned weights.\"}",
"{\"title\": \"Comparison to pruning methods\", \"comment\": \"Hi,\\n\\nGreat work on trainable masked layers. \\n\\nMy question is about the comparison to DSR and Sparse Momentum methods. Both these techniques are end-to-end sparse training mechanisms, but DST starts off dense and gets dense gradients (always) due to STE and gets to become sparse over time. In this case, won't it be fair to compare against pruning techniques that start out dense and become sparse like global thresholding, iterative magnitude pruning or gradual magnitude pruning?\\n\\nLet me know if I am missing something.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper lies on the borderline. An accept is suggested based on majority reviews and authors' response.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"We appreciate all the detailed reviews and suggestions . Following reviewers' suggestions, we have updated the manuscript and uploaded a revision. Here we give a summary of the major changes.\", \"in_response_to_reviewer_1\": [\"We present our motivations and contributions more clearly\", \"We add more experimental results on CIFAR-10 and add the missing result on ImageNet in Section 4.\", \"We add more analysis about the experimental results on Section 5.1 and 5.2\"], \"in_response_to_reviewer_2\": [\"Our motivations and contributions are listed more clearly in the introduction part.\", \"More experimental results on CIFAR-10 and the result on ImageNet are presented in Section 4.1\", \"We revise the Section 3 to present the main idea more clearly. More details about the feed-forward and back-propagation process are included in Appendix A.2 and A.3. to address the concerns about the \\\"structure gradient\\\" and \\\"performance gradient\\\".\", \"We present more details to in Section 5.1 and 5.2 the address the ambiguity of previous edition and provide more evidences to the effectiveness of our method\"], \"in_response_to_reviewer_4\": [\"In Section 4.1 and 4.2, we present that our method is able to get sparse models with increased performance than dense models.\", \"The results on ImageNet-2012 is present in Section 4.1\", \"Our motivations and the novelty of our method are summarized and highlighted in the introduction part (Section 1)\"]}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you so much for your time and constructive comments.\\n\\nIt is our negligence that does not present our motivation and contributions of our work clearly in the early manuscript. We are revising it to clearly present our motivations, methods and contributions. \\n\\nWe really appreciate your positive assessment of our experimental results. We are running experiments on the more complex dataset (ImageNet 2012) to make our results more convinced. \\n\\nThank you so much for pointing out the language problems in our manuscript. We are polishing the writing continuously and will update the revised manuscript soon.\", \"following_are_the_responses_to_the_concerns\": \"1. The sparsity of activations.\\nUsually, only the sparsity of weights is considered for the evaluation of network pruning methods. Your suggestion that we should evaluate the sparsity of activations provides a new point of view. We are conducting related experiments and will add this in the revised manuscript. \\n\\n2. The choice of sparse regularizer.\\nThe sparse regularizer is used to penalize low threshold values to increase the degree of sparsity. So the basic requirement is that the value of the regularizer function $f(x)$ should decrease as $x$ increases.\\n\\nWe actually tested several options like $\\\\exp(-x)$, $\\\\frac{1}{x}$, and $\\\\log({\\\\frac{1}{x}})$. It seems that other choices except $\\\\exp(-x)$ penalize too much. Therefore the training loss is dominated by the sparse regularizer term $L_s$, which tends to mask out all the weights easily. \\nDue to our experiments, $\\\\exp(-x)$ is the best choice among all these options that support wider range choice of $\\\\alpha$ and get higher degree of sparsity. So $\\\\exp(-x)$ is adpoted for the sparse regularizer. We are still searching for the better sparse regularizer.\\n\\n\\n3. Sparse regularizer dominated by the smallest thresholds\\nYes, as you point out, the sparse regularizer will be dominated by some small thresholds. Although we discuss the layer-wise sparsity in the paper, the thresholds are actually neuron-wise or filter-wise. Considering a masked fully connected layer with parameter $W\\\\in R^{m\\\\times n}$ and threshold vector $t\\\\in R^m$, this layer will have $m$ output neurons. For each output neuron $i$, our method assigns a neuron-wise threshold $t_i$. \\n\\nThe elements in $t$ are all initialized to $0$, which means that we assume the neurons and weights in this layer have the same importance before the training. And the same penalties for small value are added for all these thresholds. \\n\\nAt the end of the training process, if some thresholds still have small values, it only indicates that these neurons are more important than other neurons so that the weights corresponding to these neurons should have a small degree of sparsity. \\nUsually, there are hundreds of neurons in each layer. So the layer-wise sparsity will still be high even if there are few small neuron-wise thresholds. \\n\\n4. The analysis in 4.3\\nThank you for your positive feedback about the analysis in section 4.3. \\nAs we present in section 4.3, our method can generate consistent sparse pattern that indicates the degree of redundancy for each layer. Besides, our method can distinguish neuron-wise or filter-wise importance with fine-grained neuron-wise and layer-wise thresholds as we present above. Currently, we are not aware of any other method that can have similar effects.\"}",
"{\"title\": \"Response to AnonReviewer2 - Summary of contributions\", \"comment\": \"Thank you so much for your time and constructive comments.\\nIt is our negligence that does not clearly present our motivations and contributions of our work in the early manuscript. Here we present the motivations and contributions of our paper.\", \"there_are_several_problems_about_network_pruning_that_previous_methods_cannot_properly_settle\": \"1. Expensive pruning and fine-tuning iterations and non-trivial hyperparameters setting\", \"the_pruning_and_fine_tuning_iterations_are_expensive_and_need_many_additional_hyperparameters_like\": [\"How many pruning steps should be adopted\", \"How many epochs for the fine-tuning stage after each pruning step\", \"Use the same pruning rate or dynamic pruning rate at each step\"], \"our_contribution\": \"In our method, since both the network weights and the pruning thresholds will be updated vis back-propagation at each training step. Our method can have continuous fine-grained pruning and recovering at each training step. \\nThat is our method support step-wise pruning instead of epoch-wise pruning.\\n\\n\\nThank you again for the detailed review and look forward to your feedback\", \"most_of_the_current_pruning_and_all_the_sparse_training_methods_conduct_hard_pruning_with_following_properties\": [\"Pruned weights will be directly set to 0\", \"No further update via back-propagation for pruned weights\", \"The importance of weight is not fixed and will change dynamically during the pruning and training process. Previously unimportant weights may tend to be important. So the ability to recover pruned weight is of high significance.\", \"Those sparse learning methods present in Section 4.1 support the recover of pruned weights. However, they all conduct hard pruning. Directly setting pruned weight to 0 causes the loss of historical parameter importance, which make it hard to determine:\", \"When and which pruned weights should be recovered.\", \"What value should be assigned to the recovered weights.\", \"Therefore, these methods that allow the recovery of pruned weights randomly choose a predefined portion of pruned weights to recover and these recover weights are randomly initialized.\"]}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you so much for the constructive comments. We find these comments really helpful. Following are the explanations about the limitations:\\n\\n1) The Performance of sparse models.\\nOur method indeed gets sparse models with increased performance than dense models. \\nAs present in section 4.4, our method can get sparse models with better performance when the sparsity of models is less than 90%. It is only when we want to get modes with high sparsity (>90%) that there will be a noticeable performance loss. \\nWe think this is a common case for network pruning that there will be a loss of performance when over 90% of parameters are removed.\\n\\n2) The lack of result on ImageNet\\nThe results are updated in the revision.\\n\\n3) Summary of Novelty \\nIt is our negligence that does not present the novelty clearly.\", \"the_followings_present_the_novelty_of_our_method\": \"1. Directly get sparse models in the training process\\n\\nThe typical pruning process is a three-stage pipeline, i.e., training, pruning and fine-tuning. In our method, no further pruning and fine-tuning are needed. We can directly get sparse models in the training process.\\nThere exist some similar works but we get better performance compared with the existing methods as present in Section 4.1. \\n\\n\\n2. Trainable fine-grained pruning thresholds\\n\\nMost of the previous pruning algorithm adopt a single pruning threshold for each layer or the whole architecture. In our method, since a threshold vector $t\\\\in R^m$ is used for each layer with parameter $W\\\\in R^{m\\\\times n}$, we have neuron-wise pruning thresholds for fully connected and recurrent layer and filter-wise pruning thresholds for convolutional layer.\\nMeanwhile, all these fine-grained pruning thresholds can be updated automatically via back-propagation as present in Section 3. We are not aware of any other method that achieves this.\\n\\n\\n3. The ability to properly recover the previously pruned weights.\", \"most_of_the_current_pruning_and_all_the_sparse_training_methods_conduct_hard_pruning_with_following_properties\": [\"Pruned weights will be directly set to 0\", \"No further update via back-propagation for pruned weights\", \"Directly setting pruned weight to 0 causes the loss of historical parameter importance, which make it hard to determine:\", \"When and which pruned weights should be recovered.\", \"What value should we assigned to the recovered weights.\", \"Therefore, current sparse training methods that allow the recovery of pruned weights randomly choose a predefined portion of pruned weights to recover and these recover weights are randomly initialized.\"], \"our_method_has_following_properties_that_properly_solve_these_problems\": [\"Pruned weights will not be directly set to 0. Instead, the mask $W$ will store the information about which weights are pruned.\", \"Pruned weights can still be updated via back-propagation.\", \"The corresponding pruning thresholds will also be updated via back-propagation\", \"Therefore, it is the dynamic change of both pruned weights and the corresponding pruning thresholds that determine when and which pruned weights should be recovered. Meanwhile, the recovered weight has the value that it learns via back-propagation instead of a randomly assigned value.\", \"4. 
Continuous fine-grained pruning and recovering over the whole training process\"], \"a_typical_pruning_process_is_conducted_after_a_certain_training_epoch_as_following\": \"- Determine current importance of weight\\n- Prune certain percentage of weight. \\n\\nUsually, a training epoch will have tens of thousands of training steps, which is the feed-forward and back-propagation pass for a single mini-batch. \\nIn our method, since both the network weights and the pruning thresholds will be updated vis back-propagation at each training step. Our method can have continuous fine-grained pruning and recovering at each training step. We are not aware of any other method that can achieves this. \\n\\n\\n5. Automatic and dynamic layer-wise pruning rates adjustment over the network\", \"there_are_two_critical_problems_in_network_pruning\": \"- How many weights should be pruned in each pruning step\\n- What is the proper pruning rates for each layer over the network\\n\\nUsually, the pruning is conduct by some predefined pruning schedule like pruning 5% at each step with totally 10 pruning steps. Meanwhile, it is quite hard to properly determine the pruning rates for each layer. Current methods either use a single global pruning threshold for the whole model or layer-by-layer greedy pruning. We illustrate their limitation on Page 1. \\n\\nIn our method, with the property present above, the portion of weights to be pruned at each step and the proper pruning rates for each layer are automatically determined by the dynamic update of parameter $W$ and threshold $t$. Meanwhile, as present in section 4.2 and 4.3, the swift adjustment of pruning rates and consistent sparse patterns prove the effectiveness of our method in proper adjustment of layer-wise pruning rates.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents a novel network pruning algorithm -- Dynamic Sparse Training. It aims at jointly finding the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. The experiments on MNIST, and cifar-10 show that proposed model can find sparse neural network models, but unfortunately with little performance loss.\\nThe key limitation of the proposed model come from the experiments.\\n(1) Nowadays, the nature and important question is that, one can not tolerate the degraded performance, even with sparse neural network. Thus, it is important to show that the proposed model can find sparse neural network models, and with increased performance. \\n(2) Another weakness is that proposed model has to be tested on large scale dataset, e.g. ImageNet-2012.current two datasets are too small to support the conclusive results of this proposed model. \\n(3) As for the model itself, I donot find very significant novelty. For example, Sec. 3.3 (TRAINABLE MASKED LAYERS) in general is quite following previous works\\u2019 designing principle. Thus, the novelty should be summarized, and highlighted in the paper.\"}",
"{\"title\": \"Response to AnonReviewer2 - The concerns about Quality\", \"comment\": \"Thank you for the constructive comments about the quality of our paper.\\nOur method achieve state of the art performance compared with other sparse training algorithms due to the fine-grained neuron-wise or filter-wise trainable thresholds together with the proper update of both network parameters and threshold via back-propagation during the whole Dynamic Sparse Training process. We have published the source code for reproducibility. \\n\\nTo make the illustration more clear, the models that replace the conventional layers with corresponding trainable masked layers and are trained with Dynamic Sparse Training (DST) are referred to as sparse models. For example, LeNet-300-100 with a trainable masked layer and trained with DST are referred to as sparse LeNet-300-100. The dense models train with normal SGD are referred to as dense models. Meanwhile, sparse accuracy indicates the test accuracy after a certain epoch for the sparse models and dense accuracy indicates that for the dense models.\\n\\nThe following are the explanation for your question about the Quality. \\n1) The result on ImageNet \\nThe result are updated in the revision\\n\\n2) The \\\"contradiction\\\" in Figure 3 and Figure 4.\\nThank you for pointing out. it is our negligence to improperly present the experiment results.\\nAs illustrated in section 4.2, the last layer itself can be regarded as a single layer classi\\ufb01er that takes the features extracted by preceding layers as input. The layer remaining ratio of the last layer can be regarded as an indicator of the e\\ufb00ectiveness of the features extracted. If bad features are extracted, the \\ufb01nal test accuracy will not be high and a relatively large portion of weights in the last layer can be masked. This means that bad features make a large portion of weights in the last layer unimportant. In turn, if a large portion of features extracted is helpful for the classification task, the layer remaining ratio of the last layer as well as the test accuracy should be high. \\nReferring to Figure 3(b), the test accuracy of sparse LeNet-300-100 on MNIST reaches over 93% just after the \\ufb01rst epoch and reaches over 97% after the fourth epoch. Since MNIST is a simple dataset, the model converges very fast. This means that just after the first training epoch, all the features extracted by the preceding layers are somehow helpful for the classification.\\nDue to our additional experiment, If a more \\ufb01ne-grained change of layer remaining ratio is present, a similar trend of the remaining ratio of the last layer will be discovered for sparse LeNet-300-100. For example, If the layer remaining ratio after each training step during the first epoch is present, the remaining ratio of the last layer will decrease to a lower value first and then increases up to 1, which is consistent with the trend in Figure 4(a). This phenomenon can be observed with the source code we publish. \\n\\n3) More illustration about Figure 4.\\nWe indeed claim that the parameter importance of the last layer for the sparse VGG-16 model during the first 80 epochs is quite low. However, we also argue that the parameter importance will change dramatically during the training process as the hyperparameters like learning rate change. 
\\nThe fact that the accuracy increases up to near 90% before the learning rate decay does not conflict with the fact that most of the features extracted by preceding layers are useless. As present in Figure 4(b), although the remaining ratio of the last layer keeps around 0.05 before the learning rate decay, the test accuracy of the sparse model after each training epoch is better than the test accuracy of the dense model that use all the features extracted. \\nWe have conducted an additional experiment with the code we publish by training the sparse model without learning rate decay. The accuracy just keeps fluctuating around 85% and the remaining ratio of the last layer also keeps around 0.05. Training more epochs will not increase both the sparse and dense accuracy. If there is no decay of the learning rate, only around 5% of the features extracted are indeed helpful for classification.\\nHowever, if the learning rate is decayed at 80 epoch, the sparse accuracy and the remaining ratio of the last layer increases immediately at the same time as present in Figure 4(b). More useful features are extracted, higher test accuracy will get and the higher remaining ratio will be for the last layer with the decay of learning rate. \\nWe want to demonstrate the parameter importance may change dramatically and our method can handle this kind of situation properly in Figure 4.\\n\\n\\n4) Reduction of the computation\\nThank you so much for pointing this out. As you suggest, the quantified result about the reduction of computation and memory should be included. We will add this in the updated version\"}",
"{\"title\": \"Response to AnonReviewer2 - The concerns about Clarity\", \"comment\": \"Thank you so much for the detailed reviews and valuable remarks. I am sorry for the unclarity caused by the lack of information and improper usage of terms. Here I will use the trainable masked fully connected layer as an example to explain your concerns about Clarity.\\nConsider a trainable masked fully connected layer with parameter $W\\\\in R^{m\\\\times n}$ and trainable threshold vector $t\\\\in R^m$. This means that this layer get $n$ input neurons and $m$ output neurons. A neuron-wise threshold $t_i$ is defined for the $i$th output neuron. \\n\\n1) How the mask $M\\\\in R^{m\\\\times n} $ is generated and used in the feed forward process\\n\\n$M_{ij} = S(|W_{ij}|-t_i)$ for $1\\\\leq i \\\\leq m$, $1\\\\leq j \\\\leq n$, where $S(x)$ is the unit step function.\\n\\nFor each connection connects to output neuron $i$, the magnitude of the corresponding weight $W_{ij}$ will be compared with the neuron-wise threshold $t_i$. Instead of directly setting $W_{ij}$ to 0 like traditional pruning algorithms, the value of $W_{ij}$ is preserved in our method. The information about whether to prune this connection is stored in $M_{ij}$, where 0 means pruned (masked) and 1 means unpruned (unmasked). We denote $P = W\\\\odot M$. Instead of the original parameter $W$, $P$ will be used in the matrix-vector multiplication.\\n\\nMeanwhile $Q\\\\in R^{m\\\\times n}$ is just a intermediate variable, where $Q_{ij} = |W_{ij}|-t_i$ for $1\\\\leq i \\\\leq m$, $1\\\\leq j \\\\leq n$\\n\\n2) What are \\\"structure gradient\\\" and \\\"performance gradient\\\" mathematically\\n\\nRefer to Figure 1, in the back-propagation process, $P$ will receive a gradient and we denote it as $dP$. \\nLet's consider the gradients that flow from right to left.\\n\\nThe performance gradient is $dP \\\\odot M$\\n\\nThe gradient received by $M$ is $dP\\\\odot W$\\n\\nThe gradient received by $Q$ is $dP\\\\odot W\\\\odot H(Q)$, where $H(x)$ is the long-tail derivative estimation for $S(x)$ and $H(Q)$ is the result of $H(x)$ applied to $Q$ elementwisely. \\n\\nThe structure gradient is $dP\\\\odot W\\\\odot H(Q)\\\\odot sgn(W)$, where $sgn(W)$ is the result of sign function applied to $W$ elementwisely. \\n\\nThe gradient received by the vector threshold $t$ is $dt\\\\in R^m$. We denote $dT = -dP\\\\odot W\\\\odot H(Q)$, then $dT\\\\in R^{m\\\\times n}$. And we will have $dt_i = \\\\sum_{j=1}^nT_{ij}$ for $1\\\\leq i \\\\leq m$.\\n\\n3) How the gradient flow to pruned (masked) weights\\n\\nThe gradient received by the parameter $W$ is $dW = dP\\\\odot M + dP\\\\odot W\\\\odot H(Q)\\\\odot sgn(W)$ \\n\\nSince we add $\\\\ell_2$ regularization in the training process, all the elements in $W$ are distributed within $[-1, 1]$. Meanwhile, almost all the elements in the vector threshold are distributed within $[0, 1]$. The exceptions are the situation as shown in Figure 3(a) and Figure 4(a) where the last layer get no weight pruned (masked). Regarding the process of getting $Q$, all the elements in $Q$ are within $[-1, 1]$. Therefore $H(Q)$ is a dense matrix. Then $W$, $H(Q)$ and $sgn(W)$ are all dense matrices and the pruned (masked) weights can receive the structure gradient $dP\\\\odot W\\\\odot H(Q)\\\\odot sgn(W)$ \\n\\n4) How the pruned (masked) connection get recovered\\n\\nThe masked weights, the unmasked weights and the vector threshold can all receive gradients and be updated constantly during the training process. 
A connection with corresponding weight $W_{ij}$ and threshold $t_i$ may be pruned (masked) if $|W_{ij}| < t_i$ at certain time point in the training process. Meanwhile, it can be easily recovered (unmasked) if $|W_{ij}| > t_i$ during the later training process. \\n\\n5) Question regarding Equation 3 and Figure 2(d)\\n\\nThe Equation 3 and Figure 2(d) are both present the long-tail derivative estimation. The Figure 2(a) present the unit step function.\"}",
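To make the forward and backward passes above easy to follow, here is a minimal PyTorch sketch of a trainable masked fully connected layer with neuron-wise thresholds. It is an illustration of the described math, not the authors' released code; in particular, the breakpoints of the long-tailed estimator (2 - 4|x| for |x| <= 0.4, 0.4 for 0.4 < |x| <= 1, 0 otherwise) are our reading of Equation 3 and should be treated as an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def long_tailed_h(q):
    # Long-tailed derivative estimate of the unit step function S(x) (assumed breakpoints).
    a = q.abs()
    return torch.where(a <= 0.4, 2.0 - 4.0 * a,
                       torch.where(a <= 1.0, torch.full_like(q, 0.4), torch.zeros_like(q)))

class BinaryStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, q):
        ctx.save_for_backward(q)
        return (q > 0).float()               # M_ij = S(|W_ij| - t_i)

    @staticmethod
    def backward(ctx, grad_out):
        q, = ctx.saved_tensors
        # H(Q) is dense on [-1, 1], so masked weights keep receiving the structure gradient.
        return grad_out * long_tailed_h(q)

class TrainableMaskedLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = nn.Parameter(torch.zeros(out_features))  # one t_i per output neuron

    def forward(self, x):
        q = self.weight.abs() - self.threshold.unsqueeze(1)  # Q_ij = |W_ij| - t_i
        mask = BinaryStep.apply(q)                           # pruning info lives in M, not in W
        return F.linear(x, self.weight * mask, self.bias)    # P = W * M (elementwise)

    def sparse_reg(self):
        # L_s = sum_i exp(-t_i): penalizes low thresholds to push sparsity up;
        # the total training loss would be task loss + alpha * sparse_reg().
        return torch.exp(-self.threshold).sum()
```

Autograd through this layer reproduces the quantities described above: the weight receives dP*M plus dP*W*H(Q)*sgn(W) (the sgn term coming from the abs), and each threshold receives the negative row sum of dP*W*H(Q).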
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an algorithm for training networks with sparse parameter tensors. This involves achieving sparsity by application of a binary mask, where the mask is determined by current parameter values and a learned threshold. It also involves the addition of a specific regularizer which encourages the thresholds used for the mask to be large. Gradients with respect to both masked-out parameters, and with respect to mask thresholds, are computed using a \\\"long tailed\\\" variant of the straight-through-estimator.\\n\\nThe algorithm proposed in this paper seems sensible, but rather ad hoc. It is not motivated by theory or careful experiments. As such, the value of the paper will be determined largely by the strength of the experimental results. I believe the experimental results to be strong, though I am not familiar enough with this subfield to be confident there are not missing baselines.\\n\\nThere are many minor English language problems (e.g. with articles, prepositions, plural vs. singular forms, and verb tense), though these don't significantly interfere with understanding.\\n\\nRounding up to weak accept, though my confidence is low because I am basing this positive assessment on experimental results for tasks on which I am not well calibrated.\", \"more_detailed_comments\": \"\\\"using the same training epochs\\\" -> \\\"using the same number of training epochs\\\"\\n\\\"achieves prior art performance\\\" -> \\\"achieves state of the art performance\\\"\\n\\\"the inference of deep neural network\\\" -> \\\"inference in deep neural networks\\\"\\n\\nThis paper considered only sparsity of weights -- it might have been nice to also discuss/run experiments exploring sparsity of activations.\\n\\neq. 4 -- Can you say more about why this particular form is the right one for the regularizer? It seems rather arbitrary. (it will tend to be dominated by the smallest thresholds, and so would seem to encourage a minimum degree of sparsity in every layer)\\n\\nI appreciate the analysis in section 4.3.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"## Update after the rebuttal\\nI appreciate the author's clarification in the rebuttal and the additional result on ImageNet, which addressed some of my concerns.\\n\\n# Summary\\nThis paper proposes a trainable mask layer in neural networks for compressing neural networks end-to-end. The main idea is to apply a differentiable mask to individual weights such that the mask itself is also trained through backpropagation. They also propose to add a regularization term that encourages weights are masked out as much as possible. The result on MNIST and CIFAR show that their method can achieve the highest (weight) compression rate and the lowest accuracy reduction compared to baselines. \\n\\n# Originality\\n- The idea of applying trainable mask to weights and regularizing toward masking out is quite interesting and new to my knowledge. \\n\\n# Quality\\n- The performance seems to be good, though it would be more convincing if the paper showed results on larger datasets like ImageNet. \\n\\n- The analysis is interesting, but I am not fully convinced by the \\\"strong evidence to the efficiency and effectiveness of our algorithm\\\". For example, the final layer's remaining ratio is constantly 1 in Figure 3, while it starts from nearly 0 in Figure 4. The paper also argues that the final layer was not that important in Figure 4 because the lower layers have not learned useful features. This seems not only contradictory to the result of Figure 3 but also inconsistent of the accuracy being quickly increasing up to near 90% while the remaining ratio is nearly 0 in Figure 4. \\n\\n- If the motivation of the sparse training is to reduce memory consumption AND computation, showing some results on the reduction of the computation cost after sparse training would important to complete the story. \\n\\n# Clarity\\n- The description of the main idea is not clear. \\n\\n- What are \\\"structure gradient\\\" and \\\"performance gradient\\\"? They are not mathematically defined in the paper.\\n\\n- I do not understand how the proposed method can \\\"recover\\\" from pruned connection, although it seems to be indeed happening in the experiment. The paper claims that the use of long-tailed higher-order estimator H(x) makes it possible to recover. However, H(x) still seems to have flat lines where the derivative is 0. Is H(x) in Equation 3 and Figure 2d are showing \\\"derivative\\\" or step function itself? In any cases, I do not see how the gradient flows once a weight is masked out. \\n\\n# Significance\\n- This paper proposes an interesting idea (trainable mask), though I did not fully get how the mask is defined/trained and has a potential to recover after pruning. The analysis of the compression rate throughout training is interesting but does not seem to be fully convincing. It would be stronger if the paper 1) included more results on bigger datasets like ImageNet, 2) described the main idea more clearly, and 3) provided more convincing evidence why the proposed method is effective.\"}"
]
} |
Skl-fyHKPH | A Mean-Field Theory for Kernel Alignment with Random Features in Generative Adverserial Networks | [
"Masoud Badiei Khuzani",
"Liyue Shen",
"Shahin Shahrampour",
"Lei Xing"
] | We propose a novel supervised learning method to optimize the kernel in maximum mean discrepancy generative adversarial networks (MMD GANs). Specifically, we characterize a distributionally robust optimization problem to compute a good distribution for the random feature model of Rahimi and Recht to approximate a good kernel function. Due to the fact that the distributional optimization is infinite dimensional, we consider a Monte-Carlo sample average approximation (SAA) to obtain a more tractable finite dimensional optimization problem. We subsequently leverage a particle stochastic gradient descent (SGD) method to solve the finite dimensional optimization problems. Based on a mean-field analysis, we then prove that the empirical distribution of the interactive particles system at each iteration of the SGD follows the path of the gradient descent flow on the Wasserstein manifold. We also establish the non-asymptotic consistency of the finite sample estimator. Our empirical evaluation on a synthetic data-set as well as MNIST and CIFAR-10 benchmark data-sets indicates that our proposed MMD GAN model with kernel learning indeed attains higher inception scores as well as Fr\'{e}chet inception distances and generates better images compared to the generative moment matching network (GMMN) and MMD GAN with untrained kernels. | [
"Kernel Learning",
"Generative Adversarial Networks",
"Mean Field Theory"
] | Reject | https://openreview.net/pdf?id=Skl-fyHKPH | https://openreview.net/forum?id=Skl-fyHKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Lml9dqy7Ul",
"B1xUXK3isr",
"BylvawhsiB",
"BkxQkynosr",
"Syx3pUyf5S",
"B1lbCgh6KB",
"SJx5gjF2FB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726735,
1573796126152,
1573795774927,
1573793499133,
1572103875681,
1571827912640,
1571752690487
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1569/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1569/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1569/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1569/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1569/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1569/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper was assessed by three reviewers who scored it as 6/1/6. The main criticism included somewhat weak experiments due to the manual tuning of bandwidth, the use of old (and perhaps mostly solved/not challenging) datasets such as Mnist and Cifar10, lack of ablation studies. The other issue voiced in the review is that the proposed method is very close to a MMD-GAN with a kernel plus random features. Taking into account all positives and negatives, we regret to conclude that this submission falls short of the quality required by ICLR2020, thus it cannot be accepted at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Simulations on synthetic data are provided in Supplementary.\", \"comment\": \"Thank you for your constructive feedback. We appreciate you for noting the strength of the theoretical results and appreciate the opportunity to improve the clarity so that the paper can be made accessible. We address the specific questions you raised below and will incorporate that feedback into the updated version of the paper:\\n\\n\\n\\n- We have addressed the concern of the reviewer by adding Subsection 3.4 about the computational complexity of our kernel learning method. Notice that in the implicit kernel learning as well as the MMD GAN papers, the kernel is learned via applying a SGD to the empirical MMD and a batch size of larger than 2 is used. In our method, the batch size is fixed at 2 and the complexity of our method also depends on the number of particles that is used. So, a direct comparison between the complexity of our method and that of MMD GAN and implicit kernel learning is not\\n\\n\\n- Thank you for your suggestion. In the supplementary, we have indeed provided more experiments on synthetic data-set for a two-sample test between two multivariate Gaussian distributions with different covariances. Indeed, in Figure 4, we have shown that the kernel learning method can improve the statistical power of the hypothesis test in Eq. (30) for larger values of the threshold. It is clear from Figure 4 and the new Figure 5 added in the revised draft that our kernel learning method improves the MMD value, beyond what is attainable by an auto-encoder. We agree with the reviewer that our kernel learning approach can also be evaluated for supervised tasks such as in the kernel support vector machines. However, the focus of this paper is on generative models for unsupervised learning problems. \\n\\n-Thank you. The typo in the scaling parameter is fixed.\\n-Thank you. The citations are fixed.\"}",
"{\"title\": \"More experiments are added.\", \"comment\": \"We appreciate the reviewer for careful read of our manuscript, and encouraging comments regarding the theoretical results. We believe that the main merit of our proposed method is its simplicity as the implementation of the SGD algorithm is relatively easy compared to other kernel learning methods. We address your comments regarding the theoretical and experimental results below:\", \"theory\": \"-The notion of the Orlicz norm is indeed used to state Lemma C1. We have moved these definitions to the notation section of the Supplementary instead of presenting them on the fly.\\n-We appreciate the reviewer for bringing the interesting work of [Chizat2018] to our attention. We indeed agree with the reviewer that the distributional optimization is a non-convex optimization and at this moment, we do not have a guarantee that the Wasserstein gradient flow converges to the global solution of the distributional optimization in Eq. (11). However, as the authors of [Chizat2018] have shown, such conditions can be found.\\n\\nWith regard to Thm. 4.1. and Thm. 4.2., we would like to provide more clarification about their connections. We notice that Thm 4.1. is a general statement about the optimal solution of the population MMD optimization in Eq. (8), and the optimization of its empirical estimator in Equation (13) in the revised draft. This result is not concerned with any particular optimization algorithm. In contrast, Thm 4.2., deals with a specific optimization algorithm, namely SGD. Although SGD is solving a non-convex optimization problem, and the resulting solution from this algorithm is not necessarily a global solution, Thm 4.2. ensures that when the number of particles $N\\\\rightarrow\\\\infty$ tend to infinity, the resulting empirical measure from SGD solution is a local minima for the distributional optimization problem in Eq. (11). \\n\\n- Thank you for this interesting comment. The authors is correct that there is a stochastic term involved due to the random sampling in particle SGD. This stochastic term is precisely the Martingale process defined in Equation 87b. However, as shown in Lemma B.10, the absolute value of this process goes to zero almost surely in the asymptotic of large number of particles $N\\\\rightarrow \\\\infty$. Indeed, this is a consequence of the law of large number. A similar result is shown for the mean field equation of the online (streaming) PCA in [Chuang2017]. In the work of [Hu2018], the dynamic of each particle is governed by a linear stochastic linear system that involves a Brownian motion term. Therefore, the limiting measure of the empirical measure must satisfy a PDE that captures the effect of this diffusion term in the dynamic of each particle.\", \"experiments\": \"-Thank you. The bandwidths are not manually tuned by us in our paper. We used the bandwidths that are selected in the following paper:\\n[1] MMD GAN: Towards Deeper Understanding of Moment Matching Network, NIPS 2017.\\nWe presume that these bandwidths in [1] are to provide the best scores. The bandwidths can also be optimized using the techniques of [Arbel2018]. But, we suspect the results after bandwidth tuning would be the same as in [1]. We also mention that [Arbel2018] consider the class of parametric kernels. In practice, the choice of this kernel class itself can create a model selection problem.\\n\\n-Thank you. 
We point out that alpha is merely a scaling parameter that determines the separation of features after applying a kernel. If alpha is equal to infinity, this means that the learned kernel separates the features better than the kernel learned using a small alpha. In this regard, alpha=1 does not results in any special case. The discriminator for alpha=1 is still described by the test statistic given in Eqs. (28) and (29). \\n\\n- To address the concern of the reviewer, we have added more simulations in the revised manuscript using CelebA, and LSUN Bedroom datasets. The results are provided in Section B of the Supplementary in the revised paper. Unfortunately, we need more time to generate all the scores due to the need to optimize the kernel bandwidths parameters for MMD GAN.\\n\\n-To address this interesting point raised by the reviewer, we have added Figures 5 in the Supplementary. In Figure 5, we show the MMD value during the two phase procedure for kernel learning. In the first phase, we optimize the auto-encoder as in MMD GAN paper, and in the second phase, we train the kernel using the embedded features from auto-encoder. From Figure 5, it is clear that optimizing the kernel after auto-encoder is necessary and it significantly improves the MMD value. In Figure 4, we show the power of the test in hypothesis testing for high threshold values for the test statistic given in Equation (30). We observe the improvement in the statistical power due to the kernel training in Figure 4. Clearly, our two-phase method yield higher statistical power for larger values of the threshold in Figure 4(a) compared to the auto-encoder in Figure 4(b).\"}",
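As a rough illustration of the two-phase pipeline discussed above, the following sketch uses random features in the sense of Rahimi and Recht, treats each frequency row as a particle updated by SGD, and ascends the empirical MMD on embedded features. The feature dimension, particle count, the random-placeholder batches, and the omission of the projection enforcing the distribution-ball constraint are all simplifying assumptions, not the paper's exact objective.

```python
import math
import torch

def rff(x, xi, b):
    # Random Fourier features: phi(x) = sqrt(2/N) * cos(x @ xi^T + b)
    n = xi.shape[0]
    return (2.0 / n) ** 0.5 * torch.cos(x @ xi.t() + b)

def mmd2(feat_real, feat_fake, xi, b):
    # Empirical squared MMD under the random-feature kernel.
    mu_p = rff(feat_real, xi, b).mean(dim=0)
    mu_q = rff(feat_fake, xi, b).mean(dim=0)
    return ((mu_p - mu_q) ** 2).sum()

# Phase 2: train the kernel on embedded (auto-encoder) features.
d, n_particles = 16, 256                                 # assumed sizes
xi = torch.randn(n_particles, d, requires_grad=True)     # particles: random frequencies
b = 2 * math.pi * torch.rand(n_particles)                # random phases, kept fixed here
opt = torch.optim.SGD([xi], lr=10.0)                     # eta = 10 as in the experiments

for _ in range(100):
    feat_real = torch.randn(64, d)                       # placeholder embedded real batch
    feat_fake = torch.randn(64, d) + 0.5                 # placeholder embedded fake batch
    loss = -mmd2(feat_real, feat_fake, xi, b)            # ascend MMD to sharpen the kernel
    opt.zero_grad()
    loss.backward()
    opt.step()
```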
"{\"title\": \"The discrepancy in FID and IS scores is (potentially) caused by the implementation of the Inception-v3 network.\", \"comment\": \"We thank the reviewer for positive feedback and constructive comments. Please find the answer to your comments below:\\n\\n1- With regard to the Inception Score (IS), we used the standard implementation of IS score using the codes released by the authors of the following paper on Github:\\n\\n[1] Improved techniques for training gans. In NIPS, 2016.\\n\\nThe authors of [2] mention in their work that they also use the same implementation to compute their inception scores. \\n\\n[2] MMD GAN: Towards Deeper Understanding of Moment Matching Network, NIPS 2017.\\n\\nHowever, we noticed some discrepancies between the scores reported in [1] and [2] for the implementation on the real data. Our IS is in agreement with that of [1]. Indeed, our IS for the real data (CIFAR-10 images) is 11.237\\u00b1.116, which is in close agreement with the number 11.24 \\u00b1 .12 reported in [1]. In contrast, the number reported in [2] for the real data is 11.95\\u00b10.20. We suspect this discrepancy between the numbers in [1] or [2] is due to the different implementation of the Inception-v3 Network in Pytorch and Tensorflow. Unfortunately, the authors of [2] have not released their Python codes for the inception score calculation. So we can only speculate about the cause of this discrepancy. \\n\\nWith regard to the implementation of FID score, we used the TTUR codes released by reference [3] on Github:\\n\\n[3] Gans trained by a two time-scale update rule converge to a local Nash equilibrium, NIPS 2017,\", \"we_suspect_the_discrepancy_in_the_reported_fid_scores_in_our_paper_and_other_works_is_due_to_the_following_reasons\": \"-We note that the FID score is computed by measuring the mean and covariance statistics of generated and real data using the 2048-dimensional activations of the Inception-v3 pool3 layer. Once again, different implementations of the Inception-v3 in Pytorch and Tensorflow can causes a problem.\\n\\n-The FID score is sensitive to the number of generated and real data samples that is used for its calculation. For instance, consider the simple experiment where we use the samples from CIFAR-10 training data-set as the real images and the CIFAR-10 test data-set as the generated images. In theory, the FID score is expected be zero since real and generated images both are from the same data-set. Nevertheless, in practice, for a finite number of samples from each data-set, the FID score is different from zero. Indeed, using 100 samples from each of the training and test data-sets, we calculated the FID score to be 176.916. For 1000 samples, the FID score reduces to 49.340. From this simple experiment, it is clear that the FID is very sensitive to the number of samples. In our experiments, we used 50000 samples from fake and real images. \\n\\n2-The result in Corollary 4.2.1. is indeed a statement about the order of $\\\\eta$, and therefore the exact value of $\\\\eta$ cannot be derived from this Corollary. Furthermore, Corollary 4.2.1 is a probabilistic bound, and depending on the choice of the confidence level, different bounds for $\\\\eta$ can be derived. Nevertheless, Corollary 4.2.1 implies that when $R$ is sufficiently large, or $\\\\eta$ is sufficiently small, the feasibility constraint on the empirical measure can be satisfied. This can be used as a guideline for choosing the step-size in numerical experiments. 
In our simulations, a large value for $R$ is used, which guarantees the feasibility of the empirical distribution for $\\\\eta=10$. Thanks to the result of Thm. 4.1, consistency can still be guaranteed for a large $R$ if the number of training samples $n$ is also large (which is the case for CIFAR-10 and MNIST). In practice and for the general case, a backtracking line search is needed to adequately compute the step-size that satisfies the distribution ball constraint on the empirical measures.\"}",
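To make the sample-size sensitivity of FID discussed in the response above concrete, here is a minimal, self-contained sketch. It is not the TTUR reference implementation; the Gaussian features are a toy stand-in for the 2048-dimensional Inception pool3 activations.

```python
import numpy as np
from scipy import linalg

def fid(feat_a, feat_b):
    """Frechet distance between Gaussians fitted to two feature matrices."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return ((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b - 2.0 * covmean)

# Two disjoint subsets of the *same* distribution: the "true" FID is zero,
# yet the estimate is strictly positive and shrinks only as n grows.
rng = np.random.default_rng(0)
pool = rng.normal(size=(20000, 64))  # toy stand-in for pool3 features
for n in (100, 1000, 10000):
    print(n, round(float(fid(pool[:n], pool[-n:])), 3))
```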
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper aims to improve the kernel selection issue of the MMD-based generative models. The author formulates the kernels via inverse Fourier transform and the goal is to learn the optimal N finite random Fourier features (RFF). The RFF samples are optimized by the proposed kernel alignment loss where the positive and negative labels are defined as samples coming from real and negative data distributions, respectively. Some theoretical analysis regarding the consistency of the learned kernel is provided. Experiment results on the IS score and FID on CIFAR-10 show improvement of the proposed methods over MMD-GAN baselines, while the results are not comparable to the original MMD-GAN due to unknown results.\\n\\nWhile motivated from the mean-field theory, the algorithm 1 is essentially doing stochastic gradient on the RFF samples with fixed learning rate. Learning spectral distribution of kernel via optimising RFF samples is also not entirely new, as [0] presented in the Appendix C4 of [1]. they show the difference between two different realization of kernel learning.\", \"i_would_love_to_increase_my_score_if_the_author_could_address_the_following_comments\": \"(1) Can you explain why the IS and FID results of MMD-GAN presented in Table 1 is inconsistent (i.e. considerably worse) with other papers [1,2,3]?\\n(2) In experiment setting, the learning rate eta of learning RFF samples is fixed to 10. Does this guarantee the learned spectral distribution lying in the constraint set P as specify in Eq (8)?\\n\\n[1] Implicit kernel learning, AISTATS 2019\\n[2] DEMYSTIFYING MMD GANS, ICLR 2018\\n[3] MMD GAN: Towards Deeper Understanding of Moment Matching Network, NIPS 2017\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper addresses the problem of kernel learning in MMD GAN using particle stochastic gradient descent to solve an approximation of the intractable distributional optimization problem for random features. The paper provides theoretical guarantees for the consistency of approximations, although proofs are deferred to Appendix.\\nIt seems to be a good result theoretically thanks to the consistency guarantees for the particle SGD approximation of the optimization problem. However, its practical efficacy is not completely clear.\\n1. There is no discussion on how the method fares in terms of time/space complexity and if it is scalable to higher-dimensional datasets or larger batch sizes. How many steps T for good results are needed? How much time does it take to learn the model compared to the Implicit Kernel Learning or original MMD GAN?\\n2. For a more detailed analysis of performance, it would be helpful to see the benefits of the kernel learned with the proposed method on synthetic data and its performance on supervised learning tasks compared with other kernel learning methods on supervised tasks.\", \"some_minor_remarks\": \"1. Scaling parameter alpha has become parameter beta on page ix.\\n2. Some citations and equation references should be fixed.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to learn a kernel for training MMD-GAN by optimizing over the probability distribution that defines the kernel by means of random features. This is unlike the usual setting of MMD-GAN where the kernel is parametrized by composing a fixed top-kernel with a discriminator network that is optimized during training. The main motivation for this approach is to avoid having to 'manually' fix some parameters of the top-level kernel like the bandwidth of the RBF kernel. They provide an algorithm to achieve such optimization in probability space along with some consistency results and perform experiments on MNIST and Cifar10 to demonstrate empirically the advantage of such an approach over those that fix the top-level kernel.\", \"theory\": \"Theorem 4.1 provides a convergence result of an oracle finite-sample estimator: that is the one obtained by exactly solving the optimization problem in 19b. In that case, they show the consistency of the proposed estimator. The result is somehow expected but the proof relies on nice duality results for measures and is very technical.\", \"the_clarity_of_the_proof_could_be_improved\": \"- Currently, the structure of the main proof mixes direct lemma (lemma B.7) with less obvious ones (lemma B.6). Also, some concepts are introduced in the main proof but not necessary for its understanding: The notion of Orlicz norm is in introduced in Definition B.3 on the fly to state lemma B.6, but only equations 50 and 51 are used in this lemma which does not make use of the notion of Orlicz norm at all.\\n\\n\\tTheorem 4.1 doesn't say anything about the consistency of the algorithm itself. To partly address this, the authors show in theorem 4.2 that as the number of particles grows the empirical process converges to a McKean Vlasov PDE (equation 22). This means that the proposed algorithm is approximating some gradient flow in metric space (25). \\n- However, this gradient flow is a non-convex optimization problem and there is no guarantee that a global solution is reached. Recent work provides cases when global convergence occurs [Chizat2018] but it is not the case in general. Some further clarification about the connection between Thm 4.1 and Thm 4.2 would be therefore useful.\\n\\t\\nThm 4.2 is also curious in the sense that the process defined by equation 16, which is noisy since it relies on one sample from the data, would converge towards (22) which is a Mc-valsov equation with a drift only (no diffusion or other noise). What happened to the noise coming from sampling from P_v and P_w in equation 16? Wouldn't there be some sort of diffusion term as in [Hu2018]?\\n\\t\\nMore generally it would be nice to have a discussion of the assumptions and results in the paper as they seem to rely on methods that people in the machine learning community are not totally familiar with.\", \"experiments\": [\"The experiments are not convincing for several reasons:\", \"The comparison with the other methods is somehow unfair since the bandwidth is manually tuned for the competing methods. It is easy to adaptively learn the bandwidth as well: in this case, it will be just an additional parameter of a discriminator network. 
This was done in [Arbel2018], where a single Gaussian kernel is used and a regularization of the critic allows the bandwidth to be learned without manual tuning. Does the proposed method offer an additional advantage compared to those?\", \"In practice and for a scaling parameter alpha=1, isn't the algorithm strictly equivalent to considering an MMD-GAN with a dot product kernel and a discriminator given by the feature \\\\phi(x,\\\\zeta)?\", \"MNIST and CIFAR-10 are rather simple; what would happen on more complicated datasets (CelebA or ImageNet)?\", \"I also think there is an ablation that is missing: if the auto-encoder also needs to be optimized, then does it also help to optimize over the particles as well, or is optimizing the auto-encoder discriminator enough to achieve similar performance? In other words, does optimizing the auto-encoder compensate for the need to learn the distribution mu? Of course, this would depend on how the auto-encoder is parametrized, but I don't see why it wouldn't in many cases.\", \"Overall, I'm not convinced that the proposed approach would lead to any substantial improvement for MMD-GAN in practice, and the experiments are not really convincing as they are now. However, I find the theoretical results interesting, and they might be used to better analyse the dynamics of GANs. But as the paper is currently framed, it is hard to put these theoretical results in perspective.\"]}"
]
} |
Hkxbz1HKvr | Learning Key Steps to Attack Deep Reinforcement Learning Agents | [
"Chien-Min Yu",
"Hsuan-Tien Lin"
] | Deep reinforcement learning agents are known to be vulnerable to adversarial attacks. In particular, recent studies have shown that attacking a few key steps is effective for decreasing the agent's cumulative reward. However, all existing attacking methods find those key steps with human-designed heuristics, and it is not clear how more effective key steps can be identified. This paper introduces a novel reinforcement learning framework that learns more effective key steps through interacting with the agent. The proposed framework does not require any human heuristics nor knowledge, and can be flexibly coupled with any white-box or black-box adversarial attack scenarios. Experiments on benchmark Atari games across different scenarios demonstrate that the proposed framework is superior to existing methods for identifying more effective key steps. | [
"deep reinforcement learning",
"adversarial attacks"
] | Reject | https://openreview.net/pdf?id=Hkxbz1HKvr | https://openreview.net/forum?id=Hkxbz1HKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4_xxSSPLPE",
"HygBOdmsjr",
"Syet6D7ijH",
"S1lULw7jir",
"BJxeMwmioH",
"SJehO3OJ9r",
"SJgW8qeTYB",
"HkeJyj8wtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726704,
1573759084770,
1573758913032,
1573758797772,
1573758728144,
1571945588166,
1571781192772,
1571412694970
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1568/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1568/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1568/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1568/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1568/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1568/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1568/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper considers adversarial attacks in deep reinforcement learning, and specifically focuses on the problem of identifying key steps to attack. The paper poses learning these key steps as an RL problem with a cost for the attacker choosing to attack.\\n\\nThe reviewers agreed that this was an interesting problem setup, and the ability to learn these attacks without heuristics is promising. The main concern, which was felt was not adequately addressed in the rebuttals, was that the results need to be more than just competitive with heuristic approaches.\\n\\nThe fact that the attack ratio cannot be reliably changed, even with varying $\\\\lambda$ still presents a major hurdle in the evaluation of the proposed method.\\n\\nFor the aforementioned reasons, I recommend rejecting this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank Reviewer #3 for the valuable feedback. We have run more experiments and updated a new version. Responses to your comments are listed below:\\n\\n**Comment**\\nOne major limitation of the work is that the attack rate is not readily modifiable. [. . .]\\nMore importantly, there is no clear relationship between the value of lambda and the attack rate. With the same lambda, depending on the game, the opponent settles on attack rates varying between 10% (on pong) and 70% (on Riverraid).\\n**Response**\\nWe agree that our framework does not have direct control on the attack rate, since we replace the hard constraint with a soft penalty parameter. However, we do observe some similar trends in the new experiment results:\\n- In the same environment, training with smaller $\\\\lambda$ usually would find attack policies with higher attack rates.\\n- Across different environments, setting $\\\\lambda$ around the same order of magnitude as *potential reward loss* (see Section 4.3) tends to produce stable results.\\n\\nOur results suggest that we should choose $\\\\lambda$ carefully in order to learn effective key steps. We will consider the problem of controlling attack rates as a direction for future improvements.\\n\\n**Comment**\\nThe results themselves show only marginal improvement over the baselines, and in the absence of clear error bars / confidence intervals, it is difficult to state the significance of the method. In the particular case of Space Invaders (figure 5b), the proposed method seems tied to the best heuristic (at attack-rate 70%), but the said heuristic reaches the same performance level for much lower attack-rate (as low as 40-50%), [. . .]\\n**Response**\\nThank you for the suggestion. We have tested more $\\\\lambda$ and provided the confidence bounds. By setting an appropriate penalty parameter $\\\\lambda$, our method is also able to find attack policy with low attack rate (40-50%) in Space Invaders (see Figure 5b). Although the amount of improvement depends on the environment, we find that\\n- Key steps are learnable.\\n- Our approach shows comparable performance to competitors across different attack ratios consistently.\\n- It is possible to learn more effective key steps using our approach.\\n\\n\\n**Comment**\\nIn equation (2), you present a target objective function designed using Lagrange relaxation. However, the RL algorithm uses decay (\\\\gamma = 0.99), which means that the resulting function that is effectively minimized is different. Could you clarify the impact of the decay on the lagrange-relaxed objective function?\\n**Response**\\nThank you for pointing this out. We agree that the decay changes the minimization problem a little bit. The added decay has similar effects as in the original RL setting. Since the uncertainty of the future may not be fully captured by the current state, the decay could make the attacker emphasize short-term reward loss (of the target agent) more than delayed reward loss. We would consider investigating the decay more in future work.\\n\\n**Comment**\\nCould you clarify a bit the section 4.3 on black-box attacks? [...] Is the attack robust to differences in the algorithm?\\n**Response**\\nThank you for the question. We take the substitute agent from pretrained DQN agent in Dopamine. They are trained with the same architecture as the target agent but with different random seeds. 
We have updated the paper to clarify these details.\\n\\nI'm a bit unsure about what you mean by \\u201cIs the attack robust to differences in the algorithm\\u201d. If you mean the robustness of black-box attacks in different scenarios, this problem has been studied in [1]. Our setting is the same as \\\"Transferability Across Policies\\\" in their work (Section 5.3.1). While not tested, we believe that our framework is applicable in other black-box settings, since we do not make any assumptions about the adversarial attack algorithm.\\n\\n[1] Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. 2017.\\n\\n**Comment**\\nFinally, I'm a bit surprised by the choice of DQN as the base algorithm, especially since the chosen framework (Dopamine) offers significantly stronger algorithms (Rainbow or IQN). [. . .] Did you try to apply this method on more robust policies?\\n**Response**\\nThe main reason that we chose DQN is computational efficiency. Another reason is that we did not want to complicate the perturbation generation procedure. Given that the network structure of Rainbow/IQN is different from traditional classifiers, applying adversarial attacks on them requires additional considerations.\\n\\nWe haven't tried applying it to other agents. While evaluating the robustness of stronger agents is indeed an interesting problem, we focus more on the learnability of key steps as an initial work. We'll consider this suggestion in future work.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank Reviewer #1 for the valuable feedback. We respond to your questions below:\\n\\n**Comment**\\n1) Results from Figure 5 and Figure 6 seem to disagree with what the authors claim in the paper. In many (the majority) of the environments, the proposed algorithm has only trivial improvement and even worse performance under the same attack rate.\\n**Response**\\nWe have performed more experiments, and revised the overly strong claim. Although the amount of improvement depends on the environment, we find that\\n- Key steps are learnable.\\n- Our approach shows comparable performance to competitors across different attack ratios consistently.\\n- It is possible to learn more effective key steps using our approach.\\n\\n\\n**Comment**\\n2) It is not very convincing when only one result sample is plotted in Figure 5 and Figure 6.\\nI think it is necessary to show the performance of the proposed algorithm under different attack rate.\\nA wide range of candidate penalty parameter lambda should be tested, so that a curve can be fitted for the proposed algorithm similar to the baselines (similar to the one shown in Table 1, but with much more test values).\\n**Response**\\nThanks you for the suggestion. We have tested more $\\\\lambda$ and updated the paper. Figure 5 and Figure 6 plot the mean score with one standard deviation across different attack rates.\\n\\n\\n**Comment**\\n3) Related to 2), it seems the Lagrange relaxation makes it hard to control the attack rate in the proposed algorithms. How sensitive it is to control the attack rate?\\n**Response**\\nSince we replace the hard constraint with a soft penalty parameter, how to control the attack rate is indeed a tricky problem. Empirically, we observe that training with smaller $\\\\lambda$ tends to produce attack policies with higher attack rates (see Table 1 and Table 3). However, we do not have direct control of the attack rate in the current framework. We will consider this problem as a direction for future improvements.\\n\\n\\n**Comment**\\n3) Can the authors elaborate on why the algorithms is not too sensitive to the value of penalty in section 4.5? [. . .]\\n**Response**\\nOriginally, by \\\"not too sensitive to $\\\\lambda$\\\" we mean that our framework is able to learn effective key steps for a number of different $\\\\lambda$. To avoid confusion, we removed this statement in the updated version. Also, based on the new experiments, we add more analysis on how to choose an appropriate $\\\\lambda$. In particular, we observe that setting $\\\\lambda$ around the same order of magnitude as *potential reward loss* (see Section 4.3) tends to produce stable results.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank Reviewer #2 for the valuable suggestions. We have updated a version based on your feedback. Responses to your comments are listed below:\\n\\n**Comment**\\nThe setting addressed in this work, where the attacker only learns whether/when to attack or not is a greatly simplified version of the full problem. [. . .] but additionally I wonder if the type and difficulty of attacks would vary as the target agent trains.\\n**Response**\\nWe agree that learning the type of perturbation is an interesting direction and may lead to stronger attacks. Our work focuses more on the learnability of key steps, which is unexplored in previous studies. In order to have fair comparisons against previous methods, we fix the perturbation generation part as a subroutine. We would consider further generalizations in the future.\\n\\n\\n**Comment**\\nIf you vary the lambda parameter (as in Table 1, but for all games) you should be able to get similar line plots in Figure 5&6 for the RL approach, [. . .] The results, currently, do not appear very significant because (1) the gap between the RL solution and the heuristics is very small and (2) these appear to be single runs without standard deviations displayed. Can you argue for why these results are in fact significant (statistically or otherwise)?\\n**Response**\\nThank you for the suggestion. We have run more experiments with different $\\\\lambda$, and updated the results in Section 4.2 and 4.3. Figure 5 and Figure 6 plot the mean score with one standard deviation.\\n\\nOut method do not incorporate human knowledge, but is able to perform comparably to heuristics across different attack ratios consistently, and achieves superior performances in some environments. For example, we observe significant improvements in Pong. These results demonstrate that the attacker trained by our framework learns effective key steps and has the potential to outperform human-designed heuristics.\\n\\n\\n**Comment**\\nAnother question raised by the results is how performance of the attacking policy varies with training. The authors point out that the training is quite small compared to the target policy, but is that because it has already found the best solution it can in that time? How would it improve with more training?\\n**Response**\\nWe choose a rather small training budget due to limited computation resources. Empirically, we observe that if $\\\\lambda$ is set in an appropriate range, the attack policy often converges in 10M training steps. We have provided more details during training in Appendix (Figure 8). We do not exclude the possibility that the attack policy might improve if given more training steps.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank all reviewers for the valuable and constructive feedback. We have updated the paper based on the reviewers' comments and suggestions. The updates are summarized as follows:\\n\\n- We ran more experiments. We train the attack policy using five different values of $\\\\lambda$ across all environments with both white-box and black-box attacks.\\n- We rewrote the experiment part. Section 4.2 compares the performance between our method and competitors. Section 4.3 investigates the effect of penalty parameter $\\\\lambda$.\\n- Based on the new results, we revised some overly strong claims and provided more analyses.\\n\\nThe important updates are highlighted in red. Any further suggestion or feedback is welcomed.\\n\\n**Updates at Nov. 15 12:58 (Pacific Time)**\\nWe have uploaded the final version (without red highlights).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to learn the \\u2018key-steps\\u2019 at which to to apply an adversarial attack on a reinforcement learning agent. The framing of this problem is a Lagrangian relaxation of a constrained minimization problem which takes the form of an RL problem itself, where the attacking agent\\u2019s reward is the negative reward of the target agents plus a penalty (lambda, hyper-parameter) for choosing to attack. The attacking agent\\u2019s action space is binary, attack or no attack.\\n\\nThe RL approach is compared with a random attack policy and two heuristic methods for attacking agents in games on the Atari benchmark.\\n\\nThe setting addressed in this work, where the attacker only learns whether/when to attack or not is a greatly simplified version of the full problem. It is (in my opinion, but feel free to correct me) likely that the type of perturbation also being learned has even greater potential, but is (obviously) much harder.\\n\\nAdditionally, although the authors mention this as future work, I think the co-training setting is particularly interesting. As the authors suggest, this could lead to more robust target agents, but additionally I wonder if the type and difficulty of attacks would vary as the target agent trains.\\n\\nBesides the (picky) complaints, I think the problem formulation is quite reasonable. Again, I find the larger problem extremely interesting, but perhaps this is far too intractable right now. The formulation is straightforward, so while I recognize it as a contribution, it is not a particularly large one.\\n\\nOn the other hand, the experimental results appear somewhat weak to me. \\n\\nIf you vary the lambda parameter (as in Table 1, but for all games) you should be able to get similar line plots in Figure 5&6 for the RL approach, which would give a much better comparison for the trade-offs as you get for the heuristic methods. I think this comparison would be very interesting and strengthen the existing results in those figures.\\n\\nThe results, currently, do not appear very significant because (1) the gap between the RL solution and the heuristics is very small and (2) these *appear* to be single runs without standard deviations displayed. Can you argue for why these results *are* in fact significant (statistically or otherwise)?\\n\\nAnother question raised by the results is how performance of the attacking policy varies with training. The authors point out that the training is quite small compared to the target policy, but is that because it has already found the best solution it can in that time? How would it improve with more training?\\n\\nSmall note on Figure 7, I think the point for these would be better made by normalizing the respective histograms.\", \"update\": \"Thank you for your responses and updating with new results. I think these provide a much better picture of the performance of the RL-based system. Although I am revising my score upward, I still think this is generally a rejection. Obviously I still think the full problem is interesting, but even the key step identification problem would be publishable if the performance was improved or further analysis helped me understand why RL is not doing better (since the heuristics are after all just heuristics). 
Good luck on future versions of this work.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Learning Key Steps to Attack Deep Reinforcement Learning Agents\\n\\nThis paper proposes an extension of existing discrete action image space adversarial attack algorithms.\\nInstead of choosing the steps by heuristic, the authors propose to choose the key steps by augmenting the reward function with a penalty to decrease the ratio of attacks.\\n\\nI tend to vote rejection for this paper, given that the proposed algorithms seem incremental compared to the existing algorithms, and the experiments seem not sufficient enough to support the core claim proposed in the paper.\", \"pros\": [\"The paper is well written, with sufficient background and related work section for the paper to be self-contained.\", \"The proposed framework is an interesting and practical framework for attacking RL agents.\"], \"cons\": [\"The experiment section is insufficient.\"], \"more_specifically\": \"1) Results from Figure 5 and Figure 6 seem to disagree with what the authors claim in the paper. In many (the majority) of the environments, the proposed algorithm has only trivial improvement and even worse performance under the same attack rate.\\n\\n2) It is not very convincing when only one result sample is plotted in Figure 5 and Figure 6.\\nI think it is necessary to show the performance of the proposed algorithm under different attack rate.\\nA wide range of candidate penalty parameter lambda should be tested, so that a curve can be fitted for the proposed algorithm similar to the baselines (similar to the one shown in Table 1, but with much more test values).\\n\\n3) Related to 2), it seems the Lagrange relaxation makes it hard to control the attack rate in the proposed algorithms. How sensitive it is to control the attack rate?\\n\\n3) Can the authors elaborate on why the algorithms is not too sensitive to the value of penalty in section 4.5?\\nTable 1, where the performance is almost the same for different penalty parameter, does not necessarily show that the algorithms is not too sensitive to the choice of the penalty parameter.\\nAs mentioned by the author, -21 is the minimum reward (or random reward) an agent can get from Pong.\\n\\n\\nIn general, given the current status of the paper, where there is a lot of room for improvement of experiment section, I will vote for a rejection.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"One major limitation of the work is that the attack rate is not readily modifiable. It uses a penalty term in the reward function with a tunable weight $\\\\lambda$, but changing this weight seemingly requires retraining the opponent from scratch, which is unpractical. By constrast, note that the previous attack methods allow to change the attack rate at will.\\nMore importantly, there is no clear relationship between the value of lambda and the attack rate. With the same lambda, depending on the game, the opponent settles on attack rates varying between 10% (on pong) and 70% (on Riverraid).\\n\\n\\nThe results themselves show only marginal improvement over the baselines, and in the absence of clear error bars / confidence intervals, it is difficult to state the significance of the method. In the particular case of Space Invaders (figure 5b), the proposed method seems tied to the best heuristic (at attack-rate 70%), but the said heuristic reaches the same performance level for much lower attack-rate (as low as 40-50%), implying that the presented method did not do a good job at minimizing the number of attacks.\\nOverall, a rigorous way of comparing the methods need to be devised. Maybe something like an AUC with proper confidence bounds could do the trick?\\n\\n\\nIn equation (2), you present a target objective function designed using Lagrange relaxation. However, the RL algorithm uses decay (\\\\gamma = 0.99), which means that the resulting function that is effectively minimized is different. Could you clarify the impact of the decay on the lagrange-relaxed objective function?\\n\\n\\nCould you clarify a bit the section 4.3 on black-box attacks? It seems that you are using a substitute model attack, but it's not clear to me how the substitute is obtained. Is it the same model? How is it trained? Is the attack robust to differences in the algorithm?\\n\\nFinally, I'm a bit surprised by the choice of DQN as the base algorithm, especially since the chosen framework (Dopamine), offers significantly stronger algorithms (Rainbow or IQN). DQN doesn't even reach perfect score on Pong, which means that the raw policy itself is a bit brittle, since it looses 5 points. Did you try to apply this method on more robust policies?\"}"
]
} |
rkllGyBFPH | Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks | [
"Yu Bai",
"Jason D. Lee"
] | Recent theoretical work has established connections between over-parametrized neural networks and linearized models governed by the Neural Tangent Kernels (NTKs). NTK theory leads to concrete convergence and generalization results, yet the empirical performance of neural networks is observed to exceed that of their linearized models, suggesting the insufficiency of this theory.
Towards closing this gap, we investigate the training of over-parametrized neural networks that are beyond the NTK regime yet still governed by the Taylor expansion of the network. We bring forward the idea of randomizing the neural networks, which allows them to escape their NTK and couple with quadratic models. We show that the optimization landscape of randomized two-layer networks is nice and amenable to escaping-saddle algorithms. We prove concrete generalization and expressivity results on these randomized networks, which lead to sample complexity bounds (of learning certain simple functions) that match the NTK and can in addition be better by a dimension factor when mild distributional assumptions are present. We demonstrate that our randomization technique can be generalized systematically beyond the quadratic case, by using it to find networks that are coupled with higher-order terms in their Taylor series.
| [
"Neural Tangent Kernels",
"over-parametrized neural networks",
"deep learning theory"
] | Accept (Poster) | https://openreview.net/pdf?id=rkllGyBFPH | https://openreview.net/forum?id=rkllGyBFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"guh36edbgC",
"Ap1QRvBcr4",
"S1gHvMjMjS",
"rJl42lFfor",
"B1eV8eFMoH",
"HJxsMltzoB",
"SylJRJYMsB",
"B1eYDESfcH",
"SyePj4XCtH",
"Hylx2510tH"
],
"note_type": [
"official_comment",
"decision",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1581746534942,
1576798726675,
1573200477022,
1573191851691,
1573191756044,
1573191699309,
1573191623122,
1572127841017,
1571857567035,
1571842727803
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"~Greg_Yang1"
],
[
"ICLR.cc/2020/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1567/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1567/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1567/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Differences between Taylorized models (higher-order NTK) and NTH\", \"comment\": \"Hi Greg,\\n\\nThank you for the very insightful question! It is indeed useful to think about our higher-order NTK and the Neural Tangent Hierarchy (NTH) together, as both regimes refine the NTK theory. \\n\\nAs far as we can tell now, there are two main differences between NTH and our higher-order NTK:\\n-- NTH provides a *correction* to the NTK whereas higher-order NTK can potentially \\u201cescape\\u201d the NTK regime. More concretely, in NTH, each individual neuron only moves $O(1/\\\\sqrt{m})$ (relative to its initial scale) similar as in the NTK regime. In contrast, when training is coupled with our quadratic model (e.g. our Theorem 4), each individual neuron moves $O(m^{-1/4})$ relatively, i.e. they move in a much larger ball than the NTK. This can have further consequences such as allowing provable convergence under a larger learning rate. \\n\\n-- NTH and our higher-order NTK refines the original NTK from different perspectives, and does not \\u201ccover\\u201d each other: 1) If we train a quadratic model and compute its NTH, then all the higher-order NTH terms would be non-vanishing, so that we need the actual infinite NTH hierarchy to accurately describe the training of a quadratic model; 2) Conversely, if we look at the NTH dynamics truncated at the first level higher than the NTK, then it does not correspond to training the Taylorized model (higher-order NTK) of any order. \\n\\nDespite these differences, one common thing about the NTH and our higher-order NTK is that they both provide finer approximations to the full training trajectory when training is actually in the NTK regime.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies the training of over-parameterized two-layer neural networks by considering high-order Taylor approximation, and randomizing the network to remove the first order term in the network\\u2019s Taylor expansion. This enables the neural network training go beyond the recently so-called neural tangent kernel (NTK) regime. The authors also established the optimization landscape, generalization error and expressive power results under the proposed analysis framework. They showed that when learning polynomials, the proposed randomized networks with quadratic Taylor approximation outperform standard NTK by a factor of the input dimension. This is a very nice work, and provides a new perspective on NTK and beyond. All reviewers are in support of accepting this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comparison with Neural Tangent Hierarchy\", \"comment\": \"Dear Authors,\\n\\nThanks for your interesting paper. How do the higher-order NTKs in your case compare with the neural tangent hierarchy [1]?\\n\\n[1] https://arxiv.org/abs/1909.08156\"}",
"{\"title\": \"Revision uploaded\", \"comment\": \"We have uploaded a revision of our paper to add in some updates and address the reviews. The main changes in this revision are below:\\n\\n\\u2014 Restated expressivity result with more generality\\nWe have updated our expressivity result (Theorem 7) to cover the case where the ground truth is a *sum* of one-directional polynomials, rather than a single polynomial as we previously had. Such an extension was already (implicitly) used in Section 5.2 in our original version, and here we\\u2019d like to explicitly state it in this form for more clarity.\\n\\n\\u2014 Additional example on comparing quadratic model and NTK: low-rank matrix sensing\\nWe have added an additional example of low-rank matrix sensing in our comparison between sample complexities in Section 5.2. Using our randomized quadratic-like net, the sample complexity upper bound for achieving $\\\\epsilon$ test loss in this problem is $\\\\widetilde{O}(d)$ better than that of linear NTK.\\n\\n\\u2014 Additional results and reorganization for higher-order NTKs\\nWe have added generalization and expressivity results for k-th order NTKs for all $k\\\\ge 2$ as a systematic extension of our results in the quadratic case. These results can be found in Appendix D.2. Due to space constraint, the original randomized coupling argument between neural nets and $k$-th order NTKs is now migrated to Appendix D.1. \\n\\nApart from the above changes, we have also added some additional citations and fixed certain typos.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your thoughtful and detailed comments. As you have raised, we also think that it\\u2019s important to understand how to provably escape the NTK regime for training wide neural nets, and we are glad that you liked our attempt.\", \"we_address_the_specific_concerns_below\": \"\\u2014 Problem settings\\nAs pointed out, our setting is indeed different from (and slightly more non-standard than) the vanilla/NTK setting for learning wide two-layer neural nets. We want to highlight though that most of these settings are purely for technical reasons in order to analyze the quadratic model, which we clarify below:\\nRe (1): The activation needs to be at least continuously twice differentiable in order for the second-order expansion to exist and for the loss to be $C^2$;\\nRe (3): Noisy SGD is required to efficiently escape saddles; \\nRe (4): The high-order regularization is also needed so that optimization never leaves a big $\\\\|\\\\cdot\\\\|_{2,4}$ ball, so the coupling result holds.\\nTweaks like (3,4) are also present in e.g. Allen-Zhu et al. 2018 in order to provably learn a three-layer network. \\n\\nRe (2): This randomization step is the only key modification we\\u2019ve done (adding a $\\\\{-1, 1\\\\}$ multiplicative noise), which is the crucial step allowing us to escape the NTK and couple with a quadratic model. \\n* On the one hand, this randomization is arguably quite similar to dropout, which essentially adds $\\\\{0, 1\\\\}$ multiplicative noise. We\\u2019d like to emphasize though that our randomization technique can be extended systematically\\u2014-beyond such dropout-like noise\\u2014-to obtain k-th order NTKs for all k (cf. Section 6). \\n* On the other hand, we do think it is important to figure out how to train wide neural nets in quadratic / higher-order regimes without randomization, which will be an interesting direction for future work.\\n\\n\\u2014 Comparison with (Cao & Gu, 2019ab) on generalization bounds & (Hu et al. 2019) on regularizer.\\nWe have cited these papers and commented on the comparison in our revision. Here we comment more on the comparison on the generalization bounds given in Section 5.2.\\n\\nTo achieve a low generalization error, (Cao & Gu 2019ab) require that there exists a ground truth function with low RKHS norm in the random feature / NTK space that achieves low regression loss / large classification margin. In this same setting, our generalization bound (through learning a quadratic-like model) matches the above, but can in addition be better a dimension factor under mild distributional assumptions such as isotropic features. This happens since the quadratic model expresses high-degree polynomials in a more \\u201ccompact\\u201d fashion than the linearized ones, which allows us to bound the generalization error through the *operator norm* of a certain matrix feature (cf. Theorem 6); in comparison, generalization bound obtained from linearized models would depend on the *Frobenius norm* of a similar feature matrix.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your thoughtful and detailed comments, and we are glad that you appreciate our motivation and liked our presentation. We address the specific concerns below:\\n\\n\\u2014 Non-standard setting\\nOur setting does contain a few tweaks that make it different from standard vanilla/NTK-regime for training a wide two-layer network. However, most of the tweaks (specifically the multiplicative randomization and the regularizer) are to help escape the NTK regime. Without such randomization, it is unclear/still open how to train wide neural nets beyond the NTK regime with provable guarantees, which we think is indeed an important future direction.\\n\\n\\u2014 Definition of OPT\\nOPT is defined throughout the statements of Lemma 1 - Theorem 4 as the optimal value of $L^Q(W)$ inside a 2, 4-norm ball. \\n\\nWe have clarified our choice of OPT in the concrete examples later (e.g. in the sample complexity comparison in Section 5.2) in our revision.\\n\\n\\u2014 Expressivity through $L^Q$, and conditions for OPT\\nDue to an aggregation effect, $f^Q$ is able to express $O(1)$ functions even though each individual weight is only $O(m^{-1/4})$. We illustrated this in Section 3 in the comparison between $f^L$ and $f^Q$ [More precisely, when $\\\\|w_r\\\\|_2 = O(m^{-1/4})$, \\u2026]\\n\\nWe have also updated our Theorem 7 in our revision to clarify conditions (as well as specify our choice of OPT) under which $L^Q(W_\\\\star)\\\\le {\\\\rm OPT}$ is satisfied for a fairly broad class of functions (sum of \\u201cone-directional\\u201d polynomials.)\\n\\n\\u2014 Comparison between quadratic model and NTK\\nOur expressivity result (Theorem 7) works so long as there exists an underlying \\u201csum of (one-directional) polynomial\\u201d type function that achieves low loss, and thus can be fairly general. This includes a single polynomial, or functions such as XOR that can be written as the sum of O(1) polynomials as is done in Section 5.2. \\n\\nTo further demonstrate the generality of our sample complexity results, we have added an additional example of low-rank matrix sensing in our revision (Section 5.2), in which we show that the sample complexity upper bound for the quadratic model is also O(d) lower than that of the NTK.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the thoughtful feedback, and we are glad that you liked our idea and presentation. We address the concerns below:\\n\\n\\u2014 Assumption on activation function. \\nThe requirement on $\\\\sigma$ in this paper comes from two sources:\\n(1) Smoothness, so that the loss is twice *continuously* differentiable. \\n(2) $\\\\sigma\\u2019\\u2019$ is a sufficiently expressive nonlinearity, e.g. such that we can prove concrete results for $\\\\sigma\\u2019\\u2019$ to express functions, e.g. polynomials (Theorem 7).\\nA quadratic ReLU satisfies (2) ($\\\\sigma\\u2019\\u2019$ is an indicator, which is at least as expressive as a ReLU) and *almost* (1), in that the loss will be twice differentiable (in a proper sense) but the second derivative will not be continuous, where in contrast a cubic ReLU has a continuous second derivative. We prefer to have this continuity, as existing convergence results on escaping-saddle algorithms require such continuity/Lipschitzness of the Hessian (see e.g. Jin et al. 2019). Apart from such an algorithmic concern, all our landscape results in Section 4 will hold if $\\\\sigma$ is the quadratic ReLU (and we formally take $\\\\sigma\\u2019\\u2019(t) = 1\\\\{t \\\\ge 0\\\\}$.)\\n\\n\\u2014 Computational cost of optimizing randomized network. \\nAs mentioned below Theorem 4, we use stochastic gradient descent to efficiently optimize the loss: at iteration $k$, we sample a fresh $\\\\Sigma_k$ and perform a gradient step on $\\\\widetilde{L}_\\\\lambda(W\\\\Sigma_k)$. As $L_\\\\lambda(W)=E_\\\\Sigma[\\\\widetilde{L}_\\\\lambda(W\\\\Sigma)]$, the above procedure gives an unbiased estimate of $\\\\nabla L_\\\\lambda(W)$ which only requires polynomial time to compute. Further, with such additional stochasticity, the algorithm (SGD) will still converge to second-order stationary points in polynomial time, by existing results on escaping saddle points via SGD (Theorem 16, Jin et al. 2019). Such an algorithm is in fact very similar to adding dropout noise in practice, which uses i.i.d. $\\\\{0,1\\\\}$ multiplicative noise (instead of our $\\\\{-1, 1\\\\}$ noise) over the neurons at each iteration.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents an approach for going beyond NTK regime, namely the linear Taylor approximation for network function. By employing the idea of randomization this paper manages to reduce the magnitude of the linear part while the quadratic part is unaffected. This technique enables further analysis of both the optimization and generalization based on the nice property of quadratic approximation.\\n\\nI believe this paper should be accepted because of its novel approach to go beyond the linear part, which I view as a main contribution. Also, this paper is well written and presents its optimization result and proof in a clear manner. Although it is an innovation in the theoretical perspective, I still want to raise two questions about the object this paper trying to analyze:\\n\\n1. The activation function is designed so that it has really nice property: it has second-order derivative which is lipshitz. Actually I believe Assumption A is first motivated by cubic ReLU. Why is cubic ReLU so favorable in this paper? Is it possible to use quadratic ReLU in the proof? Also I wonder what is the key property of activation function which is allowed here.\\n\\n2. The network model considered here, is modified to a randomized neural network. So maybe this paper just circumvents the difficulty in going beyond NTK regime? I have this concern because in reality optimizing this network model seems intractable. To evaluate the loss $L(\\\\mathbf{W})$ or $L_{\\\\lambda}(\\\\mathbf{W})$ we take expectation over $\\\\Sigma$; when doing gradient descent, apparently the gradient also takes exponential time to compute.\\n\\nOverall, I believe this paper has high quality and I enjoyed reading it. However, I do expect response from authors which could address my concerns raised above. This can help me achieve a better understanding and a more precise evaluation on this paper.\\n\\n***\\n\\nI have read the author's response and decide to remain the rating as weak accept.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper investigates higher order Taylor expansion of NTK (neural tangent kernel) beyond the linear term. The motivation of this study is to investigate the significant performance gap between NTK and deep neural network learning. The conventional NTK analysis only deals with the first order term of the Taylor expansion of the nonlinear activation, but this paper deals with higher order terms. In particular, it thoroughly investigates the second order expansion. It is theoretically shown that the expected risk can be approximated by the quadratic approximation, and show that the optimal solution under a quadratic approximation can achieve nice generalization error under some conditions.\\n\\nOverall, I think the analysis interesting. Actually, there are big gap between NTK and real deep learning. However, this gap is not well clarified from theoretical view points. This is one of such attempts.\\nAs far as I understand the analysis is solid. The writing is clear. I could easily understand the motivation and the results. The quadratic expansion is clearly different from the recent series of NTK analyses. In that sense, this study has sufficient novelty.\\n\\nOn the other hand, I think the study has several limitations. My concerns are summarized as follows:\\n- The proposed objective function is different from the normal objective function used for naive SGD because there is a \\\"random initialization\\\" term and some special regularization terms. Therefore, the analysis in this paper does not give precise understanding on what is going on for the naive SGD in deep learning.\\n- As far as I checked, there is no definition of OPT. Is it the optimal value of \\\\tilde{L}(W)? Since OPT is an important quantity, this must be clarified.\\n- L^Q(W) considers essentially a quadratic model, and is different from the original model. It is unclear how expressive the quadratic model is. Since the region where the quadratic model is meaningful is restricted (i.e., ||w_r|| = O(m^{-1/4}) is imposed), the expressive power of the model in this regime is not obvious. It is nice if there are comments on how large its expressive power is. In particular, it is informative if sufficient conditions for L^Q(W_*) <= OPT is clarified.\\n- (This comment is related to the right above comment) There are two examples in which the linear NTK is outperformed by the quadratic model. However, they are rather simple. It would be better if there was more general (and practically useful) characterization so that there appears difference between the two regimes.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the training of over-parameterized neural networks. Specifically, the authors propose a novel method to study the training beyond the neural tangent kernel regime by randomizing the network and eliminating the effect of the first order term in the network\\u2019s Taylor expansion. Both optimization guarantee and generalization error bounds are established for the proposed method. It is also shown that when learning polynomials, the proposed randomized networks outperforms NTK by a factor of d, where d is the input dimension.\\n\\nOverall, I enjoy reading this paper. The presentation is clear, the arguments are insightful, and the proofs seem to be solid. Moreover, this paper offers some interesting ideas to show that neural networks can outperform NTK, which might be impactful. However, this paper also has certain weak points, mainly due to the less common problem setting. \\n\\nAlthough it is believed that NTK cannot fully explain the success of deep learning, results in the NTK regime have the advantage that the problem setting (initialization method, training algorithm) is very close to what people do in practice. Therefore, ideally, a result beyond NTK should demonstrate the advantage of NN over NTK under similar, or at least practical settings. If the problem setting is changed in some strange way, then it might not be that meaningful even if the training behavior is different from the NTK setting. In my opinion, the following four points about the problem setting in this paper are less desired:\\n\\n(1) Assumption A is not satisfied by any commonly used activation functions. The authors only provided cubic ReLU as an example.\\n\\n(2) The randomization technique in this paper is not standard, and is pretty much an artifact to make the first order term in the Taylor expansion of neural networks small.\\n\\n(3) The $(\\\\| \\\\cdot \\\\|_{2,4})^8$ regularization term introduced on page 6 is highly unusual. Due to reparameterization, this regularization is on the distance towards initialization, indead of the norms of the weight parameters.\\n\\n(4) The training algorithm used in this paper is noisy SGD due to the need to escape from saddle points.\\n\\nDespite the issues mentioned above, I still think this paper is of good quality, and these limitations are understandable considering the difficulty to escape from the NTK regime. It would be interesting if the authors could provide some direct comparison between the generalization bound in this submission and existing generalization bounds in the NTK regime, for example the results in\\n\\nYuan Cao, Quanquan Gu, Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks\\nYuan Cao, Quanquan Gu, Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks\\n\\nMoreover, since this paper studies optimization with regularization on the distance towards initialization, it would also be nice to compare with the following paper:\\n\\nWei Hu, Zhiyuan Li, Dingli Yu, Understanding Generalization of Deep Neural Networks Trained with Noisy Labels\"}"
]
} |
SklgfkSFPH | On PAC-Bayes Bounds for Deep Neural Networks using the Loss Curvature | [
"Konstantinos Pitas"
] | We investigate whether it's possible to tighten PAC-Bayes bounds for deep neural networks by utilizing the Hessian of the training loss at the minimum. For the case of Gaussian priors and posteriors we introduce a Hessian-based method to obtain tighter PAC-Bayes bounds that relies on closed form solutions of layerwise subproblems. We thus avoid commonly used variational inference techniques which can be difficult to implement and time consuming for modern deep architectures. We conduct a theoretical analysis that links the random initialization, minimum, and curvature at the minimum of a deep neural network to limits on what is provable about generalization through PAC-Bayes. Through careful experiments we validate our theoretical predictions and analyze the influence of the prior mean, prior covariance, posterior mean and posterior covariance on obtaining tighter bounds. | [
"PAC-Bayes",
"Hessian",
"curvature",
"lower bound",
"Variational Inference"
] | Reject | https://openreview.net/pdf?id=SklgfkSFPH | https://openreview.net/forum?id=SklgfkSFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"WzCPPkpoUF",
"HylamSZnor",
"HygxQ1qFiS",
"BkxxJuKtiB",
"rJehjrPusS",
"SkgRRv3lor",
"SJl3nwnljr",
"SkgoVP3lor",
"Bkg_WwneoS",
"BkgdJN2xor",
"SJgRwH_loB",
"HklGiy_gsr",
"rygvRwUgjH",
"SygqmIu4qS",
"SyxIC7Le5r",
"HkgKHTkCKS",
"BkeaQvJptS",
"BygyoQSKYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1576798726647,
1573815589376,
1573654295959,
1573652440506,
1573578148085,
1573074902348,
1573074867925,
1573074739229,
1573074687694,
1573073888191,
1573057894268,
1573056409716,
1573050319438,
1572271650335,
1572000717812,
1571843392740,
1571776293336,
1571537815347
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1566/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1566/Authors"
],
[
"~kento_nozawa1"
],
[
"ICLR.cc/2020/Conference/Paper1566/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1566/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper computes an \\\"approximate\\\" generalization bound based on loss curvature. Several expert reviewers found a long list of issues, including missing related work and a sloppy mix of formal statements and heuristics, without proper accounting of what could be gleaned from some many heuristic steps. Ultimately, the paper needs to be rewritten and re-reviewed.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to all reviewers Part 4. Paper Revisions.\", \"comment\": \"The authors have changed significantly the abstract, introduction and contributions sections of the submission, addressing a number of reviewer concerns. In the revised text we highlight that we make a number of approximations to the original PAC-Bayes objective and derive some formal results for this approximation. We have added a number of citations regarding the validity of the second order approximation. Wether our results translate to the original objective PAC-Bayes still needs to be shown through experiments, of which we include 2 architectures and 6 different datasets. The experiment number is in line with the number of experiments presented in previous influential works on PAC-Bayes.\\n\\nWe have stated more clearly the goals of our paper. Previous works mix various modelling choices to obtain non-vacuous but loose bounds, such as non-convex optimization and well motivated priors. While these bounds are valid this obfuscates the contribution of each choice and invalidates flat minima intuition regarding PAC-Bayes. We believe that it is important to see wether the relationship between PAC-Bayes and flat minima can be taken literally without resorting to arguments of empirical correlation. Motivated by our theoretical analysis we see that datasets and architectures can be separated in easy and hard cases, where one can and can't prove generalization even through \\\"cheating\\\" with a data dependent prior. For the easy cases, a simple baseline with a prior centered at the DNN initialization matches our lower bound and is sufficient to prove generalization, contrary to what is implied in previous works.\"}",
"{\"title\": \"references\", \"comment\": \"Regarding the last two points, we respect the opinion of the reviewer and are also planning more experiments which unfortunately cannot be completed adequately as part of this review.\\n \\n[1] Dziugaite, Gintare Karolina, and Daniel M. Roy. \\\"Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data.\\\" arXiv preprint arXiv:1703.11008 (2017).\\n\\n[2] Zhou, Wenda, et al. \\\"Non-vacuous generalization bounds at the imagenet scale: a PAC-bayesian compression approach.\\\" arXiv preprint arXiv:1804.05862 (2018).\\n\\n[3] Keskar, Nitish Shirish, et al. \\\"On large-batch training for deep learning: Generalization gap and sharp minima.\\\" arXiv preprint arXiv:1609.04836 (2016).\\n\\n[4] Neyshabur, Behnam, et al. \\\"Exploring generalization in deep learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n[5] Blier, L\\u00e9onard, and Yann Ollivier. \\\"The description length of deep learning models.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"title\": \"AnonReviewer1 Reply\", \"comment\": \"After training a deep neural network through SGD, we obtain a set of \\\"original\\\" weights that parametrize a function computed by the DNN. Through PAC-Bayes, one then assesses if a related stochastic DNN classifier will generalize. If one assumes a Gaussian posterior with mean equal to the original weights, then the stochastic classifier whose generalization error one bounds through pac bayes is different from the original, given that it is *stochastic* while the original is deterministic. However, one can argue that they are closely related.\\n\\nHow are they related? A number of works relate flatness of a minimum to generalization, and to PAC-Bayes [3][4] through empirical correlations. Under this view choosing a posterior that is Gaussian and centered at a minimum obtained by SGD, is just a formal way of quantifying if minima are flat. These works imply that, at least in principle, one should be able to prove generalization solely through the flatness of a given minimum. \\n\\nIn [1] the authors choose the posterior to be a gaussian distribution and then optimize for both the mean and the covariance of this distribution in order to minimize the PAC-Bayes bound directly. This will result in a different posterior mean. Correspondingly, the deterministic function computed by the DNN and parametrized by this mean could be significantly different from the original set of weights. A similar issue exists with [2], where the authors compress a neural network changing the weights and possibly changing significantly the classifier whose complexity is evaluated.\\n\\nTherefore, the works [1][2] show non vacuous bounds but the flat minimum intuition is no longer valid, at least to the eyes of the authors, given that the mean of the posterior changes. By assuming a posterior with a different mean, one is again measuring flatness but in a different point of the parameter space.\\n\\nAs such, the authors consider it as an open problem to test whether one can find a noise distribution centered on the \\\"original weights\\\", with higher variance along directions of the loss that are flat, and that results in non-vacuous bounds. We propose to solve this problem by estimating the Hessian at the \\\"original\\\" minimum and scaling the variance of a Gaussian posterior based on the curvature information provided by the Hessian, adding noise with higher variance to flat directions. One can also, in the case of [1], simply fix the mean of the posterior and optimise the covariance through SGD of the stochastic PAC-Bayes objective, implicitly measuring the flatness of the minimum. However, we believe that our approach has benefits, which relate to the significance of \\\"claim (ii) about (c)\\\".\\n\\nLet's say that one estimates the optimal covariance (and corresponding flatness) either through SGD on the stochastic PAC-Bayes objective as in [1] or through the closed form solution that we are proposing. Then how should one interpret the results if the bound is loose or vacuous? PAC-Bayes gives the possibility of choosing a better prior. It might be that we if we find an informative prior, through for example a separate training set, we might get a tighter or non-vauous bound. 
We think that it's useful to be able to find a closed form solution with respect to the prior covariance and see as a sanity check whether we can get an improvement *in principle* (we focus on the prior covariance given that, setting the prior mean equal to the random DNN initialization is already a very good choice). We think that it is an interesting result that in Cifar experiments we cannot turn a vacuous bound to non-vacuous even by \\\"cheating\\\".\\n\\nMotivated by the above we test as a baseline a Gaussian posterior centered on the original weights with diagonal and constant covariance and a prior centered at the random initialization with the same covariance as the posterior. While the same simple prior centered at zero results in vacuous bounds, this simple prior centered at the initialization results in bounds matching our lowerbound and non-vacuous for mnist. While the emphasis in [1] is in *computing* a non-vacuous bound this experiment reveals that most of the gains compared to previous bounds are from a well chosen prior mean and not optimization of the stochastic objective as is implied in [1]. This is in line with work in VI and description length of DNNs [5]. VI succeeds as an encoding scheme for simple Mnist experimetns but fails for more complex Cifar experiments.\\n\\nOf course, as the reviewers mentioned, we make some approximations. We can only argue through experiments and citing prior work whether these results translate from the second order approximation of the IB lagrangian back to the PAC-Bayes objective. We presented some arguments regarding this in previous replies.\"}",
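To make the proposal in the reply above concrete: a minimal sketch, assuming a diagonal Hessian estimate, of a curvature-scaled Gaussian posterior whose mean stays fixed at the trained weights. All names, values, and the damping constant are illustrative, not taken from the paper.

```python
import numpy as np

# Curvature-scaled posterior around fixed trained weights theta_star:
# flat directions (small curvature) receive a large noise variance,
# sharp directions (large curvature) receive a small one.
def curvature_scaled_posterior(theta_star, h_diag, alpha=1e-3, scale=1.0):
    var = scale / (h_diag + alpha)  # alpha damps near-zero curvature entries
    return theta_star, var          # mean and diagonal covariance

theta_star = np.array([0.5, -1.2, 2.0])
h_diag = np.array([10.0, 0.1, 1.0])   # toy per-parameter curvature estimates
mean, var = curvature_scaled_posterior(theta_star, h_diag)
sample = mean + np.sqrt(var) * np.random.randn(3)  # one draw from the posterior
```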
"{\"title\": \"AnonReviewer1 review update\", \"comment\": \"The authors in their response point out that previous PAC-Bayes/compression based bounds are limited since they are not bounding the error of \\u201ca different classifier than the original classifier\\u201d. How do the authors address this in their paper? It would be great if the authors could also be more precise what they mean by \\u201coriginal\\u201d and \\u201cdifferent\\u201d.\\n\\nI also fail to see why \\u201ctwo claim about ( c ) \\u201d that the authors prove are siginificant. (c ) is already an approximation of a quantity. What exactly does it tell us? A similar comment applies to finding a posterior for an approximation, where \\u201cthe solution we obtained was the best possible under our approximations\\u201d. An optimal posterior for a fixed PAC-Bayes prior is already known. Why not to work with that directly? What does your approximation offer? \\n\\nThe authors claim that comparison and further work on off-diagonal Hessian approximations is beyond the scope of this submission and is left as future work. Given that the contributions in the paper are minimal (and significance is questionable), I do not think it's unreasonable to expect further comparisons.\\n\\nOverall, I believe the mathematical investigations presented in the paper lack precision, and the writing lacks clarity. In its current state, the paper is well below the acceptance level.\"}",
"{\"title\": \"Reply to Reviewer 1 Part 1\", \"comment\": \"Thank you for your *very* detailed review.\\n\\nLooking at the work of Tsuzuki, Sato and Sugiyama (2019)[3] there are some superficial similarities, specifically the authors expand the loss using the Hessian, and apply PAC-Bayes. However we note a number of important differences:\\n\\ti) The Hessian is assumed to be diagonal, we make no such assumption, at least in Lemma 4.1. . \\n\\tii) The authors make a number of choices which we consider fundamentally flawed, and suboptimal. In page 7 between equations 6 and 7 the authors optimize the KL term independently of the approximated loss. Specifically they set the prior variance equal to the posterior variance. In equation 10 they have already assumed arbitrarily that the two variances are the same then reoptimize with respect to both the KL term and the Taylor expansion. Notice also a constant that continues throughout the calculations and is finally omitted in equation 13, this constant is almost surely making the bound vacuous (even after removal the bound remains vacuous as evidenced by the experiments). From equation 9 it is also obvious that implicitly the authors assume the same noise variance in all parameters of each layer. By contrast we assume much more rich noise which can be different for every parameter (it is clear how this is beneficial, clearly not all parameters in a layer have the same relevence). We also derive the true optimal prior, given the taylor expansion.\\n\\tiii) Finally we note that in our opinion the research direction of the paper [3] will not be fruitful. In [4] the authors attack flatness based measures of complexity by reparametrizing DNNs to have the same GE and arbitrary sharpness at the minimum. Consequently papers such as [3] seek complexity measures that are invariant to these reparametrizations. Following [5] we consider the entire debate to be flawed. Flatness might very well be a sufficient but not necessary condition for good generalization. To the extent that all solutions reached naturally by SGD are flat it suffices to utilize this flatness to prove generalization. Whether the technique fails in contrived counterexamples is irrelevant. \\n\\n\\nWe believe the reviewer is refering to the following identity E[\\\\eta^T H \\\\eta] = E[tr(H \\\\eta \\\\eta^T)] = tr(H\\\\Sigma_0) then tr(H\\\\Sigma_0) = \\\\sigma_0tr(H) only if \\\\Sigma_0 is assumed to be diagonal and with constant variance. We do not consider such a restrictive case. We furthermore use this identity (without assuming the covariance to be diagonal) in Appendix B, Lemma 4.1., equations 12. We would appreciate if the reviewer could point out if we can use this identity somewhere else.\\n\\nWe make a number of approximations and do not claim that they are tight. However at least for the case of the second order approximation we note that it has seen extensive use in the DNN compression literature. Examples of works by well known authors and in prestigious venues include:\\n\\ni)Dong, Xin, Shangyu Chen, and Sinno Pan. \\\"Learning to prune deep neural networks via layer-wise optimal brain surgeon.\\\" Advances in Neural Information Processing Systems. 2017.\\nii)Wang, Chaoqi, et al. \\\"EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis.\\\" International Conference on Machine Learning. 2019.\\niii)Peng, Hanyu, et al. \\\"Collaborative Channel Pruning for Deep Networks.\\\" International Conference on Machine Learning. 2019.\\niv) LeCun, Yann, John S. 
Denker, and Sara A. Solla. \\\"Optimal brain damage.\\\" Advances in neural information processing systems. 1990.\\nv) Hassibi, Babak, and David G. Stork. \\\"Second order derivatives for network pruning: Optimal brain surgeon.\\\" Advances in neural information processing systems. 1993.\\n\\nproducing state of the art results in a number of cases. Correspondingly the approximation while almost certainly not tight has proven quite useful and meaningful. \\n\\nWe have not conducted and are not aware at this point of a detailed comparison of k-FAC and the layerwise approximation we used which is based on [6]. We consider expanding to more rich approximations of the Hessian as an interesting future research direction.\"}",
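The trace identity in the reply above is easy to verify numerically. A quick check, with an arbitrary non-diagonal covariance, that E[eta^T H eta] = tr(H Sigma_0); all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d)); H = A @ A.T                   # symmetric "Hessian"
B = rng.standard_normal((d, d)); Sigma0 = B @ B.T + np.eye(d)  # non-diagonal covariance

L = np.linalg.cholesky(Sigma0)
etas = (L @ rng.standard_normal((d, 100_000))).T     # samples eta ~ N(0, Sigma0)
mc = np.mean(((etas @ H) * etas).sum(axis=1))        # MC estimate of E[eta^T H eta]
print(mc, np.trace(H @ Sigma0))                      # the two agree up to MC error
```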
"{\"title\": \"Reply to Reviewer 1 Part 2\", \"comment\": \"As noted by the reviewer optimising a non-convex objective as in [7] and compressing a DNN heuristically as in [8] always results in valid bounds. We note two main weaknesses of such techniques.\\n\\n\\ti) The bounds in both cases hold for a *different* classifier than the original. While this is fine as a research direction, we believe that it is equally important to investigate the complexity of the original classifier. Why is for example an uncompressed network still able to generalize? Does it mean that the loss landscape is extremely flat in most directions? If so does SGD bias the network to lie on very flat minima and does this phenomenon alone result in good generalization? If yes we should in princible be able to find these flat directions add an appropriate noise distribution and get a non-vacoous bound. These are questions that certainly the current literature has not answered definitively and approaches such as [7][8] avoid.\\n\\tii) The methods [7][8] result in bounds that are non-vacuous but loose. What is the source of this looseness? It is difficult to tell. It could be that simply optimizing a stochastic objective so as to get a good posterior in [7] has just not been done using the correct hyperparameters (such as SGD step size). In fact a number of works note how tedious is the task of finding proper hyperparameters such as SGD learning rate for the task of VI [9]. At the very least a contribution of our approach is that it trades off an assumed approximation error of the loss with the benefit of not spending time on manual hyperparameter tuning. \\n\\nOn \\u201cAs the analytical solution for the KL term in 1 obviously underestimates the noise robustness of the deep neural network around the minimum...\\u201d, this refers to the work of [10] which derives an analytical solution to the KL term of the PAC Bayes bound. The resulting bound is vacuous by several orders of magnitude. The analytical solution corresponds to defining a Gaussian posterior distribution over the parameters, with a specific choice of variance. This variance is chosen by assuming that noise added to a layer propagates approximately as a product of the spectral norms of the subsequent layers. This is obviously pessimistic and probably underestimates significantly how much noise can be added to different parameters without changing DNN predictions. Hence the above sentence \\u201cAs the analytical solution for the KL term in 1 obviously underestimates the noise robustness of the deep neural network around the minimum...\\u201d.\\n\\nOn \\u201c..while we will be minimizing an upper bound on our objective we will be referring with a slight abuse of terminology to our results as a lower bound.\\u201d. We make a number of approximations in our analysis. We substitute the PAC-Bayesian objective with the IB-Lagrangian and then the IB-Lagrangian with a second order Taylor expansion. Our theoretical results are formal only for the *second order Taylor expansion* of the IB-Lagrangian. They might or might not hold for the IB-Lagrangian and the PAC-Bayesian objective. We use the term lower bound for all three objective even though it is formal only for the second order Taylor expansion. We believe that our experiments show that it is meaningful also for the original PAC-Bayes objective.\\n\\nWe agree that \\\"modelling assumptions\\\" should be changed to modelling choices.\\n\\nThe union bound cost is identical to the one from [7][10] and negligible. 
We agree that it should be discussed.\\n\\nWe understand the concern of the reviewer regarding the number of MC samples. We note the works [1][2] where the samples are of the order of 10^1. We also note that in our own experiments a higher number of samples didn't result in significant differences in accuracy.\\n\\nUnder the PAC-Bayes framework the prior cannot depend on the training set, but can depend on the data distribution. To the best of the authors' understanding the training set can be used under the differential privacy setting to derive an imformative prior while ensuring that the prior remains distribution and not training set dependent. The phrase \\\"The concept of a valid prior has been formalized under the differential privacy setting.\\\" can indeed be rewritten in a more accurate way.\"}",
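On the MC-sample question in the exchange above, the standard Hoeffding interval for a [0,1]-bounded loss makes the trade-off concrete; the sample counts and confidence level below are illustrative.

```python
import math

# Two-sided Hoeffding deviation for the mean of n i.i.d. [0,1]-bounded
# losses, at confidence 1 - delta: radius = sqrt(log(2/delta) / (2n)).
def hoeffding_radius(n, delta=0.05):
    return math.sqrt(math.log(2 / delta) / (2 * n))

for n in [5, 100, 10_000]:
    print(n, round(hoeffding_radius(n), 3))
# 5 -> 0.607, 100 -> 0.136, 10000 -> 0.014: a handful of samples leaves a
# very wide worst-case interval (the reviewer's concern); the authors' point
# is that empirically the estimates were stable with few samples anyway.
```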
"{\"title\": \"Reply to all reviewers Part 1\", \"comment\": \"We would like to thank all reviewers for their *very* detailed reviews. As noted by reviewer 3 our paper contains a number of heuristic approximations, as well as some formal results. In the submission text, we are confident that we have taken care to state our claims modestly and to include disclaimers whenever heuristic approximations were made. However these scattered disclaimers seem to have confused the reviewers, leading them to assume that we are overclaiming our results. With hindsight a clear discussion of approximations and formal results should have been included in the introduction. We include a summary and discussion in this post, which addresses a number of concerns by the reviewers and we will address more specific criticisms individually for each reviewer.\\n\\nThe earliest use of PAC-Bayes in the modern era of *deep* neural networks (post-2012) is to the best of our knowledge the work of Neyshabur et al.[1]. The problem with this bound as well as other analytically derived bounds [2][3] is that it is vacuous by several orders of magnitude. \\n\\nConcequently at least two works [4][5] have used heuristic compression of DNNs [5] as well as Variational Inference(VI) style optimisation of a stochastic DNN [4] to find non-vacuous bounds. We believe that these works have two main limitations:\\n\\ti) The bounds derived by these methods apply to a *different classifier* than the original. Compressing a DNN corresponds to finding a completely different point in the parameter space than the original minimum. At the same time optimising a stochastic DNN as in [4] where the means of the stochastic parameters change, also corresponds in finding a different minimum in the parameter space. \\n\\tii) As correctly mentioned by reviewer 1 it is true that the bounds are valid given any compression solution or optimization solution. However in practice these bounds are not only non-vacuous but also quite loose. An issue then arises given this looseness. How can one be sure that the looseness is the result of the proof technique and not incorrect compression or optimisation of a stochastic objective? Furthermore considerable time can be wasted in hyperparameter tuning both in compressing a DNN adequately but more importantly getting a stochastic objective as in [4] to converge adequately (see [6] for a detailed discussion). Even then one cannot be confident that the bound is as tight as possible.\\n\\n.Our paper then makes a number of heuristic approximations which correspond roughly to the following hierarchy. \\n\\n(a)PAC Bayes bound -> (b)IB-Lagrangian -> (c)Taylor expansion of IB-Lagrangian -> (d)Layerwise upperbound and Taylor expansion of IB-lagrangian\\n\\nWe then can prove formally two claims about (c):\\n\\ti) (c) is convex with respect to \\\\Sigma_0 and the global minimum can be found in closed form.\\n\\tii) (c) is non-convex with respect to \\\\Sigma_0 and \\\\Sigma_1 jointly however we can find the global minimum with respect to both these variables as long as they are diagonal. We call this result an invalid solution and the corresponding prior an invalid prior. This is because under the PAC-Bayes framework the prior cannot depend on the training data. Otherwise one could chose the prior to be equal to the posterior and the KL term would be easily equal to zero.\"}",
"{\"title\": \"Reply to all reviewers Part 2\", \"comment\": \"We are transparent in that we are minimizing an approximation to the PAC-Bayes objective, and we don't wish to claim that the approximation is tight, however that doesn't prevent it from being meaningful. We will discuss this matter at the end of the post in detail. In our approach one has to accept an approximation error, however it (our method) and our results have a number of benefits over previous works.\\n\\ti) The optimum for the posterior can be found in closed form and there is no uncertainty about whether the corresponding bound is as tight as possible due to non-convex optimisation that hasn't been done succesfully. Considerable time can be saved by avoiding hyperparameter tuning as in [4]. We are not claiming that our approach will work better than [4], it might not since [4] optimises directly the non-convex objective, however we can be confident that the solution we obtained was the best possible under our approximations, and won't require tuning of hyperparameters. \\n\\tii) Works such as [7] have raised the possibility of computing better priors using the training set, by utilizing the differential privacy framework. In fact one can in theory do the same without differential privacy, using a separate training set[8]. Note that data driven optimization of the prior will probably be non-convex and will require extensive hyperparameter tuning. By contrast we can optimise over the prior and find a closed form \\\"invalid\\\" solution. If the feasible and non-vacuous regions don't overlap as in the case of Cifar in Figure 2, then we can be confident that we cannot prove generalization using our approximations, even if we searched for a better valid prior.\\n\\tiii) We analyze the complexity of the *original* classifier. This is impossible in general for [5].\\n\\nThese results (i and ii) hold for (c), one would ideally want these results to hold for (a) as well, at least around the specific weight instantiation corresponding to the DNN who's complexity we are trying to evaluate. We cannot show this formally nor do we think that it is easy to prove. Specifically we have shown a lowerbound on (c) which might or might not correspond empirically to a lower bound on (a). In section 4 we mention \\\"Furthermore while we will be minimizing an upper bound on our objective we will be referring with a slight abuse of terminology to our results as a lower bound.\\\" in section 7 we note another problem \\\"Crucially all results depend on high quality estimates of the Hessian which remains an open topic of research for large scale modern deep neural networks.\\\". \\n\\nWe can, however, conduct experiments to see if our theoretical results, have any merit. We conduct experiments with 3 valid posterior choices. The baseline is i.i.d. Gaussian noise on each parameter and *doesn't* rely on any approximation of the objective. The second method is an optimal gaussian with diagonal covariance under our approximation. Both the baseline and method number 2 which correspond to Gaussian posteriors with diagonal covariance, do not violate our lowerbound *empirically*. Method number 3 which corresponds to a Gaussian with *non-diagonal* covariance seems to violate our lowerbound slightly. More rich posteriors possibly depending on richer approximations of the Hessian might violate the lowerbound further, however we consider this as interesting future work.\"}",
"{\"title\": \"Reply to Reviewer 1 Part 3\", \"comment\": \"The authors are not aware of literature where a researcher has tried to evaluate when computation of the Hessian can be done exactly, and the authors have not conducted experiments on the subject, which is also outside of the scope of the current work. In most works that we are aware of, exact computation of the Hessian is assumed to be infeasible and approximations are used instead. Recently [11] has claimed that computation of the Hessian is possible in principle but is simply not supported by current autodiff libraries. In short the authors are aware of various claims unsupported by evidence, have not conducted experiments on the subject and therefore cannot comment on it (hence the term \\\"ambiguity\\\"). The next sentence is meant to be complementary to the previous statements in that the authors can instead comment on how much memory an uncompressed Hessian matrix will take up in RAM. An uncompressed Hessian of moderate size, should require approximately 20GB to store, requiring almost certainly some sort of compression or clever manipulation. This is meant to highlight that dealing with the full Hessian for moderate network sizes should be challenging.\\n\\nThe sentence in 5.1 is lifted directly from [11] which has been recently accepted in NIPS 2019. The authors are willing to double check, whether the sentence needs to include specific conditions.\\n\\nWe do not add a dumping term. From equation 5 in section 4.1 calculating the optimal posterior requires inverting a matrix (H+\\\\beta/\\\\lambda \\\\Sigma_1^{-1}). We note that if we choose a zero mean gaussian prior with diagonal and constant covariance this corresponds to inverting a matrix of the form (H+\\\\alpha I) where \\\\alpha = \\\\beta/\\\\lambda.\\n\\n***\", \"additional_citation_issues\": \"We agree to include the work of (e.g. Germain, Bach, Lacoste, Lacoste-Julien (2016). We do not claim to be the first to find a connection between PAC-Bayes and VI. In fact [7] exploits this connection and we also cite [12] for exactly this reason. If this is not clear we can explain so in the introduction. \\n\\nWe note that the entire paper [7] is a criticism of empirical correlations, being the first to derive non-vacuous bounds. However the works cited by the authors in this text section attack more fundamental elements of current bounds, specifically mainly uniform convergence, which [7] does not address in detail. It is for this reason that they are mentioned separately, although we had to remove the more detailed discussion due to space constraints.\\n\\nOn \\\"Both objectives in \\u2026 are however difficult to optimize for anything but small scale experiments.\\\". We have presented our arguments and what we consider as benefits of our approach in detail earlier in this reply. \\n\\n\\n[1]Li, Yingzhen, and Yarin Gal. \\\"Dropout inference in bayesian neural networks with alpha-divergences.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\\n[2]Gal, Yarin, and Zoubin Ghahramani. \\\"Dropout as a bayesian approximation: Representing model uncertainty in deep learning.\\\" international conference on machine learning. 2016.\\n[3]Tsuzuku, Yusuke, Issei Sato, and Masashi Sugiyama. \\\"Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks using PAC-Bayesian Analysis.\\\" arXiv preprint arXiv:1901.04653 (2019). \\n[4]Dinh, Laurent, et al. 
\\\"Sharp minima can generalize for deep nets.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\\n[5]Kawaguchi, Kenji, Leslie Pack Kaelbling, and Yoshua Bengio. \\\"Generalization in deep learning.\\\" arXiv preprint arXiv:1710.05468 (2017).\\n[6]Dong, Xin, Shangyu Chen, and Sinno Pan. \\\"Learning to prune deep neural networks via layer-wise optimal brain surgeon.\\\" Advances in Neural Information Processing Systems. 2017.\\n[7]Dziugaite, Gintare Karolina, and Daniel M. Roy. \\\"Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data.\\\" arXiv preprint arXiv:1703.11008 (2017).\\n[8]Zhou, Wenda, et al. \\\"Non-vacuous generalization bounds at the imagenet scale: a PAC-bayesian compression approach.\\\" arXiv preprint arXiv:1804.05862 (2018).\\n[9]Wu, Anqi, et al. \\\"Deterministic variational inference for robust bayesian neural networks.\\\" (2018).\\n[10]Neyshabur, Behnam, Srinadh Bhojanapalli, and Nathan Srebro. \\\"A pac-bayesian approach to spectrally-normalized margin bounds for neural networks.\\\" arXiv preprint arXiv:1707.09564 (2017).\\n[11]Kunstner, Frederik, Lukas Balles, and Philipp Hennig. \\\"Limitations of the Empirical Fisher Approximation.\\\" arXiv preprint arXiv:1905.12558 (2019).\\n[12]Achille, Alessandro, and Stefano Soatto. \\\"Emergence of invariance and disentanglement in deep representations.\\\" The Journal of Machine Learning Research 19.1 (2018): 1947-1980.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thank you for your detailed review.\", \"major\": \"In the paper we make a number of approximations which we do not claim are tight. We substitute the PAC-Bayesian objective with the IB-Lagrangian, we then approximate the IB-Lagrangian using a second order Taylor expansion of the loss. We then prove some formal results for the second order taylor expansion. These might or might not translate to the original PAC-Bayes objective. We believe our experiments show that to some extent the results are meaningful.\\n\\nFurthermore there is ample evidence in the literature that although a second order Taylor expansion of the loss around a minimum is loose it can be quite informative. In particular there has been a long line of research in the literature of DNN compression where a second order Taylor expansion of the loss has produced state of the art results in parameter number reduction. We refer to the following which include a number of well known researchers and conferences:\\n\\ni)Dong, Xin, Shangyu Chen, and Sinno Pan. \\\"Learning to prune deep neural networks via layer-wise optimal brain surgeon.\\\" Advances in Neural Information Processing Systems. 2017.\\nii)Wang, Chaoqi, et al. \\\"EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis.\\\" International Conference on Machine Learning. 2019.\\niii)Peng, Hanyu, et al. \\\"Collaborative Channel Pruning for Deep Networks.\\\" International Conference on Machine Learning. 2019.\\niv) LeCun, Yann, John S. Denker, and Sara A. Solla. \\\"Optimal brain damage.\\\" Advances in neural information processing systems. 1990.\\nv) Hassibi, Babak, and David G. Stork. \\\"Second order derivatives for network pruning: Optimal brain surgeon.\\\" Advances in neural information processing systems. 1993.\\n\\n. Correspondingly the approximation while almost certainly not tight has proven quite useful and meaningful. \\n\\nConcerning substituting the PAC-Bayes objective for the IB-Lagrangian we note that in [1] page 8, section 6, the authors mention that they have used the PAC Bayes bound and the IB-Lagrangian interchangably when optimising and didn't notice a difference in results. \\n\\nWe do not object to changing the theorems to lemmas. We agree that comparison to [1] would be useful (assuming that we fix the posterior mean and only optimise for the variance using non-convex optimisation) however we note that, this requires careful experimentation and hyperparameter tuning which is certainly not trivial.\", \"minor\": \"Yes the plots correspond to the invalid prior see section 4.2 page 6 for details. We would be interested in any literature relating to limitations of reverse-mode autodiff libraries. MC is indeed the correct term.\\n\\n[1]Dziugaite, Gintare Karolina, and Daniel M. Roy. \\\"Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data.\\\" arXiv preprint arXiv:1703.11008 (2017).\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Thank you for your detailed review.\\n\\n1) The Taylor expansion of the loss is indeed an approximation in general. For the specific case of a DNN solution, assuming that we have reached a local minimum, and that the loss function is locally convex a second order expansion can be seen as an upper bound to the loss. However we do not make this formal, nor do we think that it is easy or useful to state strict conditions on whether the approximation is an upper bound. We point however to a long line of work in the DNN compression literature where a second order Taylor expansion of the loss has led to state of the art results in DNN compression.\\n\\ni)Dong, Xin, Shangyu Chen, and Sinno Pan. \\\"Learning to prune deep neural networks via layer-wise optimal brain surgeon.\\\" Advances in Neural Information Processing Systems. 2017.\\nii)Wang, Chaoqi, et al. \\\"EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis.\\\" International Conference on Machine Learning. 2019.\\niii)Peng, Hanyu, et al. \\\"Collaborative Channel Pruning for Deep Networks.\\\" International Conference on Machine Learning. 2019.\\niv) LeCun, Yann, John S. Denker, and Sara A. Solla. \\\"Optimal brain damage.\\\" Advances in neural information processing systems. 1990.\\nv) Hassibi, Babak, and David G. Stork. \\\"Second order derivatives for network pruning: Optimal brain surgeon.\\\" Advances in neural information processing systems. 1993.\\n\\nThe distribution over parameters in the Gaussian posterior case with diagonal covariance should be highly concentrated around the mean making the approximation meaningful.\\n\\n2)We call the prior \\\"invalid\\\" in that through the way we calculate it, it depends on the posterior. This is not allowed in the PAC-Bayes framework, the prior has to be independent of the training set. Note that even though we calculate a invalid posterior based on the second order Taylor expansion of the IB-Lagrangrian, it remains invalid for the PAC-Bayes bound. In section 4.2 we make a detailed discussion, regarding the benefits and limitations of this result. In particular after one has computed an optimal posterior using equation 5, seeing that the result is non-vacuous or loose one may be tempted to search for a better prior, for example through a separate training set. Our result using the invalid prior corresponds to a lower bound for equation 4 (the second order taylor expansion of the IB-Lagrangian). We can trace a corresponding feasible region vs non-vacuous region using this lower bound (Figure 2). Thus if the two regions don't overlap we should not be able to prove generalization even if we chose a better prior in a valid manner. To be precise what we can show is that we cannot minimize the IB-Lagrangian second order approximation further. Ideally we would like these results to translate to the PAC-Bayes theorem directly (we would like the feasible regions to be meaningful for equation 1 even though we calculated them through equation 4), however we cannot prove this formally and have to rely on experiments. In practice we have found the calculated feasible region to be meaningful, the baseline and the diagonal gaussian posteriors fail to cross it. The non-diagonal posterior crosses it slightly.\\n\\n3) Q_{lj} = \\\\mathcal{N}(\\\\mu_{0lj},\\\\Sigma_{0lj}) and the dimension number of the multivariate Gaussian is equal to the dimensionality of the input to the layer \\\"l\\\". 
While H_{l1}=H_{l2}=...=H_{l*} = \\\\frac{1}{N}\\\\sum_i z^i_{l-1}z^i_{l-1}^T it is not true that H_{lj} = H_{l}. H_{lj} is a local Hessian for each neuron therefore has a dimensions equal to layer_input_dims \\\\times layer_input_dims. H_{l} is the local Hessian of the layer and therefore has dimensions equal to (layer_input_dims * layer_output_dims) \\\\times (layer_input_dims * layer_output_dims). H_{l} is block diagonal with blocks H_{lj}. \\n\\nNote that the layerwise approximation of section 5.2 is only a heuristic which is independent from the rest of our analysis and is mainly used to motivate solving for the optimal posterior in an layerwise manner, which is in general a quite cheap calculation. In fact one can completely ignore section 5.2 and perform most experiments by computing a diagonal Hessian using equation 9.\"}",
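The block structure described in point 3 above is simple to materialize for a toy linear layer. This sketch assumes the per-neuron block H_{lj} = (1/N) sum_i z_i z_i^T and is illustrative only; the paper's exact definition may carry additional loss-curvature factors.

```python
import numpy as np
from scipy.linalg import block_diag

# Per-neuron local Hessian: every neuron j in layer l shares the same
# (in_dim x in_dim) block built from the layer inputs Z (shape N x in_dim).
def neuron_block(Z):
    return Z.T @ Z / Z.shape[0]

# Layer Hessian: block diagonal with out_dim copies of the neuron block,
# of total size (in_dim * out_dim) x (in_dim * out_dim).
def layer_hessian(Z, out_dim):
    return block_diag(*([neuron_block(Z)] * out_dim))

Z = np.random.randn(128, 4)               # N=128 inputs of dimension 4
print(layer_hessian(Z, out_dim=3).shape)  # (12, 12)
```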
"{\"title\": \"Reply to all reviewers Part 3.\", \"comment\": \"------%% Notes on validity of approximations %%-----\\nIn [4] page 8, section 6, the authors mention that they have used the PAC Bayes bound and the IB-Lagrangian interchangably when optimising and didn't notice a difference in results. \\n\\nAt a minimum there has been a long line of research in the literature of DNN compression where a second order Taylor expansion of the loss has produced state of the art results in parameter number reduction. We refer to the following which include a number of well known researchers and conferences:\\n\\ni)Dong, Xin, Shangyu Chen, and Sinno Pan. \\\"Learning to prune deep neural networks via layer-wise optimal brain surgeon.\\\" Advances in Neural Information Processing Systems. 2017.\\nii)Wang, Chaoqi, et al. \\\"EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis.\\\" International Conference on Machine Learning. 2019.\\niii)Peng, Hanyu, et al. \\\"Collaborative Channel Pruning for Deep Networks.\\\" International Conference on Machine Learning. 2019.\\niv) LeCun, Yann, John S. Denker, and Sara A. Solla. \\\"Optimal brain damage.\\\" Advances in neural information processing systems. 1990.\\nv) Hassibi, Babak, and David G. Stork. \\\"Second order derivatives for network pruning: Optimal brain surgeon.\\\" Advances in neural information processing systems. 1993.\\n\\n. Correspondingly the approximation while almost certainly not tight has proven quite useful and meaningful. \\n----------------------------------------------------\\n\\nOn a more personal level we would request that the reviewers refrain from comments such as \\\"the authors need to read and understand (!) related work\\\" which we don't find conducive to a useful review process. The authors have read extensively the relevant literature and to the extent that any points have been misunderstood are willing to change their perspective as part of a constructive review process.\\n\\n[1]Neyshabur, Behnam, Srinadh Bhojanapalli, and Nathan Srebro. \\\"A pac-bayesian approach to spectrally-normalized margin bounds for neural networks.\\\" arXiv preprint arXiv:1707.09564 (2017).\\n[2]Bartlett, Peter L., Dylan J. Foster, and Matus J. Telgarsky. \\\"Spectrally-normalized margin bounds for neural networks.\\\" Advances in Neural Information Processing Systems. 2017.\\n[3]Golowich, Noah, Alexander Rakhlin, and Ohad Shamir. \\\"Size-independent sample complexity of neural networks.\\\" arXiv preprint arXiv:1712.06541 (2017).\\n[4]Dziugaite, Gintare Karolina, and Daniel M. Roy. \\\"Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data.\\\" arXiv preprint arXiv:1703.11008 (2017).\\n[5]Zhou, Wenda, et al. \\\"Non-vacuous generalization bounds at the imagenet scale: a PAC-bayesian compression approach.\\\" arXiv preprint arXiv:1804.05862 (2018).\\n[6]Wu, Anqi, et al. \\\"Deterministic variational inference for robust bayesian neural networks.\\\" (2018).\\n[7]Dziugaite, Gintare Karolina, and Daniel M. Roy. \\\"Data-dependent PAC-Bayes priors via differential privacy.\\\" Advances in Neural Information Processing Systems. 2018.\\n[8]Parrado-Hern\\u00e1ndez, Emilio, et al. \\\"PAC-Bayes bounds with data dependent priors.\\\" Journal of Machine Learning Research 13.Dec (2012): 3507-3531.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: The paper provides several approximations of PAC-Bayes generalization bounds for Gaussian prior and posterior distributions, with various restrictions on the covariance matrices.\\nIn particular, the paper: \\n(1) Assumes that the expectation of the loss can be Taylor expanded around each point in the support, and all but the quadratic (Hessian) term can be ignored. \\n(2) Proves a lower bound on the PAC-Bayes generalization objective. \\n(3) Provides an upper bound on the PAC-Bayes objective via a \\\"layerwise\\\" Hessian objective.\", \"evaluation\": \"I found this paper extremely difficult to follow because it's sloppy in various places -- both in terms of what claims are formal, and what are heuristic approximations -- and in terms of properly defining crucial quantities. I will go in the same numbered order in which I listed the main contributions above:\\n(1) (Taylor approximation): The equation (4) is an *approximation* -- not a lower or upper bound. Moreover, too little is said about this heuristic: note that the authors actually Taylor expand an *expectation* over \\\\theta -- the trivial thing to require for this Taylor approximation to hold is that it holds for *every* theta which clearly will not be true. It seems the authors want to say the distribution Q concentrates over thetas close to some local optimum \\\\theta^*, and over these thetas the approximation holds. At the very least something needs to be said about how much things need to concentrate and whether this is realistic in real-life settings. \\nAlso, because (4) is an approximation, it's a little disengenuous to call Theorems 4.2 and 5.2 \\\"theorems\\\", and it needs to be mentioned in the statements that they hold under some formalization of the approximation I described above. \\n(2) The lower bound is written very oddly -- the \\\"prior\\\" for the lower bound is really dependent upon the posterior -- so it is very strange to call it an \\\"invalid\\\" prior. Moreover, I have serious problems evaluating the meaning of this lower bound -- as it uses the Taylor approximation from (1), but then decides to instantiate the prior *depending on the optimum* of this Taylor approximation. As such, *at the very least* -- some small neural net examples should be tried where the normal (un-approximated) KL bound can be evaluated, to check whether this *actually* is a lower bound most of the time. \\n(3) The upper bound is also written rather sloppily: Q_{lj} is never defined; H_{lj} only depends on l, rather than j -- in fact, I'm fairly sure it should be H_l, and \\\\eta should be sampled from Q_{l} (i.e. a vector with a coordinate for each neuron in layer l) if I understood the proof correctly.\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Dear Kento Nozawa,\\n\\nthank you for your thorough reading of our work and for pointing out notations and typos that indeed need to be fixed. We believe that most of the problems mentioned concerning equations 1 and 2 can be corrected by doing the following\\n\\n1) Streamlining the notation denoting \\\"training set\\\". \\\\mathcal{D} denotes any training set. Thus all instances of \\\"S\\\" which is also used to denote any training set need to be replaced by \\\\mathcal{D}.\\n\\n2) In theorem 3.1 we believe it suffices to replace \\\"For any distribution over \\\\mathcal{X} \\\\in \\\\{-1,+1\\\\}\\\" with \\\"For any distribution \\\\mathcal{P} on \\\\mathcal{X}\\\\times\\\\mathcal{Y}\\\".\\n\\nActually it is already implied that (x,y)\\\\sim\\\\mathcal{P} in section 3 in the paragraph preceding Theorem 3.1, when we define the population loss. Thus this paragraph can also be improved by adding something along the lines of \\\"\\\\mathcal{P} is any distribution on \\\\mathcal{X}\\\\times\\\\mathcal{Y}\\\". Note that since \\\"P\\\" without the calligraphic font is already used to denote the prior of the classifier, another letter to be decided might be more applicable for the aforemented distribution on \\\\mathcal{X}\\\\times\\\\mathcal{Y}.\"}",
"{\"comment\": \"I'm interested in this practical PAC-Baye bound and its optimization.\\n\\nI am a little bit confused by some notations while I read this paper;\\n\\n## Theorem 3.\\n\\nThis \\\\mathcal{X} may be a data distribution to sample training data; not binary.\\nIn addition, it is already defined as the input space on the same page.\\n\\n## Eq. 2\\n\\n\\\\mathcal{D} looks undefined.\\n\\n---\", \"i_also_report_the_following_typos\": [\"PAC Baye -> PAC-Baye (page 2)\", \"need a space between \\\"sufficiency,\\\" and \\\"minimality\\\" on page 2?\", \"non vacuous -> non-vacuous (page 2)\", \"Abundant parentheses: \\\\beta KL((Q|P)).\"], \"title\": \"Clarify notations\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper propose a second-order approximation to the empirical loss in the PAC-Bayes bound of random neural networks. Though the idea is quite straightforward, the paper does a good job in discussing related works and motivating improvements.\", \"Two points made about the previous works on PAC-Bayesian bounds for generalization of neural networks (especially Dziugaite & Roy, 2017) are:\", \"Despite non-vacuous, these results are obtained on \\\"significantly simplified\\\" datasets and remain \\\"significantly loose\\\"\", \"The mean of q after optimizing the PAC-Bayes bound through variational inference is far different from the weights obtained in the original classifier.\", \"These points are valid. But it's unclear to me that the proposed method fixes any of them. My concerns are summarized below:\", \"The inequalities are rather arbitrary and not convincing to me. BY Taylor expansion one actually get a lower bound of the right hand side, However the authors write it as first including the higher-order terms, which results in an upper bound, then throwing the higher-order term and arguing the final equation as an approximate upper bound. I believe this can be incorrect when the higher-order terms plays an nonnegligible role.\", \"The theorems are easy algebras and better not presented as theorems.\", \"The proposed diagonal and layer-wise approximation to hessian are very rough estimate of the original Hessian and it is not surprising that it doesn't give meaningful approximation of the original bound.\", \"There is no explicit comparison with previous methods using the same dataset and architecture. It would be much more convincing if the authors include the results of previous works using the same style of figures as Figure 2/3.\"], \"minor\": [\"I understand using the invalid bound (optimizing prior) as a sanity check. But the presentation in the paper could better be improved by explaining why doing this.\", \"Do the plots in Figure 2 correspond to the invalid or valid bound?\", \"Many papers are complaining that Hessian computation is difficult in autodff libs without noticing this is a fundamental limitation of these reverse-mode autodiff libraries and no easy fix exists.\", \"I believe MCMC is not used and the authors are refering to MC (page 7, first paragraph).\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors replace the empirical risk term in a PAC-Bayes bound by its second-order Taylor series approximation, obtaining an approximate (?) PAC-Bayes bound that depends on the Hessian. Note that the bound is likely overoptimistic unless the minimum is quadratic. They purpose to study SGD by centering the posterior at the weights learned by SGD. The posterior variance that minimizes this approximate PAC-Bayes bound can then be found analytically. They also solve for the optimal prior variance (assuming diagonal Gaussian priors/posteriors), producing a hypothetical \\\"best possible bound\\\" (at least under the particular choices of priors/posteriors, and under this approximation of the empirical risk term). The authors evaluate their approximate bound and \\\"best bound possible\\\" empirically on MNIST and CIFAR. This requires computing approximations of the Hessian for small fully connected neural networks trained on MNIST and CIFAR10. There are some nice visualizations (indeed, these may be one of the most interesting contributions.)\\n\\nThe direction taken by the authors is potentially interesting. However, there are a few issues that would have to be addressed carefully for me to recommend acceptance. First, the comparison to (some very) related work is insufficient, and so the actual novelty is misrepresented (see detailed comments below). Further, the paper is full of questionable vague claims and miscites/attributes other work. At the moment, I think the paper is below the acceptance threshold: the authors need to read and understand (!) related work, and expand their theoretical and/or empirical results to produce a contribution of sufficient novelty/impact. \\n\\nDETAILED FEEDBACK.\\n\\nI believe the authors missed some related work by Tsuzuki, Sato and Sugiyama (2019), where a PAC-Bayes bound was derived in terms of the Hessian, via a second-order approximation. How are the results presented in this submission relate to Tsuzuki et al approach? \\n\\nWhen the posterior, Q, is a Gaussian (or any other symmetric distribution), \\\\eta^T H \\\\eta is the so-called Skilling-Hutchinson trace estimator. Thus E(\\\\eta^T H \\\\eta) is the Trace(H) scaled by the variance of \\\\eta. The authors seem to have completely missed this connection, which simplifies the final expression considerably.\\n\\nWhy is the assumption that the higher order terms are negligible reasonable? Citation or experiments required.\", \"regarding_the_off_diagonal_hessian_approximation\": \"how does the proposed layer-wise approximation relate to k-FAC (Martens and Grosse 2015)?\", \"ib_lagrangian\": \"I am not sure why the authors state the result in Thm 4.2 as a lower bound on the IB Lagrangian. What\\u2019s the significance of having a lower bound on IB Lagrangian?\", \"other_comments\": \"\", \"introduction\": \"\\u201cAt the same time neither the non-convex optimization problem solved in .. nor the compression schemes employed in \\u2026 are guaranteed to converge to a global minimum.\\u201d. This is true but it is really not clear what the point being made is. Essentially, so what? Note that PAC-Bayes bounds hold for all posteriors, even ones not centered at the global minimum (of any objective). 
The claims made in the rest of the paragraph are also questionable and their purposes are equally unclear. I would be grateful if the authors could clarify.\\n\\nFirst sentence of Section 3.1: \\u201cAs the analytical solution for the KL term in 1 obviously underestimates the noise robustness of the deep neural network around the minimum...\\u201d. I have no idea what is being claimed here. The statement needs to be made much less vague. Please explain.\", \"section_4\": \"\\u201c..while we will be minimizing an upper bound on our objective we will be referring with a slight abuse of terminology to our results as a lower bound.\\u201d. I would appreciate if the authors could clarify what they mean here.\\n\\nSection 4.1 beginning: \\u201cWe make the following model assumptions...\\u201d. Choosing a Gaussian prior and posterior is not an assumption. It's simply a choice. The PAC-Bayes bound is valid for any choices of Gibbs classifiers. On the other hand, it is an assumption that such distributions will yield \\\"tight\\\" bounds, related to the work of Alquier et al.\\n\\nSection 4.1 \\u201cIn practice we perform a grid search over the parameters..\\u201d. The authors should mention that such a search should be accounted for via a union bound (or otherwise). The \\\"cost\\\" of such a union bound should be discussed.\\n\\nThe empirical risk of Q is computed using 5 MCMC samples. This seems like a very low number, as it would not even give you one decimal point of accuracy with reasonable confidence! The authors should either use more samples, or account for the error in the upper bound using a confidence interval derived from a Chernoff bound.\\n\\nSection 4.2: \\u201cThe concept of a valid prior has been formalized under the differential privacy setting...\\u201d. I am not sure what the authors mean by that.\", \"section_5\": \"\\u201cThere is ambiguity about the size of the Hessians that can be computed exactly.\\u201d What kind of ambiguity?\\n\\nSame paragraph in Section 5 discusses why there are few articles on Hessian computation. The authors claim that \\u201cthe main problem seems to be that the relevant computations are not well supported...\\u201d. This is followed by another comment that is supposed to contrast the previous claim, saying that storing the Hessian is infeasible due to memory requirements. I am not sure how this claim about memory requirements shows a contrast with the claim on computation not being supported.\\n\\nFirst sentence in Section 5.1: I believe this is only true under some conditions. \\n\\nSection 5.1: The authors should explain why they add a damping term, alpha, to the Hessian, and how alpha affects the results.\\n\\n***\", \"additional_citation_issues\": \"The connections between variational inference, PAC-Bayes and IB Lagrangian have been pointed out in previous work (e.g. Germain, Bach, Lacoste, Lacoste-Julien (2016); Achille and Soatto 2017).\\n\\nIn the introduction, the authors say \\u201c...have been motivated simply by empirical correlations with generalization error; an argument which has been criticized \\u2026\\u201d (followed by a few citations). Note, that this was first criticized in Dziugaite and Roy (2017). \\n\\n\\u201cBoth objectives in \\u2026 are however difficult to optimize for anything but small scale experiments.\\u201d. It seems peculiar to highlight this, since the approach that the authors are presenting is actually more computationally demanding. 
\\n\\nCitations for MNIST and CIFAR10 are missing.\\n\\n***\", \"minor\": \"Theorem 3.1 \\u201cFor any data distribution over..\\u201d, I think it was meant to be \\\\mathcal{X} \\\\times (and not \\\\in )\\nTheorem 4.2: \\u201cFor our choice of Gaussian prior and posterior, the following is a lower bound on the IB-Lagrangian under any Gaussian prior covariance\\u201d. I assume only the mean of the Gaussian prior is fixed.\\n\\n\\nCitations are misplaced (breaking the sentences, unclear when the paper of the authors are cited).\\nThere are many (!) missing commas, which makes some sentences hard to follow.\\n\\n***\", \"positive_feedback\": \"I thought the visualizations in Figure 2 and 3 were quite nice.\"}"
]
} |
HyeJf1HKvS | Deep Graph Matching Consensus | [
"Matthias Fey",
"Jan E. Lenssen",
"Christopher Morris",
"Jonathan Masci",
"Nils M. Kriege"
] | This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs. First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes. Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs. We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process. Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently. We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art. | [
"graph matching",
"graph neural networks",
"neighborhood consensus",
"deep learning"
] | Accept (Poster) | https://openreview.net/pdf?id=HyeJf1HKvS | https://openreview.net/forum?id=HyeJf1HKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"2xmVSfCFEQ",
"S1gWx_K8jr",
"Hye_QDtLsH",
"SkeVqBYLir",
"HJeaC7YUjH",
"SJebbWtIjH",
"SkgnRR_UjB",
"HkgactQu9B",
"Bkg1F6tZ5S",
"Syg85NKpYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726620,
1573455849102,
1573455648017,
1573455244340,
1573454805501,
1573454073140,
1573453523519,
1572514197458,
1572081015398,
1571816590424
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1565/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1565/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1565/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposed an end-to-end network architecture for graph matching problems, where first a GNN is applied to compute the initial soft correspondence, and then a message passing network is applied to attempt to resolve structural mismatch. The reviewers agree that the second component (message passing) is novel, and after the rebuttal period, additional experiments were provided by the authors to demonstrate the effectiveness of this. Overall this is an interesting network solution for graph-matching, and would be a worthwhile addition to the literature.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #2 (Part 2)\", \"comment\": \"Relation to [2] Zhang and Lee: Deep Graphical Feature Learning for the Feature Matching Problem (ICCV'19)\\n============================================================================================\\nWe thank Reviewer #2 for pointing us to this recent work which we were not aware of yet. Here, the authors propose a compositional message passing algorithm that maps point coordinates into a high-dimensional space. The final matching procedure is done by computing the pairwise inner product between point embeddings, proposing enhancements to the initial embedding stage. It does not apply a refinement stage for soft correspondences. Hence, we see improvements in the architecture of the first stage as orthogonal to our work.\\n\\nTo compare our approach, we replicated the experimental setup of [2]. We train our unmodified anisotropic keypoint architecture on the synthetic keypoint training setup from [2] and evaluate the final model on the Pascal PF dataset [4], using only point coordinates as input.\\nOverall, our consensus architecture improves upon the state-of-the-art results of [2] on almost all categories while our $L=0$ baseline is weaker than the results reported in [2]. We report the final results below and will include them in the final version.\\n\\nHits@1 on the Pascal PF dataset:\\n--------------------+---------+--------+----------+--------+-------+----------+--------+-------+-------+---------+--------+\\nMethod | mean | aero | bicycle | bird | boat | bottle | bus | car | cat | chair | cow |\\n--------------------+---------+--------+----------+--------+-------+----------+--------+-------+-------+---------+--------+\\nDGFM [2] | 88.5 | 76.1 | 89.8 | 93.4 | 96.4 | 96.2 | 97.1 | 94.6 | 82.8 | 89.3 | 96.7 |\\n--------------------+---------+--------+----------+--------+-------+----------+--------+-------+-------+---------+--------+\\n $L=0$ | 86.7 | 64.6 | 86.9 | 76.6 | 88.5 | 96.0 | 98.4 | 91.4 | 89.6 | 93.4 | 77.2 |\\nOurs $L=10$ | 95.4 | 83.6 | 92.0 | 94.2 | 98.2 | 99.3 | 99.3 | 98.7 | 98.5 | 99.8 | 96.3 |\\n $L=20$ | 95.5 | 83.0 | 92.1 | 92.9 | 98.2 | 99.3 | 99.0 | 98.7 | 99.2 | 100.0 | 96.3 |\\n--------------------+---------+--------+----------+--------+-------+----------+--------+-------+-------+---------+--------+\\n\\n--------------------+---------+---------+--------+---------+-----------+-----------+---------+---------+---------+--------+--------+\\nMethod | mean | table | dog | horse | m-bike | person | plant | sheep | sofa | train | tv |\\n--------------------+---------+---------+--------+---------+-----------+-----------+---------+---------+---------+--------+--------+\\nDGFM [2] | 88.5 | 89.7 | 79.5 | 82.6 | 83.5 | 72.8 | 76.7 | 77.1 | 97.3 | 98.2 | 99.5 |\\n--------------------+---------+---------+--------+---------+-----------+-----------+---------+---------+---------+--------+--------+\\n $L=0$ | 86.7 | 97.9 | 85.7 | 73.3 | 76.8 | 69.4 | 97.4 | 76.4 | 85.1 | 91.7 | 97.4 |\\nOurs $L=10$ | 95.4 | 100.0 | 98.5 | 86.8 | 87.0 | 87.8 | 100.0 | 79.4 | 99.6 | 100.0 | 98.9 |\\n $L=20$ | 95.5 | 99.5 | 98.9 | 86.8 | 86.4 | 89.0 | 100.0 | 76.5 | 100.0 | 100.0 | 99.3 |\\n--------------------+---------+---------+--------+---------+-----------+-----------+---------+---------+---------+--------+--------+\\n\\nIn addition, it shows that our method works also well even when not taking any visual information into account. Besides, training converges significantly faster in comparison to [2]. 
Our algorithm does only make use of 64 000 synthetic training examples, while [2] uses 9 million examples and reports convergence not until about 4.5 million examples.\\n\\nRelation to [3] Swoboda et al.: A Study of Lagrangean Decompositions and Dual Ascent Solvers for Graph Matching (CVPR'17)\\n===========================================================================================================\\nThanks for mentioning the reference. We agree that the message passing algorithms presented in [3] shares some similarities with our message passing architecture on an abstract level. However, the dual ascent algorithms also known as \\\"message passing\\\" used here solves the graph matching problem by using MAP-inference, linear assignment problems and (several small) quadratic assignment problems. Therefore, in this case it is difficult to establish such a clear mathematical relation to our method as we were able to show for the GA method. We will add a discussion of [3] to the related work section on graph matching approaches.\\n\\n[2] Zhang and Lee: Deep Graphical Feature Learning for the Feature Matching Problem (ICCV'19)\\n[3] Swoboda et al.: A Study of Lagrangean Decompositions and Dual Ascent Solvers for Graph Matching (CVPR'17)\\n[4] Ham et al.: Proposal Flow (CVPR'16)\"}",
"{\"title\": \"Response to Review #2 (Part 1)\", \"comment\": \"Dear Reviewer,\\n\\nthanks a lot for reviewing our paper and providing valuable comments. We are working on incorporating your feedback in a new revision of our paper. We would like to provide more explanations to address your concerns.\\n\\nMain Contribution\\n================\\nWe emphasize that the main contribution of our work lies indeed in the second stage of our architecture, i.e., refining initial soft correspondences using a trainable message passing scheme that reaches for neighborhood consensus. Our approach allows us to not only distribute local information, but to also distribute global information in the form of node indicator functions/node colorings using purely local operators. The distribution of global information is then used to resolve ambiguities/false matchings made in the first stage of our architecture. In addition, we proposed optimizations to make the consensus stage scale to large real-world instances.\\n\\nWe agree that obtaining initial soft correspondences via similarity scores of node embeddings is not a novel contribution and shouldn't be viewed as one. We will make this more clear in a revised version.\\n\\nIn addition, we argue that our consensus stage has a huge impact on the resulting performance of our model. For example, on the WILLOW-ObjectClass dataset, it at least reduces the error of the initial model ($L=0$) by half across all categories. On the DBP15K dataset, it consistently improves the model's performance by 4 percentage points on average which we claim to be highly significant. We performed additional experiments as suggested by Reviewer #1 to emphasize the usefulness of our consensus stage (please see the response to Review #1 for more details).\\n\\nRelation to [1] Wang and Solomon: Deep Closest Point: Learning Representations for Point Cloud Registration (ICCV'19)\\n======================================================================================================\\nThis work tackles the problem of finding an unknown rigid motion between point clouds by first matching points followed by a differentiable SVD module. We agree that this work tackles the feature matching procedure in a similar fashion as we do in our initial feature matching procedure based on inner product similarity scores. Additionally, this work leverages a Transformer module to let point clouds know about each other before feature matching takes place.\\n\\nOur work differs in that we introduce a consensus stage to resolve ambiguities in matchings after the initial matching procedure based on neighborhood consensus. Hence, our method could be used to improve the results of [1] further. In order to resolve ambiguities, [1] can only rely on the least squares optimization, inherentely utilizing the rigid embedding of the point cloud in $\\\\mathbb{R}^3$, which does not exist for general graphs and is not required for our approach. In addition, our approach is highly scalable due to only operating an local neighborhoods, while the Transformer module operates on the whole point cloud in a global fashion.\\n\\nDue to the different task and difference in assumptions, we do refrain from an in-depth experimental comparison. We will nonetheless discuss the similarities/differences to this work further in our related work.\\n\\n[1] Wang and Solomon: Deep Closest Point: Learning Representations for Point Cloud Registration (ICCV'19)\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Dear Reviewer,\\n\\nthanks a lot for reviewing our paper and for the positive feedback.\\n\\nTo precisely describe our architecture with the theorems and their proofs, we opt for introducing a rigorous mathematical notation. In addition, we wanted to describe our procedure as general as possible. For example, $\\\\Psi_{\\\\theta}$ is not limited to be a particular GNN instance but can be any trainable architecture outputting node embeddings. We will strengthen the intuitive explanations and simplify notation in a revised version.\"}",
"{\"title\": \"Response to Review #1 (Part 3)\", \"comment\": \"Robustness to Node Addition or Removal\\n===================================\\n\\nOur algorithm is not only robust to edge variations, but also to the addition or removal of nodes. This can be explained by the fact that unmatched nodes do not have any influence on the neighborhood consensus error since those nodes do not obtain a color from the functional map given by $\\\\mathbf{S}$. Our neural architecture is able to detect and gradually decrease any false positive influence of these nodes in the refinement stage. Please note that the PascalVOC and DBP15K datasets already contain graph-pairs of varying sizes.\\n\\nWe further verified this experimentally on a synthetic toy dataset following a similar experimental setup to [1], where we additionally add $q$% nodes to the target graph and inter-connect those nodes with all other nodes based on the given Erd\\u0151s\\u2013R\\u00e9nyi edge probability $p$.\\n\\nHits@1 on synthetic graphs with $|\\\\mathcal{V}_s|=50$, $p=0.2$:\\n-------------------------+------------+------------+-------------+------------+-------------+\\nRefinement steps | $q=0.1$ | $q=0.2$ | $q=0.3$ | $q=0.4$ | $q=0.5$ |\\n-------------------------+------------+------------+-------------+------------+-------------+\\n$L=0$ | 78.97 | 55.46 | 42.04 | 31.14 | 26.10 |\\n$L=10$ | 100.00 | 100.00 | 99.66 | 98.98 | 98.94 |\\n-------------------------+------------+------------+-------------+------------+-------------+\\n\\nHits@1 on synthetic graphs with $|\\\\mathcal{V}_s|=100$, $p=0.1$:\\n-------------------------+------------+------------+-------------+------------+-------------+\\nRefinement steps | $q=0.1$ | $q=0.2$ | $q=0.3$ | $q=0.4$ | $q=0.5$ |\\n-------------------------+------------+------------+-------------+------------+-------------+\\n$L=0$ | 72.47 | 43.68 | 32.56 | 22.48 | 19.43 |\\n$L=10$ | 100.00 | 100.00 | 99.99 | 99.82 | 99.63 |\\n-------------------------+------------+------------+-------------+------------+-------------+\\n\\nAs can be seen, our consensus stage is extremely robust to the addition/removal of nodes while the first stage alone has major difficulties in finding the right matching.\\n\\n[1] Xu et al.: Gromov-Wasserstein Learning for Graph Matching and Node Embedding (ICML'19)\"}",
"{\"title\": \"Response to Review #1 (Part 2)\", \"comment\": \"Comparison to the Graduated Assignment Algorithm\\n=============================================\\n\\nAs stated in Section 3.3, our algorithm can be viewed as a generalization of the Graduated Assignment (GA) algorithm extending it by trainable parameters. To evaluate the impact of this we replaced $\\\\Psi_{\\\\theta_2}$ by the fixed function $F(\\\\mathbf{X}, \\\\mathbf{A}, \\\\mathbf{E}) = \\\\mathbf{A} \\\\mathbf{X}$ in the second stage of our approach. Results are shown below:\\n\\nHits@1 on the WILLOW-ObjectClass dataset:\\n--------------------------------+------------------+-----------------+------------------+------------------+\\nIsotropic Methods | Motorbike | Car | Duck | Winebottle |\\n--------------------------------+------------------+-----------------+------------------+------------------+\\n $L=0$ | 83.89 \\u00b1 2.65 | 84.97 \\u00b1 3.00 | 86.80 \\u00b1 2.41 | 94.55 \\u00b1 1.46 |\\n--------------------------------+------------------+-----------------+------------------+------------------+\\nFixed $F$ $L=10$ | 91.33 \\u00b1 2.10 | 88.02 \\u00b1 2.46 | 89.46 \\u00b1 1.69 | 96.58 \\u00b1 0.87 |\\n $L=20$ | 91.51 \\u00b1 2.09 | 87.29 \\u00b1 4.22 | 89.44 \\u00b1 2.43 | 97.04 \\u00b1 0.47 |\\n--------------------------------+------------------+-----------------+------------------+------------------+\\nTrainable $\\\\Psi_{\\\\theta_2}$ $L=10$ | 92.73 \\u00b1 2.60 | 93.18 \\u00b1 3.01 | 91.80 \\u00b1 2.00 | 97.97 \\u00b1 0.78 |\\n $L=20$ | 93.10 \\u00b1 2.50 | 93.77 \\u00b1 3.18 | 92.11 \\u00b1 2.33 | 98.16 \\u00b1 0.78 |\\n--------------------------------+------------------+-----------------+------------------+------------------+\\n\\nHits@1 on the DBP15K dataset:\\n--------------------------------+-----------+------------+-----------+-----------+-----------+-----------+\\nMethod | ZH->EN | EN->ZH | JA->EN | EN->JA | FR->EN | EN->FR |\\n--------------------------------+-----------+------------+-----------+-----------+-----------+-----------+\\n $L=0$ | 72.53 | 67.80 | 73.70 | 70.01 | 86.39 | 84.23 |\\n--------------------------------+-----------+------------+-----------+-----------+-----------+-----------+\\nFixed $F$ $L=10$ | 72.92 | 68.80 | 74.91 | 71.38 | 86.54 | 84.86 |\\n--------------------------------+-----------+------------+-----------+-----------+-----------+-----------+\\nTrainable $\\\\Psi_{\\\\theta_2}$ $L=10$ | 77.16 | 71.77 | 77.36 | 73.93 | 89.12 | 87.50 |\\n--------------------------------+-----------+------------+-----------+-----------+-----------+-----------+\\n\\nAs one can see, using trainable neural networks $\\\\Psi_{\\\\theta_2}$ consistently improves upon the results of using the fixed-function message passing scheme. We will add those results to the paper.\", \"using_trainable_neural_networks_instead_of_the_ga_process_has_further_advantages\": [\"In real-world applications it is often difficult to find meaningful similarities between node and edge features. Our approach learns how to make use of (continuous) features and uses them to guide the refinement procedure further.\", \"It allows us to choose from a variety of task-dependent GNN operators, e.g., for learning geometric/edge conditioned patterns or for fulfilling injectivity requirements. The theoretical expressivity discussed in Section 5 could even be enhanced by making use of higher-order GNNs, which we leave for future work.\"]}",
"{\"title\": \"Response to Review #1 (Part 1)\", \"comment\": \"Dear Reviewer,\\n\\nthanks a lot for reviewing our paper and providing valuable comments. We are working on incorporating your feedback in a new revision of our paper. We would like to provide more explanations to address your concerns.\\n\\nMain Contribution\\n================\\nWe emphasize that the main contribution of our work lies indeed in the second stage of our architecture, i.e., refining initial soft correspondences using a trainable message passing scheme that reaches for neighborhood consensus. Our approach allows us to not only distribute local information, but to also distribute global information in the form of node indicator functions/node colorings using purely local operators. The distribution of global information is then used to resolve ambiguities/false matchings made in the first stage of our architecture. In addition, we proposed optimizations to make the consensus stage scale to large real-world instances.\\n\\nWe agree that obtaining initial soft correspondences via similarity scores of node embeddings is not a novel contribution and shouldn't be viewed as one. We will make this more clear in a revised version. We performed additional experiments to emphasize the usefulness of our consensus stage (see below).\\n\\nImpact of the Consensus Stage\\n===========================\\nWe argue that our consensus stage has a huge impact on the resulting performance of our model. For example, on the WILLOW-ObjectClass dataset, it at least reduces the error of the initial model ($L=0$) by half across all categories. On the DBP15K dataset, it consistently improves the model's performance by 4 percentage points on average which we claim to be highly significant.\\n\\nWe nonetheless agree with Reviewer #1 that those improvements should be more significant when using weaker baselines. To verify this, we conducted additional experiments as suggested where we replaced the first stage GNN module with a weaker MLP. The results are shown below which we will include in the final manuscript.\\n\\nHits@1 on the WILLOW-ObjectClass dataset:\\n----------------------------+------------------+-----------------+------------------+------------------+\\n$\\\\Psi_{\\\\theta_1}$ = MLP | Motorbike | Car | Duck | Winebottle |\\n----------------------------+------------------+-----------------+------------------+------------------+\\n $L=0$ | 56.85 \\u00b1 2.65 | 73.44 \\u00b1 2.65 | 71.93 \\u00b1 2.10 | 86.10 \\u00b1 1.25 |\\n----------------------------+------------------+-----------------+------------------+------------------+\\nisotropic $L=10$ | 80.34 \\u00b1 2.34 | 81.31 \\u00b1 2.48 | 81.16 \\u00b1 2.55 | 93.53 \\u00b1 1.38 |\\nisotropic $L=20$ | 82.24 \\u00b1 3.06 | 82.49 \\u00b1 3.70 | 81.84 \\u00b1 2.92 | 95.14 \\u00b1 1.58 |\\n----------------------------+------------------+-----------------+------------------+------------------+\\nanisotropic $L=10$ | 87.15 \\u00b1 3.27 | 91.56 \\u00b1 2.92 | 88.36 \\u00b1 2.55 | 96.57 \\u00b1 0.83 |\\nanisotropic $L=20$ | 94.16 \\u00b1 3.03 | 94.23 \\u00b1 2.14 | 90.03 \\u00b1 2.21 | 97.24 \\u00b1 0.84 |\\n----------------------------+------------------+-----------------+------------------+------------------+\\n\\nIt can be seen that we can still obtain SOTA results even when starting from a weak initial baseline. Here, the consensus stage improves the initial matchings significantly, with nearly up to 40 percentage points improvements on the Motorbike class. 
However, it is worth noting that good initial matchings do help the consensus stage to improve its performance further, which stresses the importance of our two-stage approach. Furthermore, starting from weak initial matchings takes significantly more refinement steps to converge (as can be seen by the difference between $L=10$ and $L=20$).\\n\\nIt should be noted that the first stage cannot infer any information about consensus. In order to check for consensus, an initial matching is needed that can be tested. The first stage siamese network has no information flow between both heads until the feature matching. The second stage can rerank those hypotheses based on additional information: matching agreement in neighborhoods. It can not be applied without an initial ranking of correspondences.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors proposed a message passing neural network-based graph matching methods. The overall framework can be viewed as a graph siamese network, where two set of points are passing through the same graph neural network, and then two new embeddings are generated. Using the two embedding the similarity between points can be computed and then the final matching can be generated.\\n\\nThe overall structure of this paper is similar to [1] and [2], the authors should discuss the difference of the proposed with these two papers, if it is possible, the authors may try to compare with these two methods in experiments. Currently, I think the main contribution of the paper should be the new message-passing scheme (in Sec. 3.2). However, from the current experiment, I can not see if the performance improvement is from the new message-passing scheme.\\n\\nIn fact, the message passing scheme is also related to the dual decomposition framework, which is previously used in the graph matching area. For example, in [3], a message-passing algorithm derived from dual decomposition is used to solve the graph matching problem. The authors may also consider add some discussion the difference between message-passing derived from dual decomposition and the message passing in the graph neural network.\\n\\n==================================== After Revision ==============================================\\nIn the new experiment, the authors proved that the new message-passing scheme (i.e. the consensus stage) in Sec 3.2 can successfully improve the performance by refining original assignment. Thus I modify the score to weak accept.\\n\\n\\n[1] Wang, Yue, and Justin M. Solomon. \\\"Deep Closest Point: Learning Representations for Point Cloud Registration.\\\", ICCV 2019,\\n[2] Zhen Zhang, and Wee Sun Lee. \\\"Deep Graphical Feature Learning for the Feature Matching Problem.\\\", ICCV 2019\\n[3] Paul Swoboda et. al. \\\"A study of Lagrangean decompositions and dual ascent solvers for graph matching.\\\", CVPR 2017\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a two-stage GNN-based architecture to establish correspondences between two graphs. The first step is to learn node embeddings using a GNN to obtain soft node correspondences between two graphs. The second step is to iteratively refine them using the constraints of matching consensus in local neighborhoods between graphs. The overall refining process resembles the classic graph matching algorithm of graduated assignment (Gold & Rangarajan, 1996), but generalizes it using deep neural representation. Experiments show that the proposed algorithm performs well on real-world tasks of image matching and knowledge graph entity alignment.\\n\\nThe paper is interesting and has some good potential but lacks some important evaluations and analyses. My main concerns are as follows. \\n\\n1) The consensus in the second stage is crucial? \\nAs the title shows, the main technical contribution lies in the second stage of consensus inducing. But, for the real tasks, in the experiments, the gain by the second stage is not significant or often negligible (L=0 vs. L=10 or 20 in Table 1,2,3). The results of the first stage (L=0) already give better results than all the baselines in many cases, so that most gains appear to come from the usage of GNNs for representation. This makes the major contribution of this work less significant. I hope the authors justify this. And, I guess that's maybe because the consensus information may also be induced in the first stage by matching nodes with relational presentations learned using a GNN. To see this, the authors may run the second stage only without the first stage. \\n\\n2) Comparison to the graduated assignment (GA) process\\nAs discussed in 3.3, the proposed neighborhood consensus can be viewed as a generalization of GA of Eq.6 with trainable neural modules. But, it's not actually shown what is the gain by this generalization. This needs to be shown experimentally by substituting the second stage by GA process. \\n\\n3) Robustness to node addition or removal. \\nAll the experiments look assuming only edges are varied. Is this algorithm robust to node addition or removal, occurring in many practical graph matching problems? This needs to be also discussed. \\n\\n======================================\\n\\nThe rebuttal succeeds in addressing most of my concerns so that I upgrade my initial rating to weak accept. I hope all the points in the rebuttal are included in the final manuscript.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper suggests a framework for answering graph matching questions consisting of local node embeddings with a message passing refinement step.\\n\\nThe paper has well written text, offers what appear to be nice experiments validating the method and discusses its own limitations.\\n\\nI am giving a weak accept. \\n\\nI the weak accept is my reflection of my inability to provide useful feedback. This is also out of domain for me and I am not a useful reviewer for this paper.\\n\\nAs a general comment, I will say that the paper adopts a highly mathematical style that will be off putting to many readers. Many of the expressions are 'high entropy' For instance, the embedding network is given as $\\\\mathbf{\\\\Psi}_{\\\\theta_1}$ throughout. This is seven levels of typographical distinction for the main character of the story. This is a (1) bold (2) capital (3) Greek letter with a (4) subscript that is (5) greek with a (6) subscript that is (7) numeric. I understand that each of these levels of distinction has a purpose and a meaning, but it is also arguably a much richer designation that is necessary. Generally the paper feels like this, as if its being too unnecessarily specific. Personally I found the paper rather difficult to read and decompose, which I believe does count against the paper. The overly specific nature of paper will cut into its potential readership strongly.\"}"
]
} |
B1lJzyStvS | Self-Supervised Learning of Appliance Usage | [
"Chen-Yu Hsu",
"Abbas Zeitoun",
"Guang-He Lee",
"Dina Katabi",
"Tommi Jaakkola"
] | Learning home appliance usage is important for understanding people's activities and optimizing energy consumption. The problem is modeled as an event detection task, where the objective is to learn when a user turns an appliance on, and which appliance it is (microwave, hair dryer, etc.). Ideally, we would like to solve the problem in an unsupervised way so that the method can be applied to new homes and new appliances without any labels. To this end, we introduce a new deep learning model that takes input from two home sensors: 1) a smart electricity meter that outputs the total energy consumed by the home as a function of time, and 2) a motion sensor that outputs the locations of the residents over time. The model learns the distribution of the residents' locations conditioned on the home energy signal. We show that this cross-modal prediction task allows us to detect when a particular appliance is used, and the location of the appliance in the home, all in a self-supervised manner, without any labeled data. | [
"Appliance usage",
"self-supervised learning",
"multi-modal learning",
"unsupervised learning"
] | Accept (Poster) | https://openreview.net/pdf?id=B1lJzyStvS | https://openreview.net/forum?id=B1lJzyStvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"lYwrjnk21",
"BJgTpNPjjS",
"ByxVf3fisS",
"BJebqjMjor",
"SyeDTqGjiS",
"BkxK89GssB",
"r1eYGqGijH",
"BygE9mFVqB",
"SygwxOtM5H",
"HyxczJKYtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726592,
1573774533186,
1573755916404,
1573755785321,
1573755583304,
1573755473353,
1573755408547,
1572275083842,
1572145135305,
1571553041555
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1564/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1564/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1564/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Authors proposed a multi-modal unsupervised algorithm to uncover the electricity usage of different appliances in a home. The detection of appliance was done by using both combined electricity consumption data and user location data from sensors. The unit of detection was set to be a 25-second window centered around any electricity usage spike. Authors used a encoder/decode set up to model two different factors of usage: type of appliance and variety within the same appliance. This part of the model was trained by predicting actual consumption. Then only the type of appliance was used to predict the location of people in the house, which was also factored into appliance related and unrelated factors. Locations are represented as images to avoid complicated modeling of multiple people.\\n\\nThe reviewers were satisfied with the discussion after the authors, and therefore believe this work is of general interest to the ICLR community.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Minor updates to the paper\", \"comment\": \"Dear reviewers,\\n\\nThank you again for the thoughtful comments. We made minor changes to the paper (colored in blue) to address some of the issues. We clarified EL-kmeans and the sentence explaining our clustering algorithm. We also improved the figure caption, strengthened our motivation, and added clearer pointers to model details and discussions in the appendix.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your thoughtful comments. We are glad that you found the work and dataset interesting and new to the community, and the paper well written and clear.\\n\\n[Latent variable z_{t,cat}]\\nYes, you are right that location prediction is important in learning a meaningful latent variable z_{t,cat}. Since our model is trained end-to-end, the encoding of z_{t,cat} is guided by location data, allowing the encoder, E, to produce meaningful latent vectors. \\n\\n[Model details]\\nWe agree with the reviewer that having more model details in the main paper will improve clarity. Due to the page limit, we discussed our model implementation details in Appendix 8.4, including the network architectures, parameters, and training details. Both location predictors use 5 layers of 3D deconvolution to model the location images. We will try to fit more model details into the main paper.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your thoughtful comments. We are glad that you appreciate our data collection, analysis, and the proposed algorithm. We are also glad that you find our illustrations intuitive and our code and dataset helpful to the community.\\n\\n[Baseline setup]\\nWe will revise the text to add more details about the baselines. Specifically, EL-KMeans takes the same input as our method, i.e., windows of energy signal and the corresponding windows of location data. For each window, EL-kmeans concatenates the energy signal, the frames of location images (flattened as a 1-d vector), and the context vector to create the feature vector. EL-KMeans then runs KMeans++ on the windows of feature vectors to output the clustering results. We will clarify the explanation in the paper.\\n\\nThe reviewer might also be asking why our method has a big improvement over EL-KMeans. As shown in our results, methods with location information like EL-KMeans do perform better than baselines without location information, showing that location data is useful for detecting appliance activation events. However, simply concatenating location and energy data and clustering them as done by EL-Kmens is not good enough. This is because location and energy data are unrelated most of the time and become related only when an appliance is turned on. Furthermore, there are typically multiple residents in the home, so it is important to design a method to separate the location of the user interacting with the appliance from the locations of other residents. Our model addresses these issues by learning the cross-modal prediction between the two streams. \\n\\n[Density propagation algorithm]\\nSorry for not being clearer. This sentence tries to explain the intuition behind our clustering algorithm and why it performs well. The intuition is that an appliance cluster should have a concentrated location -- i.e., all instances in the cluster should have their location in the same area (i.e., high density). Thus the clustering algorithm iteratively expands a cluster by adding events that are in the same energy embedding neighborhood but also have approximately the same locations. As the algorithm expands the cluster, visually it \\u201cpropagates\\u201d the cluster to regions with high location density. We will update that sentence to avoid confusion.\\n\\n[Results in Home 4] \\nIn Home 4, both the hairdryer and rice-cooker are used only occasionally. As a result, the model does not see enough instances of these appliances to predict their locations with high enough predictability scores. Consequently, the model does not discover them successfully and includes them in the \\u201cnot detected\\u201d cluster. This cluster includes many background events, and hence the f1 score is low for both appliances in that home. \\n\\n[Data size]\\nOur data size (number of homes and duration) is comparable to the commonly used datasets for this task. Past work including unsupervised (AFAMAP [1], VarBOLT [2]) and supervised (NeuralNILM [3], seq2point [4], dAM [5]) methods evaluated on 1 to 5 homes using datasets like REDD [6] and UK-DALE [7]. \\n\\nCollecting such datasets from actual homes for several months is difficult, as acknowledged by the reviewer. It often faces many deployment and system challenges different from collecting a dataset from online resources (e.g., online images or videos), which limits the data size. 
We hope that our new dataset can help address this problem in the community by providing additional data from a wider variety of homes. \\n\\nAdditionally, our dataset is the first to include both concurrent streams of home energy and residents\\u2019 location data. This creates new opportunities for understanding appliance usage and developing multi-modal solutions.\\n\\n[1] Kolter & Jaakkola, 2012 (see reference in our paper)\\n[2] Lange & Berges, 2018 (see reference in our paper)\\n[3] Kelly & Knottenbelt, 2015 (see reference in our paper)\\n[4] Zhang et al., 2018 (see reference in our paper)\\n[5] Bonfigli et al., 2018 (see reference in our paper)\\n[6] Kolter, J. Zico, and Matthew J. Johnson. \\\"REDD: A public data set for energy disaggregation research.\\\" Workshop on Data Mining Applications in Sustainability (SIGKDD), 2011.\\n[7] Kelly, Jack, and William Knottenbelt. \\\"The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes.\\\" Scientific data 2 (2015): 150007.\"}",
"{\"title\": \"Response to Reviewer 2 (part 1)\", \"comment\": \"Thank you for your thoughtful comments. We are glad that you find our paper well-written, focused, and interesting. We are also happy that you are open to adjusting the rating and helping us improve the paper. We addressed the issues below.\\n\\n[Motivation]\\nAs noted by Reviewer 1, learning appliance usage is useful for improving energy efficiency. There are also other applications, such as health sensing and learning behavioral analytics. We briefly discuss each of these applications below.\\n\\nFor improving energy efficiency, appliance usage information has many consumer, industry, and policy benefits as discussed in [1] (see Table 1 therein for a summary). Various studies have shown that appliance level information can help save 12~20% of energy in residential buildings [1, and citations therein]. There are multiple reasons for this and we explain a few from a utility company\\u2019s perspective. \\n\\nThe cost to supply energy changes minute by minute because electricity is difficult to store, and generation cost and energy demands vary frequently [2]. Appliance usage information allows utility companies to better analyze energy usage patterns, and reduce peak demands by providing personalized feedback for different households. Such granular appliance usage information could also improve load forecasting, which is critical for energy purchasing, generation, delivery, and infrastructure planning [1]. These improvements can ultimately lead to more efficient energy markets.\\n\\nBesides energy efficiency, appliance usage information has applications in health sensing. It provides a passive way to understand user habits and behavior at home. For an elderly person living alone, this information helps caregivers assess whether the person is able to perform basic daily activities such as cooking, eating, washing their clothes, etc. It also alerts the caregiver to changes in the elderly person\\u2019s lifestyle such as changes in their eating habits (skipping meals) or changes in their needs for heating or cooling. Furthermore, an elderly person turning on appliances late at night could indicate changes in their sleep habits.\\n\\nLastly, generating analytics for how people use appliances is important for multiple businesses. One example is behavior-based home insurance. Similarly to how car insurance companies today reward good driving behavior with lower insurance rates, home insurance companies are interested in appliance usage information for better risk assessments. Detecting and analyzing abnormal energy patterns is helpful for reducing residential accidents such as fires due to fire hazards. Another example is related to e-commerce and consumer companies that are interested in using home appliance information to provide more relevant recommendations. For example, people who use the stove for cooking every day may be interested in cooking-related ads. On the other hand, people who hardly use the stove are unlikely to be good targets of such ads. We will include some of the explanations above in our paper to clarify the motivation.\\n\\n[1] Armel, K. Carrie, et al. \\u201cIs disaggregation the holy grail of energy efficiency? The case of electricity.\\u201d Energy Policy 52 (2013): 213-234.\\n[2] U.S. Energy Information Administration, U.S. Department of Energy. 
\\u201cElectricity explained - Factors affecting electricity prices.\\u201d (2019) https://www.eia.gov/energyexplained/electricity/prices-and-factors-affecting-prices.php\"}",
"{\"title\": \"Response to Reviewer 2 (part 2)\", \"comment\": \"[Why we can\\u2019t just use smart plugs all around]\\nUnfortunately attaching a smart plug to each appliance incurs a large overhead and has several limitations. This is in fact the reason that past work has focused on methods that use data from the utility meter. We list some of the limitations below.\\n\\nFirst, in most modern homes, big kitchen appliances (fridge, microwave, dishwasher, etc.) are embedded into the kitchen cabinets and their plugs are not easily accessible without taking them out of their enclosures. Even when they are not embedded, many large appliances (e.g., stove, washer & dryer) consume very high current and hence their plugs typically have shapes and current levels incompatible with existing smart plugs. \\n\\nSecond, managing a deployment of 10 to 15 smart plugs in one\\u2019s home is difficult. Each smart plug has to be continuously connected to a wireless network such as WiFi. However, WiFi coverage can be spotty. This is difficult since a natural location for a smart plug connected to the fridge or the dishwasher is behind that appliance, but the large metallic backs of such appliances significantly attenuate and can block radio signals. \\n\\nThird, smart plugs are significantly large and bulky. This means that leaving a smart plug plugged into an outlet would no longer allow appliances to be pushed against the wall in front of the outlet. Furthermore, due to their size, smart plugs cover more than one socket when attached to an outlet. As a result, users would need to use extra extension cables to rearrange appliances that would have shared the same outlet without the smart plug.\\n\\nFinally, kids, house cleaners, or other people who are unaware of the function of those smart plugs may remove them or misplace them. For example, the house cleaner might accidentally disconnect appliances for cleaning and then reconnect them to different smart plugs. As a result, a smart plug that was associated with the coffee machine might now become associated with the kettle, etc. Tracking and ensuring that each smart plug stays operational, connected to WiFi, and associated with its correct appliance for months or years is a significant burden. \\n\\nIn contrast, the location sensor is just one sensor, and it does not require calibration for each home. Although we built our sensor, multiple companies already have similar products. This includes big companies like Texas Instruments and startups like Walabot and Emerald Innovations.\"}",
"{\"title\": \"Response to Reviewer 2 (part 3)\", \"comment\": \"[Other minor issues]\\nAs the reviewer recommended, we will revise the text to avoid repeating the same sentences in the abstract and introduction and to make it clear from the title that we are referring to appliance usage by a human/user.\", \"figure_and_table_captions\": \"we thank the reviewer for the suggestion. We made the captions brief due to space limitations. We will revise the text to make the captions more descriptive.\\n\\nFigure 1 (a): The data was collected from a home with multiple graduate students. The subjects have a late schedule and typically leave home around noon. 14-18 (i.e., 2 pm to 6 pm) is typically the time when no one is at home, and hence has very few appliance events. On the day illustrated in the figure, someone returned home and started preparing dinner around 20 (i.e., 8 pm). The heater was turned on around 22 (10 pm), and continued to produce background events until it went off at 12pm.\\n\\nFigure 1(c/d): Thanks for the suggestions. We will try to encode time as color/transparency in Figure 1(d).\", \"softmax\": \"We choose the vector length of z_{t,cat} to be an upper bound on the number of appliances, and aim to learn a sparse dictionary of the appliance types.\", \"person_at_the_edge_of_visible_space\": \"the location sensor has a relatively large coverage area that is up to ~40 feet away from the sensor. Also, since wireless signals traverse walls, the sensor does not lose people when they leave the room. As a result of large through-wall coverage, and since the appliances of interest are all inside the coverage area (away from its boundary), the training is not impacted when people are at the edge of the coverage area.\\n\\nDoor / leaving event: we apologize for the confusing name. It is a large ceiling light fixture that gets turned on when people answer the door or leave home. \\n\\nSec 6.3 and 6.5: Thanks for the feedback. The last page might read a bit rushed because we tried to fit many results into 8 pages. We will revisit it to improve clarity.\", \"fig_5\": \"We are glad that you like the figure. The reason why L_g\\u2019s prediction in Fig 5(c) is blurry is because it is a probability distribution of the likely locations given the context, whereas the locations in Fig.5(a) are sharp because they refer to a specific instance.\", \"limitations\": \"Thanks for the suggestion. We will include a limitation section. We briefly discuss some limitations below. One limitation is that some remotely activated appliances may not have predictable locations (as discussed in Appendix 8.4). Another is that our location sensor has a limited coverage area (around 40 feet in radius). This is enough to cover a typical one-bedroom apartment or the main kitchen and living room areas of larger apartments. For the larger homes, one could deploy a second sensor, similarly to how a WiFi repeater extends the coverage area.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a learning algorithm to recover the events of using an appliance and as well as the location of the appliance in a home by using smart electricity meter and a motion sensor installed a home. In the model, the input is a window of electricity energy consumption and context and the output of the model is the location collected by the motion sensor. The appliance activation as the latent variables is learned using a autoencoder architecture.\\nThis work is very interesting and new to the energy efficiency community and non-intrusive load monitoring. This provide an alternative approach to understand how energy was used in a house and so could be used potentially for energy efficiency and malfunction detection of appliances. The data is also new to the community. The paper is well written and the experiments are clear to me. I have some more detailed concerns:\\n1) One thing is surprising to me: in the autoencoder, you want to learning the latent variables {Z_t, Z_{t,cat}}. It is surprising that for example Z_{t,cat} was exactly the appliance category. I suppose it was aided by the location information presented in P(l|y,c)? I had an experience training an autoencoder for energy disaggregation, but it never worked well because the latent variables Z could be arbitrary.\\n2) It would be more readable if you the model was provided with more details, for example, the detailed model for L_e(Z_{t,cat};\\\\theta_{L_e}). Although p_{\\\\theta_{L_g}} and L_g were defined in a similar way, they should be explicitly given.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"# Review ICLR20, Self-Supervised Learning of Appliance Usage\\n\\nThis review is for the originally uploaded version of this article. Comments from other reviewers and revisions have deliberately not been taken into account. After publishing this review, this reviewer will participate in the forum discussion and help the authors improve the paper.\\n\\n\\n## Overall\\n\\n**Summary**\\n\\nThe authors introduce a new method for classifying which appliance was turned on by looking at the change in total household electricity consumption and tracking people's coarse positions in the house.\\n\\n\\n**Overall Opinion**\\n\\nThe paper is concise and interesting, but for now, I have to reject it because the same major things aren't clear to me:\\n\\n- Why are you doing this, to begin with, i.e. \\\"... to optimize energy consumption\\\"?. What does energy consumption have to do with classifying which appliance was turned on? There might be the rare edge case of a fridge door left open accidentally but that can't be the main argument here, can it?\\n- Now, assuming that there is a benefit to this, why can't we use the smart plugs all around? You already use them to provide ground truth. But I imagine they are cheap and easy enough to distribute at large scale, whereas the location sensor was custom-built by you if I understood correctly and probably has to be calibrated for every house.\\n\\nOther than that, the paper is pretty good. Here are some minor...\\n\\n## Specific comments and questions\\n\\n- I got slightly disappointed when I picked up the paper because I was in a robotics mindset and assumed \\\"Self-supervised learning of appliance usage\\\" was some new way for a robot to analyze appliances and push buttons. I'd add a \\\"human\\\" in the title for clarity, e.g. \\\"Self-supervised Learning of Human Appliance Usage\\\" or \\\"Self-supervised Learning of Appliance Usage from Human Location Data\\\". But that's just my opinion, no pressure on this one.\\n\\n### Abstract\\n\\n- Please never repeat sentences from the abstract in the introduction or vice versa, good as they might be.\\n\\n### Intro & Rel. Work\\n\\nall good\\n\\n### Problem Formulation\\n\\n- Please always provide a proper caption for your figure so that the caption contains enough information for the reader to understand what's going on in the figure without having to search for it in the main text. This applies to Fig.1 and even more so to Fig.2 where I'd recommend explaining in broad strokes the information flow.\\n- Fig. 1 (a) Why is there higher power usage between 02-06 compared to 14-18 o'clock? Is that a heater as the outside temperatures drop overnight? (c/d) I don't think that's a good way of visualizing this. I don't get anything out of diagram (c) and I'd prefer it if you could encode the time aspect in the diagram (d) as color/alpha. E.g. blue agent's oldest position is almost imperceptibly white, growing bluer and darker as the data points get closer to the final step in the recording.\\n\\n### Model\\n\\n- Doesn't the softwax require to know the number of appliances a priori?\\n- How do you deal with $\\\\alpha$ and $P_t$ when a person is at the edge of the visible space? 
Does this cause problems during training or did this never occur? I could imagine, since the location is noisy, a person could jump in and out of the visible area and cause training instability.\\n\\n### Datasets\\n\\nall good\\n\\n### Results\\n\\n- What's \\\"Door / leaving\\\" in Table 2? Which appliance is this?\\n- I think the standard at ICLR is to have the table and figure captions both below the respective objects\\n- The last 3 sections, 6.3 to 6.5, read a bit rushed and might need a rewrite for clarity. \\n- Fig. 5 is nice. But shouldn't the signal add up to explain the entire scene? There's a bit of blur next to the couch. Is this a training set vs test set artifact?\\n\\n### Conclusion\\n\\n- This needs to be developed a bit more: what are some downsides to your method? Where does it NOT work or what makes it prone to error? And what are future directions based on this work? What's worth researching further or have you solved appliance use detection for good?\\n\\n---\\n\\nI'm sure there are some good answers for my main concerns outlined at the beginning and after that, I'd be happy to adjust my rating. The rest are minor issues. Like I mentioned, save for page 8, the paper is well-written and focused.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"Authors proposed a multi-modal unsupervised algorithm to uncover the electricity usage of different appliances in a home. The detection of appliance was done by using both combined electricity consumption data and user location data from sensors. The unit of detection was set to be a 25-second window centered around any electricity usage spike. Authors used a encoder/decode set up to model two different factors of usage: type of appliance and variety within the same appliance. This part of the model was trained by predicting actual consumption. Then only the type of appliance was used to predict the location of people in the house, which was also factored into appliance related and unrelated factors. Locations are represented as images to avoid complicated modeling of multiple people.\\n\\nAs the final step, a customized clustering algorithm was used to turn appliance related events into clusters that each represent an appliance.\\n\\nAuthors trained the model with real world data collected from 4 homes over a few months each. Smart plugs and some human labeling served as ground truth. The results showed good performance of the proposed unsupervised algorithm in recovering the actual appliances in the homes.\\n\\nReviewer acknowledge the difficulty in collecting such data and the careful analysis into data characteristics which led to the algorithm design. Reviewer also found the illustrations (esp Fig 1) quite intuitive for understanding the problem. Proposed algorithm is quite reasonable. The release of code and data could also be helpful to the community.\", \"reviewer_also_saw_some_non_trivial_issues_that_authors_may_need_to_fix_or_explain\": \"1) Baseline setup seems to be too weak. Authors didn't explain how EL-KMeans was conducted which is non-trivial. The given citation (Arthur & Vassilvitskii, 2007) is a very generic paper discussing KMeans++, not EL-KMeans. In Reviewer's opinion this needs to be fixed for acceptance.\\n\\n2) in Section 6.2 Ablation study, Author mentioned \\\"we adopt a density propagation algorithm that only uses local neighborhood distance\\\" but Reviewer didn't find any content related to the algorithm. Is this the cluster algorithm?\", \"other_issues\": \"1) In table 2, What happened to Home 4's hair dryer and rice cooker? Their F1 are as low as 1.1.\\n\\n2) The data size is quite small, which makes the conclusion less strong.\"}"
]
} |
ryxC-kBYDS | Gaussian Conditional Random Fields for Classification | [
"Andrija Petrovic",
"Mladen Nikolic",
"Milos Jovanovic",
"Boris Delibasic"
] | In this paper, a Gaussian conditional random field model for structured binary classification (GCRFBC) is proposed. The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. The model representation of GCRFBC is extended by latent variables which yield some appealing properties. Thanks to the GCRF latent structure, the model becomes tractable, efficient, and open to improvements previously applied to GCRF regression. Two different forms of the algorithm are presented: GCRFBCb (GCRFBC - Bayesian) and GCRFBCnb (GCRFBC - non-Bayesian). The extended method of local variational approximation of the sigmoid function is used for solving empirical Bayes in the GCRFBCb variant, whereas the MAP value of the latent variables is the basis for learning and inference in the GCRFBCnb variant. The inference in GCRFBCb is solved by Newton-Cotes formulas for one-dimensional integration. Both models are evaluated on synthetic data and real-world data. We show that both models achieve better prediction performance than relevant baselines. Advantages and disadvantages of the proposed models are discussed. | [
"Structured classification",
"Gaussian conditional random fields",
"Empirical Bayes",
"Local variational approximation",
"discriminative graph-based model"
] | Reject | https://openreview.net/pdf?id=ryxC-kBYDS | https://openreview.net/forum?id=ryxC-kBYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0bhi5RGdN1",
"H1evo4FcjS",
"B1glOEKcjH",
"HkxMCAOqjH",
"ryl7tCdqoS",
"HkxXn-jJ9B",
"B1lu-0I2Fr",
"r1gpwXPsYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726563,
1573717150632,
1573717096429,
1573715658059,
1573715579501,
1571955115299,
1571741183990,
1571677029479
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1563/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1563/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1563/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Main content:\\n\\nBlind review #2 summarizes it well:\\n\\nThe authors provide a method to modify GRFs to be used for classification. The idea is simple and easy to get through, the writing is clean. The method boils down to using a latent variable that acts as a \\\"pseudo-regressor\\\" that is passed through a sigmoid for classification. The authors then discuss learning and inference in the proposed model, and propose two different variants that differ on scalability and a bit on performance as well. The idea of using the \\\\xi transformation for the lower bound of the sigmoid was interesting to me -- since I have not seen it before, its possible its commonly used in the field and hopefully the other reviewers can talk more about the novelty here. The empirical results are very promising, which is the main reason I vote for weak acceptance. I think the paper has value, albeit I would say its a bit weak on novelty, and I am not 100% convinced about the this conference being the right fit for this paper. The authors augment MRFs for classification and evaluate and present the results well. \\n\\n--\", \"discussion\": \"As blind review #1 points out:\\n\\nEven from the experiments (including the new traffic one), it is unclear how much better the method is either because we don't know if the improvements are statistically significant and that in many of the results, unstructured models like RF or logistic regression are very competitive casting some doubt on whether these datasets were well suited for structured prediction.\\n\\n--\\n\\nThis paper is a desk reject as review #2's points out that anonymity was broken by the inclusion of a code link that reveals the authorship, which is true as a simple search on the GitHub user \\\"andrijaster\\\" immediately brings us to https://arxiv.org/pdf/1902.00045.pdf which is a draft of this submission showing all author names.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Feedback Incorporated In New Paper Version\", \"comment\": \"Thanks to all the reviewers for their helpful and constructive feedback. We have uploaded a new paper revision to address the comments and feedback:\\n\\n1. Added discussion on advantages and disadvantages of GCRFBC model (Introduction).\\n2. Additional references and discussion concerning relevant references connected with competing structured classification methods and their applications are added in Related work.\\n3. Added figure 2 and discussion about GCRFBCb and GCRFBCnb model performance in appendix D.\\n4. Fixed typos and grammar mistakes.\\n5. All experiments were updated and hyperparameters were fine-tuned. \\n6. Experimental evaluation on two new structured datasets (now in total we use 5) concerning highway congestion prediction were added.\\n\\nPlease let us know in case of any additional questions or further suggestions on how the paper can be improved.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Point 1. Applying a bernoulli distribution seems trivial.\\n\\nApplying the Bernoulli distribution on the outputs of the GCRF might seem trivial in the case of GCRFBnb model since the mode of GCRF over latent variables is used in learning and inference, which makes this case straightforward. However, as it is well known, probabilistic model which does not consider full distributions but only the mode of the distribution loses important information and often does not perform well. Therefore, we introduced latent dependence structure and marginalized the joint distribution of P(Y,Z) over latent variables, which is not an easy task. In order to lower the computation cost we marginalized the distribution using local variation approximation. To the best of our knowledge, this is the first time that local variation approximation was used in the case of several dependent Bernoulli variables. Additionally because of the model representation we derived the inference procedure which is straightforward and have low computational cost.\\n\\nPoint 2. When the GCRFBCb model would be better than the GCRFBCnb. \\n\\nIn the appendix D we presented in table 5 and figure 2 detailed results obtained on synthetic dataset that can explain where the GCRFBCb model is performing better than the GCRFBCnb. We emphasized in section Experimental evaluation - synthetic dataset, in which cases GCRFBCb performs better than GCRFBCnb. Comparison of GCRFBCb and GCRFBCnb performances are presented in figure 2.\\nIt can be noticed that in cases where variances of latent variables are relatively small, both models have equal performance considering AUC and conditional log likelihood. This means that results obtained by unstructured predictors are equally or more important for classification task compared to the structure between outputs. In such case, MAP estimate is a satisfactory approximation.\\nHowever, when data were generated from a distribution with significantly higher values of $\\\\beta$ compared to $\\\\alpha$, the GCRFBCb performs significantly better than GCRFBCnb. This means that the structure between outputs has significant contribution to classification task compared to the results obtained by unstructured predictors. It can be concluded that GCRFBCb has at least equal prediction performance as GCRFBCnb. and that the models were generally able to utilize most of the information (from both features and the structure between outputs).\\n\\nPoint 3. The learning procedure is untracktable and hard to follow.\\n\\nIndeed, learning procedure of GCRFBCb is more complex than the one of GCRFBCnb. That is due to the marginalization over latent variables which allows the model to exploit more information than the one relying only on the modes of latent variable distributions in case that the variance of these distributions is large (as discussed in Point 2). It is also true that the procedure is computationally more demanding, but we wouldn\\u2019t qualify it as intractable, since we managed to perform training in reasonable time. Also, learning procedures of structured models are usually computationally more demanding and so is ours. But the computational and conceptual complexity of the procedure paid off in better prediction results as our experimental evaluation shows.\\n\\nPoint 4. 
The multilabel datasets don't seem to be good datasets for structured predictions.\\n\\nWe noticed that in many papers that are focused on structured predictions the multilabel problems were given. The multilabel problems can be defined as structured prediction problems, because structure between labels can have significant impact on classification scores. However, in response to your request we added two additional datasets that are completely structured. Both dataset are connected with the highway congestion prediction on two different highways in Europe. Now we demonstrate consistent improvement on 3 completely structured datasets.\\n\\n\\nPoint 5. There should be more thorough fine-tuning of other models.\\n\\nWhen we considered your suggestion, we noticed that we indeed didn't properly train CRFs, so we rerun the experiments and indeed they now perform better than logistic regression models, but our approach still outperforms them. The pairwise potential that we used in CRF are implemented in pystruct module of CRF (https://pystruct.github.io/user_guide.html).\\n\\nPoint 6. It would be good to have a discussion on when this model would do worse than the others.\\n\\nRegarding potential drawbacks of our model, it relies on the assumption that the underlying distribution of latent variables is multivariate normal distribution, so in the case when this distribution cannot be fitted well to the data (e.g. when the distribution of latent variables is multimodal), the model won't perform as well as it is expected. In response to your suggestion, now we emphasize this in the paper. But also, compared to classical CRFs and SSVM our model has three important advantages, as it is emphasized in the Introduction.\"}",
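For readers unfamiliar with the local variational approximation discussed in Point 1, here is a minimal numerical sketch of the standard Jaakkola-Jordan lower bound on the sigmoid that this family of methods builds on. The paper's extension to several dependent Bernoulli variables is not spelled out in this thread, so only the single-variable bound is shown; all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jj_lower_bound(x, xi):
    # Jaakkola-Jordan bound: sigmoid(x) >= sigmoid(xi) * exp((x - xi)/2 - lam*(x^2 - xi^2))
    # with lam(xi) = tanh(xi / 2) / (4 * xi); the bound is tight at x = +/- xi.
    lam = np.tanh(xi / 2.0) / (4.0 * xi)
    return sigmoid(xi) * np.exp((x - xi) / 2.0 - lam * (x ** 2 - xi ** 2))

x = np.linspace(-6.0, 6.0, 241)
assert np.all(sigmoid(x) >= jj_lower_bound(x, xi=2.5) - 1e-12)  # holds everywhere
assert np.isclose(sigmoid(2.5), jj_lower_bound(2.5, xi=2.5))    # tight at x = xi
```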
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Point 1. The paper could be improved by a careful revision with focus on improving grammar, but as it stands the paper is easy to follow.\\n\\nThank you for your suggestion we revised the paper and improved the grammar.\\n\\nPoint 2. It is not clear to me exactly how the numbers in Table 1 were computed. Is this based on 10-fold crossvalidation as in the following tables?\\n\\nResults obtained in all tables are obtained by 10-fold cross validation.\\n\\nPoint 3. It would be great if the paper provided a better overview of competing structured classification methods.\\n\\nAdditional references and discussion concerning relevant references connected with competing structured classification methods and their applications are added in related work.\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Point 1. The authors break the double blind anonymity with the code link provided.\\n\\nReviewer #2 commented that we broke the double blind review anonymity by providing the link to the code. Please note that the repository does not contain any information on the authors nor their institutions. The link was provided solely for purposes of better evaluation of our work by the reviewers.\\n\\nPoint 2. Can the authors intuit why random forests and neural nets don't perform as well ? \\n\\nThere are two reasons why random forests and neural nets cannot outperform classification accuracy of GCRFBC models.\\n1. The random forests and neural networks are unstructured classifiers, meaning that they model outputs as conditionally independent given inputs, so they do not consider structure between output variables which contains valuable information. In structured (Ski lift congestion dataset and Highway congestion dataset) and multilabel (Gene functional dataset and Music according to emotion dataset) classification tasks information shared among outputs have significant impact on classification accuracy, so unstructured classifiers are outperformed.\\n2. GCRF for classification can also be seen as an ensemble model, which takes the outputs of unstructured classifiers as inputs. Therefore, GCRF for classification is able to figure out how reliable unstructured classifiers are and how much predefined output structure is significant for modeling and that way boost classification accuracy compared to unstructured models.\\n\\nPoint 3. It seems there are many knobs one can tune to get better performance, so I will take the presented results with a grain of salt. \\n\\nIn this revision we payed attention to fine-tune hyperparameters of all classifiers. The results in the paper are updated, but there is no important difference.\\n\\n\\nPoint 4. Also, it seems one can also use other \\\"link\\\" functions with MRFs (similar to link functions in generalized linear models) to not just do logistic but other possible losses as well. How about multiclass classification using softmax ?\\n \\nYou are absolutely right. Due to the conditional independence of elements of output vector (yj) given corresponding latent variable (zj) it is possible to use different loss functions. In the case of non Bayesian GCRFBC (GCRFBCnb) the learning procedure and inference is straightforward in multiclass case, too. However, in our experience, marginalization of joint distribution for multiclass classification problem is not easily defined relying on local variational approximation. Therefore, other variational approaches (e.g. variational autoencoders with reparametrization trick) would be preferred, but they require more computational power compared to procedure that is implemented in this paper. However, we intend to explore that approach in our future work.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors break the double blind anonymity with the code link provided. I'll leave how to deal with this to the meta reviewer.\\n\\nThe authors provide a method to modify GRFs to be used for classification. The idea is simple and easy to get through, the writing is clean. The method boils down to using a latent variable that acts as a \\\"pseudo-regressor\\\" that is passed through a sigmoid for classification. The authors then discuss learning and inference in the proposed model, and propose two different variants that differ on scalability and a bit on performance as well. The idea of using the \\\\xi transformation for the lower bound of the sigmoid was interesting to me -- since I have not seen it before, its possible its commonly used in the field and hopefully the other reviewers can talk more about the novelty here. The empirical results are very promising, which is the main reason I vote for weak acceptance. I think the paper has value, albeit I would say its a bit weak on novelty, and I am not 100% convinced about the this conference being the right fit for this paper. The authors augment MRFs for classification and evaluate and present the results well. \\n\\nCan the authors intuit why random forests and neural nets dont perform as well ? It seems there are many knobs one can tune to get better performance, so I will take the presented results with a grain of salt. Also, it seems one can also use other \\\"link\\\" functions with MRFs (similar to link functions in generalized linear models) to not just do logistic but other possible losses as well. How about multiclass classification using softmax ? I think such generalizations would make this paper lot stronger.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"TITLE\\nGaussian Conditional Random Fields for Classification\\n\\nREVIEW SUMMARY\\nA well justified approach to structured classification with demonstrated good performance. \\n\\nPAPER SUMMARY\\nThe paper presents methods for structured classification based on a Gaussian conditional random field combined with a softmax Bernoulli likelihood. Methods for inference and parameter learning are presented both for a \\\"Bayesian\\\" and maximum likelihood version. The method is demonstrated on several data sets.\\n\\nQUALITY\\nIn general, the technical quality of the paper is good. Except for minor typos, derivations appear to be correct, although I did not check everything in detail. \\n\\nCLARITY\\nThe paper could be improved by a careful revision with focus on improving grammar, but as it stands the paper is easy to follow.\\nIt is not clear to me exactly how the numbers in Table 1 were computed. Is this based on 10-fold crossvalidation as in the following tables?\\n\\nORIGINALITY\\nI am not familiar enough with the field to assess the novelty of the contribution. It would be great if the paper provided a better overview of competing structured classification methods.\\n\\nFURTHER COMMENTS\\n\\n\\\"structured classification\\\" ?\\n\\n\\\"It was shown\\\" -> We show\\n\\n\\\"for given\\\"\\n\\nIs the second sum over k=1 to K in eq. 1 a mistake?\\n\\n\\\"We void\\\" -> We avoid\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The work involves modifiying gaussian conditional random fields to work for classification problems instead of regression problems. The main idea is to apply a bernoulli distribution on top of the regression values to convert them to work with binary classification problems. Two variations are discussed along with the inference and learning methodology. The inference can be done using numerical approximation and learning using variational methods and is still untracktable. Comparisons with other modeling strategies is done using experiments.\\n\\nThe paper is incremental and doesn't really provide improvements to learning parameters (or at least there is no theory showing this in the paper). The experiments do not seem satisfactory as discussed below.\\na) Applying a bernoulli distribution on the output of the GCRF seems trivial. It is not very clear when the GCRFBCb model would be better than the GCRFBCnb. The learning procedure is untracktable and hard to follow on why this might provide better results.\\nb) The datasets (music classification and gene classification) don't seem to be good datasets for structured predictions i.e. the interaction needed between the nodes is not clear. Since they are multilabel problems, one could have just modeled the system with N independent nodes or design a multinomial distribution instead of only for binary classification.\\nc) There should be more thorough fine-tuning of other models, for e.g. in the ski lifts experiment, the CRF does much worse than logistic regression in the results. This is most likely because the parameters were not initialized properly using normal tricks like using logistic regression. Typically for truly structured problems, CRFs do better than their logistic regression counter parts. It is also not clear how the other models (CRF and SSVM) pairwise potentials were modeled.\\n\\nIt would really help to make this paper stronger by showing the new modeling technique does better than CRFs (that are tuned properly) on better structured datasets. It would be good to have a discussion on when this model would do worse than the other structured models and why.\"}"
]
} |
rkgAb1Btvr | Fourier networks for uncertainty estimates and out-of-distribution detection | [
"Hartmut Maennel",
"Alexandru Țifrea"
] | A simple method for obtaining uncertainty estimates for Neural Network classifiers (e.g. for out-of-distribution detection) is to use an ensemble of independently trained networks and average the softmax outputs. While this method works, its results are still very far from human performance on standard data sets. We investigate how this method works and observe three fundamental limitations: "Unreasonable" extrapolation, "unreasonable" agreement between the networks in an ensemble, and the filtering out of features that distinguish the training distribution from some out-of-distribution inputs, but do not contribute to the classification. To mitigate these problems we suggest "large" initializations in the first layers and changing the activation function to sin(x) in the last hidden layer. We show that this combines the out-of-distribution behavior from nearest neighbor methods with the generalization capabilities of neural networks, and achieves greatly improved out-of-distribution detection on standard data sets (MNIST/fashionMNIST/notMNIST, SVHN/CIFAR10). | [
"Fourier network",
"out-of-distribution detection",
"large initialization",
"uncertainty",
"ensembles"
] | Reject | https://openreview.net/pdf?id=rkgAb1Btvr | https://openreview.net/forum?id=rkgAb1Btvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KtVpihTjd",
"BklRXdoFsr",
"BkxULksYoS",
"SyguRR9FiH",
"H1g8Iq1Z5S",
"H1xbytdCFH",
"BkxqXb7TFr",
"BygwY6oiwr",
"HJeo4VSsDB",
"rJxZt2o5wr",
"HkxSZjucwB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798726535,
1573660709610,
1573658446358,
1573658319714,
1572039245620,
1571879129442,
1571791138156,
1569598847416,
1569571891414,
1569533049423,
1569520380903
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1562/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1562/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1562/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1562/Authors"
],
[
"~Pranav_Poduval1"
],
[
"ICLR.cc/2020/Conference/Paper1562/Authors"
],
[
"~Pranav_Poduval1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a new method for detecting out-of-distribution (OOD) samples.\\n\\nA reviewer pointed out that the paper discovers an interesting finding and the addressed problem is important. On the other hand, other reviewers pointed out theoretical/empirical justifications are limited. \\n\\nIn particular, I think that experimental supports why the proposed method is superior beyond the existing ones are limited. I encourages the authors to consider more scenarios of OOD detection (e.g., datasets and architectures) and more baselines as the problem of measuring the confidence of neural networks or detecting outliers have rich literature. This would guide more comprehensive understandings on the proposed method.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"- Did the Fourier networks learn the input distribution?\\nYes, in all the examples we looked at, they learned the labels as well as the ReLU networks (see also appendix B). There are small differences, sometimes one of ReLU(x) and sin(x) is slightly better than the other, but we did not observe any general trend.\\n \\n- The first maximum or minimum of sin:\\nOn the one hand this is an experimental result (see the new appendix P), but it is also the expected plausible dynamics of Gradient Descent: If a neuron has a positive / negative contribution to one label, its output should be increased / decreased when the feature is present at a sample with this label. This increase / decrease will continue until either the maximum / minimum is reached, or the feature or its connection to the output have changed significantly. So after the network reached a \\u201cstable state\\u201d, we expect that the sin(x) - Neurons that are used significantly in the output have reached a maximum / minimum of the sin(x) function.\\n\\n- How are defined: usual initialization, small initialization and large initialization?\\n\\u201cUsual initialization\\u201d: \\u201cHe initialization\\u201d: variance = 2 / fan_in.\\nThe \\u201clarge initialization\\u201d that we use to evaluate our networks is in general the largest initialization such that the ensemble still gives a good accuracy (at some point the networks no longer train reliably and the accuracy and out-of-distribution detection goes down, but anything before that usually works, see e.g. figure 5).\\nWith \\u201csmall initialization\\u201d we mean anything that behaves similar to the limit of \\u201cinfinitesimal initialization\\u201d, this may still include the \\u201cusual initialization\\u201d (however, this term should only appear in qualitative statements that motivate our methods).\\n\\n- Predictive uncertainty estimation via prior networks, NeurIPS 2018.\\nThis method uses both in-distribution inputs and out-of-distribution inputs for training/tuning the model, whereas our method only uses in-distribution inputs.\\nIf one knows which out-of-distribution inputs to expect, using them is an easy option to greatly increase the performance. However, the performance of this approach depends on how well the \\u201cout of distribution training samples\\u201d match the \\u201cout of distribution test\\u201d samples. \\nWe evaluated the method described in the paper on distinguishing MNIST from fashionMNIST, classes 5-9.\", \"the_main_results_for_the_area_under_the_roc_curve_are\": \"\", \"relu_nets\": \"91.8%\\nDPN, trained on Omniglot as outliers: 92.6%\\nDPN, trained on fashionMNIST classes 0-4: 96.5%\", \"fourier_nets\": \"99.7%\\nWe added a comparison between this method and ours in Table 1 as well as a more detailed discussion in Appendix R. \\n\\n\\n- Generative probabilistic novelty detection with adversarial autoencoders, NeurIPS 2018.\\nThe approach in this paper (GPND) makes use of an interesting yet complex model that is more computationally demanding to train and evaluate. The approach employs two separate adversarial losses which makes the whole system much more delicate to train. Similarly to our method, the training procedure does not require out-of-distribution samples. 
\\nIt is important to note that all of the experiments in the paper (with one exception) have been set up such that training is performed only on one class of the dataset and the remaining classes are considered to be outliers. Since our model relies on classifiers, we cannot reproduce this setup with our method, since we need more than one class in the training set to obtain the models for an ensemble. With the GPND method, we obtained an area under the ROC curve of 98% when using one MNIST class as in-distribution data and an equally large set of images from the other classes as out-of-distribution data. However, this number is not directly comparable with the results obtained with our experimental setup on MNIST. Alternatively, we trained the GPND method such that several classes are considered as inliers e.g. classes 0-4 from MNIST, with the remaining classes being used at test time as outliers. We obtained 76% as the area under the ROC curve with this setup for GPND, which is far from the ~99% that we achieve with Fourier networks using the same in-distribution and out-of-distribution sets (see Table 5 in Appendix I.3). \\nIt may be possible to get the GPND model to work for in-distribution sets that contain more class clusters by some heavy hyperparameter tuning, or it may be necessary to make some fundamental changes like replacing the GAN used for reconstruction with a conditional GAN to model several different manifolds.\"}",
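As a concrete reading of the initialization definitions above, here is a one-function numpy sketch in which scale = 1 reproduces He initialization (variance 2 / fan_in) and larger values give the "large" initializations; the scale knob and function name are illustrative, and the actual rule for choosing the scale is empirical (as large as possible while the ensemble still trains, per Figure 5).

```python
import numpy as np

def init_weights(fan_in, fan_out, scale=1.0, seed=0):
    # scale = 1.0 is He initialization (variance 2 / fan_in, the "usual" case above);
    # "large initialization" uses the largest scale at which training is still reliable.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, scale * np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
```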
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"- Logical reasoning vs. describing:\\nWe feel we do give a logical reasoning; it involves analyzing three deficiencies of ReLU network ensembles and deriving modifications to avoid them:\\n\\nSection 2 (\\u201cUnreasonable extrapolation\\u201d): Our motivation for the \\u201cFourier networks\\u201d is the known fact that ReLU networks become more confident away from the training set. Instead we want the \\u201cevidence functions\\u201d (logits) for each label to decrease with distance to the training points. This can be achieved by a RBF network with Gauss functions, but to get better generalization we would prefer to achieve a similar behaviour with a \\u201cnormal\\u201d network.\", \"our_basis_to_achieve_this_is_that_the_fourier_transform_of_a_gauss_function_is_again_a_gauss_function\": \"Proposition 1 says we can get \\u201cevidence\\u201d functions that decay like a Gauss function around the point 0 also as the expected value of cos(wx) with w sampled according to the corresponding normal distribution, which in turn can be seen as the output of a network with cos(x) (or sin(x)) activation function, trained to have maximal output at 0. This leads us directly to networks with activation function sin(x) and weights that are sampled from a Normal Distribution.\\nProposition 2 then indicates that this indeed gives evidence functions that approach 0 away from the training points.\", \"section_3\": \"(\\u201cUnreasonable agreement between networks\\u201d): We sketch a mathematical argument why different \\u201cinfinitesimal\\u201d initializations of ReLU networks with one hidden layer give the same network with probability 1, which would be bad for the ensemble approach. Avoiding this is one motivation for larger initialization.\\nWhile \\u201clarge initialization\\u201d and \\u201csin(x) activation function\\u201d both invalidate essential parts of the mathematical argument, we rely on experiments to show that this \\u201cunreasonable agreement\\u201d is really gone with our modification.\\n\\nIn section 4 we sketch a mathematical argument why features that do not contribute to the discrimination between labels are \\u201cdropped\\u201d when we start from \\u201cinfinitesimal initialization\\u201d, and how these features do contribute to OOD detection when we start from \\u201clarge initialization\\u201d.\\n\\nAdmittedly, these pieces of mathematical reasoning do not completely \\u201cprove that it works\\u201d since we make some simplifying assumptions: Proposition 2 assumes we freeze the frequencies after sampling, which is a good approximation to the real procedure (Gradient Descent also on the frequencies) only for large initializations. Similarly, the proof of section 3 assumes \\u201cinfinitesimal initializations\\u201d and only is valid for one hidden layer, and the argument of section 4 does not specify the magnitude of this effect. However, they motivate the approach and at least make it plausible that it would work. 
We feel this is the most useful level of mathematical rigor; it is rare in this area that one can rigorously prove general theorems without simplifying assumptions.\\n\\n\\n- Novelty:\", \"we_think_the_new_contributions_of_this_paper_are\": [\"An analysis of the limitations of the usual Ensemble approach for out-of-distribution detection.\", \"The general idea of using the Fourier transform for mimicking a nearest neighbor method as an average of networks, thus combining \\u201cgeneralization\\u201d properties of networks with \\u201cboundedness\\u201d properties of nearest neighbor methods.\", \"The use of this approach to significantly improve the performance of Ensemble methods without making them more complicated.\", \"Lower confidence and accurate estimates of uncertainty:\"], \"there_seems_to_be_a_misunderstanding\": \"There are two separate metrics one could use to measure the usefulness of \\u201cconfidence scores\\u201d:\\n\\u201cInformation content\\u201d as measured by the ROC curve\\n\\u201cCalibration\\u201d\\nWe use the first measure in our evaluation, it measures directly how well we can distinguish between \\u201cin distribution\\u201d and \\u201cout of distribution\\u201d, and it is invariant under monotonic transformations of the confidence scores used (and can even be used for e.g. nearest neighbor methods that do not have confidence scores that can be thought of as probabilities). So we do not evaluate a \\u201clower confidence\\u201d (or only in the sense that out-of-distribution samples should get a lower confidence than in-distribution samples).\\n\\nIndependent of this, one could also ask about the second evaluation - when the confidence scores can be interpreted as \\u201cprobabilities\\u201d, it would measure how close these confidence scores are to the fraction of correctly classified in-distribution samples. For out-of-distribution samples, we do not have a correct label, and would instead require all labels to get the same probability 1/#labels.\\nWe are less interested in this evaluation since the calibration can always be adjusted later by a monotonic transformation and is more interesting for in-distribution misclassifications, so we did not include it in the paper.\\nBut for this evaluation, a low maximal softmax output is indeed what is desired and measured for out of distribution samples.\"}",
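The identity behind the Section 2 reasoning sketched above is easy to verify numerically: for frequencies w ~ N(0, sigma^2), the averaged cosine evidence E[cos(w*x)] equals exp(-sigma^2 * x^2 / 2), a Gauss function centered at the training point 0. A minimal Monte-Carlo check (all values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
w = rng.normal(0.0, sigma, size=200_000)    # sampled frequencies
x = np.linspace(-4.0, 4.0, 9)

mc = np.cos(np.outer(w, x)).mean(axis=0)    # Monte-Carlo estimate of E[cos(w*x)]
exact = np.exp(-0.5 * sigma ** 2 * x ** 2)  # Fourier transform of the Gaussian
assert np.allclose(mc, exact, atol=1e-2)
```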
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"* RBF networks:\\nThanks for flagging this, this is an important point that we should have explained (we have now added an appendix \\u201cG: Comparison to Nearest Neighbor methods\\u201d to discuss this in more detail).\\nFirst, note we are only comparing the use of RBF / Fourier / ReLU networks in the last layer. Of course the generalization of the whole system depends also on other factors, including the architecture in the lower layers, but for this discussion we assume we have only one hidden layer that we vary.\\nThe basic problem with RBF networks or Nearest Neighbor methods is their \\u201clocalized\\u201d nature, as explained in more detail e.g. in section 5.9 of Bishop\\u2019s book \\u201cNeural Networks for Pattern Recognition\\u201d: Since these methods \\u201conly memorize\\u201d known training points, we may need a lot of training points to cover the whole distribution, which can be a problem especially in high dimensions. On the other hand, ReLU networks learn \\u201crules\\u201d which can generalize the \\u201cimportant parts\\u201d and ignore \\u201cnoise\\u201d. \\nTo give a concrete example, we added (in the new appendix G) a random background to the MNIST and fashionMNIST pictures. This reduces the classification accuracy on MNIST for the (ReLU or Fourier) networks from 98% to 91%, but for the nearest neighbor method from 97% to 76% - the nearest neighbor method now has to find images similar in both the digit and the background, whereas the networks \\u201conly\\u201d have to learn to ignore the background. We see the same effect also in OOD detection: The area under the ROC curve is better for the nearest neighbor method in the case of the original images, but with the random background it becomes much worse than for the ReLU networks. In both cases the Fourier networks give better OOD detection than both ReLU networks and Nearest Neighbors.\\nAs another example we use images containing 4 digits, but the labels depend only on the first digit. To produce \\u201coutliers\\u201d, we exchange the first digit for a fashionMNIST image. Again the ReLU network is better in (classification and) OOD detection than the nearest neighbor method (0.85 vs. 0.79 area under ROC), and the Fourier Networks are better than both (0.95). The explanation \\u201cthe ReLU networks learn to focus on the first digit\\u201d would lead us to predict that ReLU networks would not flag as many outliers if we instead changed the last digit to a fashionMNIST image. This is indeed the case. For details, see appendix G.\\n\\n* Proposition 1:\\nSorry, yes, that should be u_i (the weight connecting the sin(x) neurons to the output neurons), we corrected it in the new version.\\n\\n* \\u201cFourier network\\u201d: \\nYes, with \\u201cFourier network\\u201d we just mean using sin(x) as the activation function in the last layer. We added this as a definition.\\n\\n* Choice of initialization:\\nThe general recipe is to use the largest initialization that still gives a good accuracy of the classifier.\\nFor one hidden layer you can see the sensitivity with respect to the initialization in Figure 5 - the Fourier networks outperform the ReLU networks for any initialization as long as they still train reliably, and the gain seems to be highest the larger the initialization is (i.e. shortly before the performance drops because the networks do no longer train stably). 
So this setting can be found using only the training data, and Figures 5, 16, and 17 show that indeed the optimum does not differ significantly for different out of sample sets.\\nFor two layers we have two parameters we can tune, we can fix one and again choose the other as large as possible. So again you could use this recipe on the training set, appendix I.4 has empirical results which show that the results using this recipe with different choices for the fixed parameter are all very similar.\"}",
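The recipe above ("the largest initialization that still gives a good accuracy", chosen on the training data only) written out as a sketch; `candidate_scales`, `train_and_eval`, and the tolerance are hypothetical placeholders rather than the authors' procedure.

```python
def pick_init_scale(candidate_scales, train_and_eval, tol=0.01):
    # Sweep initialization scales using training data only, then keep the largest
    # scale whose ensemble accuracy stays within `tol` of the best accuracy seen.
    accs = {s: train_and_eval(s) for s in candidate_scales}
    best = max(accs.values())
    return max(s for s, a in accs.items() if a >= best - tol)
```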
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a method to detect out-of-distribution or anomalous data points. They argue that Fourier networks have lower confidence and thus better estimates of uncertainty in areas far away from training data. They also argue for using \\u201clarge\\u201d initializations in the first layers and sin(x) as the activation function for the final hidden layer.\\n\\nThe paper does not seem to have any significant logical reasoning on why their specific architecture works, but \\\"describes\\\" what they did. It is not clear what the novelty is, besides that they found an architecture that seems to work. Additionally while Fourier networks have lower confidence, that does not necessarily mean they are more accurate estimates of uncertainty. However the reviewer does acknowledge that the estimates are mostly likely better than ReLU networks that are well known for having terrible estimates of uncertainty.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I have read the reviews and the comments. Overall I am still positive about the paper and I have confirmed the rating.\\n\\n======================\\n\\nThis paper proposes a method for the uncertainty estimates for Neural Network classifiers, specifically out-of-distribution detection. Previous methods use an ensemble of independently trained networks and average the softmax outputs. The authors investigate this method (ensembles of ReLU networks) and observe three fundamental limitations:\\n\\u201cUnreasonable\\u201d extrapolation, \\u201cunreasonable\\u201d agreement between the networks in an ensemble, and the filtering out of features that distinguish the training distribution from some out\\u2013of\\u2013distribution inputs, but do not contribute to the classification (CONSTANT FUNCTIONS ON THE TRAINING MANIFOLD).\", \"to_mitigate_these_problems_the_authors_proposed_the_following\": \"- Changing the activation function of the last hidden layer to the sin(x) function, and they claimed that this is going to guard against overgeneralization.\\n- Use larger than usual initialization to increase the chances of obtaining more diverse networks for an ensemble.\\n- They claimed that this combines the out-of-distribution behavior from nearest neighbor methods with the generalization capabilities of neural networks, and achieves greatly improved out-of-distribution detection on standard data sets.\\n\\nThe paper addresses an important problem, out of distribution detection, by proposing a Fourier network which is somewhere between a ReLU network (small initialization) and a nearest neighbor classifier (large initialization). The authors claimed that this leads to an out-of-distribution detection which is better than either of them.\\n\\nThe paper is well written and easy to follow. The authors did an interesting and precise investigation in how to force the confidence score to decay like a Gauss function by proposing to use the Fourier transform of such a Gauss function. By doing so they get the advantage of ReLU (ability to generalize) and prevent the network to become arbitrarily certain of its classification for all points. However, the authors claimed that when they switch the activation function to sin(x) the increase of |x| will usually stop at the first maximum or minimum of sin which is (around \\\\pi/2). However, the authors did not explain how they get this result (i.e the value \\\\pi/2). It would be interesting if the authors could show results for the case greater than or less than (\\\\pi/2) to show the difference.\\nIn addition, Figure 3 shows that the ensemble of ReLU networks is overconfident in most of the area, whereas the ensemble of Fourier networks is only confident close to the input, and in the discussion of constant function of their training manifold the authors discuss some example.\", \"i_would_like_to_ask_the_authors\": \"Did the Fourier networks learn the input distribution?\", \"how_are_defined\": \"usual initialization, small initialization and large initialization?\\n\\nThe experiment section is adequate. 
However, it would strengthen the paper if the authors compared against other approaches such as: \\n- Predictive uncertainty estimation via prior networks, NeurIPS 2018.\\n- Generative probabilistic novelty detection with adversarial autoencoders, NeurIPS 2018.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"** Updates after rebuttal **\\n\\nI thank the authors for the response, though I am still skeptical about the evaluation of the method, which might be a result of heavy tuning and overfit to the chosen test sets. The proposed approach also requires more theoretical justification.\\n\\n------------------------------------------------\\n\\nI'm not an expert in this area but I do find this paper interesting. Though the name \\\"Fourier networks\\\" is a bit arbitrary because the proposed approach also applies to multi-layer networks where only the last hidden layer has the proposed change.\\n\\nThe extrapolation problem of ReLU networks is an interesting point. I don't know previous works that point out this for out-of-distribution detection but it's worth figuring out if this observation has been made in the adversarial robustness community.\\n\\nI do have several concerns, summarized below:\\n* On page 3 the fourier transform is motivated by that RBF networks \\\"do not generalize as well as ReLU networks\\\". I doubt if there is any evidence for this argument.\\n* I have some issue understanding proposition 1: what is w_i'? Only w_i is mentioned before.\\n* The \\\"Fourier network\\\" is not defined explicitly in the paper, which makes it hard to understand the architecture/algorithm details. If I understand it correctly, it is only about changing the activation function of the last hidden layer and large initialization, with everything else the same as the ReLU networks?\\n* How does the magic number \\\"\\\\sigma_1 = 0.75\\\" and \\\"\\\\sigma_2 = 0.0002\\\" come from? Did you search it by looking at the test performance? Is the performance sensitive w.r.t. the two parameters?\\n\\nI'm willing to increase my score if the authors addressed my concerns.\"}",
"{\"comment\": \"As mentioned above in 3), there are two different meanings of adversarial:\\na) the one you used in your first question (far away point, same label), and \\nb) the usual adversarial attack and defense (near point, different label).\\n\\nIndeed b) is sometimes (but not always) related to uncertainty and out-of-distribution detection.\\nActually, this work started with a defense against adversarial attacks, but it used a (both mathematically and computationally) more complicated method. We then noticed that it also can be used for out-of-distribution detection, and for out-of-distribution detection (but not for the defense against adversarial attacks) the method presented here performs as well as the more advanced method. So while there is a connection between the two problems also for this method, we think for this type of methods it may make sense to treat both goals differently.\", \"title\": \"Relationship to defending against Adversarial Attacks\"}",
"{\"comment\": \"The reason for my strong criticism was because most works focusing on uncertainty, also tend to show benefits of their methods in case of Adverial Robustness or detecting Adversarial Attacks e.g. Evidential Deep Learning, Alpha-Divergence Dropouts etc.\", \"title\": \"Indeed Adverserial Attack and Defence wasn't the point of the paper\"}",
"{\"comment\": \"1) In the simplest case 1-dim input x, 1 hidden layer with N neurons, the output of neuron j would be sin(w_j*x+b_j) with the weights w_1,...,w_N between input and hidden layer (in general) different numbers.\\nSo when you choose a neuron j, the points x+2*pi*n/w_j for intergers n would give the same output for this neuron, but (in general) not for the other neurons (which would be needed to guarantee the same result as for x in the output layer).\\n\\n2) You could still use the idea of your construction to create points that give approximately the same outputs because they are at most an epsilon away from x+2*pi*n/w_j for all j=1,2,...,N. However, their density decreases (in general) exponentially with N, as O(epsilon^N), for a single network with a single hidden layer of N neurons, so you would not encounter these points just by chance.\\n\\n3) The aim of this paper is to reduce as far as possible the cases in which \\\"random\\\" out-of-distribution inputs are treated as in-distribution. It is not about defending against \\\"carefully crafted\\\" adversarial inputs. (In this case, \\\"adversaries\\\" are sort of the opposite of the usual \\\"adversarial inputs\\\": Instead of a point close to a training point x which gets a different output label than x, I assume you mean a point that is far away from the training set, but gets the same output as x).\", \"title\": \"Multiple frequencies\"}",
"{\"comment\": \"Large Initializations will obviously add more diversity because of the highly non-convex NN optimization so that part is pretty obvious.\\n\\nThe other more important issue- Let's take an ensemble of NN with a single layer, so the last layer which is the first layer will have sin(x) as activation fn.\\nNow if all ur networks have learned to classify a particular training point x correctly, then I can create infinite adversaries that are out of training distribution as x+n*pi, n is any Integer.\\nI am sure as we go deeper it will not be so trivial to create adversaries, but it's clearly not that hard either.\\nHave I misunderstood something??\\nPossibly normalizing the last layer o/p between [-pi,pi] could be a soln. are you doing the same\", \"title\": \"Interesting, but how is this even working ??\"}"
]
} |
Syxp-1HtvB | Semantic Hierarchy Emerges in the Deep Generative Representations for Scene Synthesis | [
"Ceyuan Yang",
"Yujun Shen",
"Bolei Zhou"
] | Despite the success of Generative Adversarial Networks (GANs) in image synthesis, there is still a lack of understanding of what networks have learned inside the deep generative representations and how photo-realistic images can be composed from random noises. In this work, we show that a highly-structured semantic hierarchy emerges from the generative representations as the variation factors for synthesizing scenes. By probing the layer-wise representations with a broad set of visual concepts at different abstraction levels, we are able to quantify the causality between the activations and the semantics occurring in the output image. Such a quantification identifies the human-understandable variation factors learned by GANs to compose scenes. The qualitative and quantitative results suggest that the generative representations learned by GANs are specialized to synthesize different hierarchical semantics: the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as the color scheme. Identifying such a set of manipulatable latent semantics facilitates semantic scene manipulation. | [
"Feature visualization",
"feature interpretation",
"generative models"
] | Reject | https://openreview.net/pdf?id=Syxp-1HtvB | https://openreview.net/forum?id=Syxp-1HtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"hEnlPzmiLk",
"B1gSrKSEoB",
"B1xx4YHNir",
"SkgTE_BVsS",
"rJljzdBNsr",
"BylzFBrNor",
"r1gTx_WOcH",
"rJegBk1SqB",
"BylIovgxcr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726507,
1573308733036,
1573308711774,
1573308468606,
1573308434989,
1573307770244,
1572505588628,
1572298552301,
1571977117722
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1561/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1561/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1561/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes to study what information is encoded in different layers of StyleGAN. The authors do so by training classifiers for different layers of latent codes and investigating whether changing the latent code changes the generated output in the expected fashion.\\n\\nThe paper received borderline reviews with two weak accepts and one weak reject. Initially, the reviewers were more negative (with one reject, one weak reject, and one weak accept). After the rebuttal, the authors addressed most of the reviewer questions/concerns. \\n\\nOverall, the reviewers thought the results were interesting and appreciated the care the authors took in their investigations. The main concern of the reviewers is that the analysis is limited to only StyleGAN. It would be more interesting and informative if the authors applied their methodology to different GANs. Then they can analyze whether the methodology and findings holds for other types of GANs as well. R1 notes that given the wide interest in StyleGAN-like models, the work maybe of interest to the community despite the limited investigation. The reviewers also point out the writing can be improved to be more precise.\\n\\nThe AC agrees that the paper is mostly well written and well presented. However, there are limitations in what is achieved in the paper and it would be of limited interest to the community. The AC recommends that the authors consider improving their work, potentially broadening their investigation to other GAN architectures, and resubmit to an appropriate venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1 (Continued)\", \"comment\": \"\", \"q5\": \"\\\"show the results if the initial latent code w was modified directly\\\"\", \"a5\": \"Following the advice, we do experiments by (a) varying semantic from a particular level (i.e., indoor lighting) on all layers, and (b) varying semantics from different levels on layout-relevant layers. From Fig.18 of the updated submission, we can tell that when changing latent codes at all layers, other levels of semantics (such as objects inside the room) are changed. From Fig.19 of the updated submission, we can tell that when changing other semantics (e.g., category and indoor lighting) on layout-relevant latent codes, only layout but not the desired semantics varies. These two experiments further demonstrate our discovery that the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as color scheme.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for the valuable comments.\", \"q1\": \"\\\"what is actually investigated is the layer-wise latent code (NOT \\u2018representation\\u2019 which is typically defined to mean the responses of filters/outputs of each layer)\\\"\", \"a1\": \"Unlike the classification networks, where the output of each layer can be considered as the abstract feature (or say, representation) of the original input image, deep generative model learns to map the pre-defined latent distribution to observed image distribution. In the image generation process, the deep representation at each layer (especially for StyleGAN and BigGAN) is actually directly derived from the projected latent code at each layer. Therefore, we consider the latent code as the \\\"generative representation\\\", which may be slightly different from the conventional definition in the classification networks. Furthermore, since the GAN model is fixed, the input latent code and the output response of filters are implicitly causal. From this perspective, studying the latent code is equivalent to studying the layer output to some extent. We clarify this definition in the updated submission.\", \"q2\": \"\\\"All the initial text in the paper\\u2019s abstract, introduction etc. leads the reader to believe that the findings here are generally applicable\\\"\", \"a2\": \"Thanks for the suggestion. We tone down the claim in abstract and introduction in the updated submission. We conduct the layer-wise analysis on StyleGAN and BigGAN, but the proposed probing and manipulation technique can be generalized to other GANs with single latent code as well, such as our experiment on PGGAN.\", \"q3\": \"\\\"this paper only shows some sample results other models e.g. BIGGAN\\\"\", \"a3\": \"First, to our knowledge, StyleGAN is currently the best deep generative model for high-resolution scene synthesis. That is why we mainly conduct experiments on StyleGAN structure. According to the experimental results, we believe the reason that StyleGAN achieves such good generation quality is due to the design of layer-wise latent code. Based on this design, generator can learn different levels of semantics on different layers instead of only the first layer seeing the latent code. Besides, nowadays, more and more latest GAN models inherit this design of using layer-wise latent codes, such as recent ICCV\\u201919 work SinGAN [1] and HoloGAN [2]. As \\\"multi-layered stochasticity\\\" design becomes widely adopted in GANs, our layer-wise analysis sheds light on why it is effective. Also, there may be a misunderstanding about the experiments on BigGAN. According to [3], BigGAN also employs layer-wise latent code, and the BigGAN experiments shown in Fig.9 of the original submission are the results by only modifying the attribute-level latent codes (i.e., upper layers). To make this clear, we update Fig.9 by adding a comparison experiment between modifying all latent codes and modifying only the relevant latent codes, as well as adding a layer-wise analysis on BigGAN. Hope this can address your concern. In addition, the proposed re-scoring technique can also be applied to conventional GAN structures, like PGGAN, which is another state-of-the-art GAN model for scene synthesis yet with single latent code. Sec.4 of the submission shows the results. Indeed, we cannot make layer-wise analysis on PGGAN to show which layer learns which semantic, but we do convincingly find manipulatable semantics in the input latent space. 
Feel free to name other GAN architectures for high-resolution image synthesis and we are happy to do analysis on them as well.\\n\\n[1] SinGAN: Learning a Generative Model from a Single Natural Image. Shaham et al., ICCV'19.\\n[2] HoloGAN: Unsupervised Learning of 3D Representations From Natural Images. Nguyen-Phuoc et al., ICCV'19.\\n[3] Large Scale GAN Training for High Fidelity Natural Image Synthesis. Brock el al., ICLR'19.\", \"q4\": \"\\\"these results are essentially backing up the insights that led to the design of StyleGAN\\\"\", \"a4\": \"The design of StyleGAN indeed allows the model to learn a more disentangled representation compared to using a single-level latent variable. However, this fact does not weaken the contributions of this work. First, we propose re-scoring method to identity manipulatable semantics of a given GAN model. Second, we classify scene representation into layout, category (object), attribute, and color scheme four levels to align with human perception, and further study how StyleGAN learns these semantics layer by layer. Such interpretation and understanding of the deep generative representations is beyond the original design of StyleGAN.\"}",
"{\"title\": \"Response to Review #2 (Continued)\", \"comment\": \"\", \"q8\": \"\\\"This is repeated at every layer of the GAN generator and the same lambda is used to perturb the resulting output code from the separation boundary\\\"\", \"a8\": \"We would like to reaffirm that the boundary is both semantic-specified and layer-specified. Suppose we have a 14-layers StyleGAN and 100 semantic candidates, we totally predict 14 * 100 = 1400 boundaries, meaning that for each layer, we have a specific boundary for each semantic candidate. We first probe 1400 boundaries using the re-scoring technique (with same lambda), then the relative values of the scores in Eq.(1) are able to tell which concepts are most manipulatable at which layer. Based on this, when we want to manipulate a particular semantic (e.g., change scene category), we vary the latent code towards the corresponding boundary at the most relevant layers.\", \"q9\": \"Required Experiment 1).\", \"a9\": \"Results are included in Fig.19 of the updated submission, where we can tell that when changing other semantics (e.g., category and indoor lighting) on layout-relevant codes, only layout instead of the target semantics varies. This demonstrates that lower layers only controls layout.\", \"q10\": \"Required Experiment 2).\", \"a10\": \"Please refer to the Ablation Study in Sec.B of the submission, which reports the SVM accuracies. Experimental results turn out that almost all SVM classifiers achieve high performance, which is also the reason why they cannot be used to identify the most relevant semantics. Instead, the proposed re-scoring method can achieve this goal.\", \"q11\": \"\\\"how do we know the desired output apriori?\\\"\", \"a11\": \"The four-level semantic abstractions are pre-defined according to human perception. However, which layer of GAN controls these abstractions are obtained via the proposed re-scoring technique.\", \"q12\": \"\\\"Layout variation is just view-point variation\\\"\", \"a12\": \"\\\"layout\\\" means 3D room structure instead of renovation structure or object placements.\", \"q13\": \"\\\"What is so special about StyleGAN\\\"\", \"a13\": \"To our knowledge, StyleGAN, PGGAN, BigGAN are currently the state-of-the-art deep generative models for high-resolution image synthesis. If you have other recommendations, we are glad to analyze them as well. By the way, we would like to refer a concurrent work [2] (also submitted to ICLR 2020) which analyzes BigGAN, StyleGAN, DCGAN from single level (the earliest latent space). Compared to that concurrent work, we analyze deep generative representations at multiple layers from four abstraction levels, and do experiments on StyleGAN, BIGGAN, and PGGAN, which are trained for high-resolution scene synthesis. We also propose re-scoring technique to quantitatively identify the most relevant semantics to a well-trained GAN model.\\n\\n[2] On the \\\"Steerability\\\" of Generative Adversarial Networks. In submission, ICLR'20. https://openreview.net/forum?id=HylsTT4FvB\", \"q14\": \"Typos.\", \"a14\": \"Thanks, we fix the typos and grammatical errors in the updated submission.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks for the valuable comments.\", \"q1\": \"Details of StyleGAN.\", \"a1\": \"We include the structure of StyleGAN in the appendix (Fig.17) of the updated submission. Concretely, 14 layers are used in total and \\\"its correspondence with the layer levels (bottom, lower, middle, top)\\\" is also discussed in Sec.E of the updated submission.\", \"q2\": \"\\\"Dataset used to train the StyleGAN\\\".\", \"a2\": \"As introduced in Sec.3 \\\"Experimental Setting\\\" of the submission, most models are trained on a particular scene category of LSUN (e.g., bedroom model is trained only on bedroom images of LSUN). These models are used to analyze what kinds of semantics have been captured by GAN for a certain scene category, as shown in Fig.6 and Fig.7 of the submission. A mixed model is further trained on the combined set of bedroom, living room, and dining room. This model is used for category analysis, as shown in Fig.3, Fig.4 and Fig.5 of the submission. About the training time, \\\"M\\\" is inherited from the original paper [1], which means how many \\\"millions\\\" of real images seen by the discriminator. We just follow the standard. As it is not important, we remove it in the updated submission to avoid confusion.\\n\\n[1] A Style-Based Generator Architecture for Generative Adversarial Networks. Kerras et al., CVPR'19.\", \"q3\": \"\\\"a range of datasets are used to produce the results, especially for the effect where the transition of Semantic Category results is studied\\\".\", \"a3\": \"For category transition, all images are generated from the same model, which is trained on the combined set of bedroom, living room, and dining room. We simply control the latent codes at category-relevant layers to make the image transfer from one category to another. This turns an unconditionally trained GAN into a GAN conditioned on image class, which is surprising compared to BigGAN that is designed as conditional GAN.\", \"q4\": \"\\\"use a consistent terminology\\\"\", \"a4\": \"Thanks for the suggestion. We revise the submission accordingly to follow a more consistent terminology. Just to clarify, \\\"Variation Factors\\\", \\\"Visual concepts\\\", and \\\"Semantics\\\", all stand for human-understandable semantics, while \\\"Candidate concepts\\\" means the entire set of semantics, including layout prediction, 365 scene categories, 102 scene attributes, and color scheme (see Sec.A.2 of the submission). They are all employed for analysis because we don't know what kinds of semantics have been actually encoded in the latent space. The purpose of the proposed re-scoring technique is to identity the most relevant (i.e., most manipulatable) semantics from all candidates.\", \"q5\": \"\\\"separation boundary is only obtained once, and not after every layer of the generator\\\"\", \"a5\": \"Separation boundary is obtained once at every layer of the generator respectively. When training the SVM boundary, the labels are obtained from the classifiers (i.e., for each factor, 2000 top positive examples are labeled as 1 and 2000 top negative samples are labeled as 0), but the training data for each layer is different. As mentioned in Sec.3.1 of the submission, StyleGAN employs different style codes for different layers. We use these style codes to train different boundaries at different layers. 
When manipulating images, we use the layer-specified boundaries on proper layers (e.g., if layer 2-6 are most relevant to category, when doing category transition, we simultaneously move latent code of layer 2 towards layer-2 category boundary, move latent code of layer 3 towards layer-3 category boundary, and so on and so forth).\", \"q6\": \"\\\"Does this mean that initially, a large set of semantics is used to observe whether the output of GAN is manipulated by probing each of them\\\"\", \"a6\": \"Yes. See A4.\", \"q7\": \"\\\"And what happens when there is a tie? And, what value of K makes this metric more accurate?\\\"\", \"a7\": \"If two semantics have the same score, we assume they are equally manipulatable. Obviously, the larger $K$ is, the more accurate the score will be. However, we care about the relative value instead of the absolute value, which means that as long as the score of one candidate is higher than another, the former one is more manipulatable than the latter. Accordingly, we just need to make sure using same $K$ and lambda for all candidates.\"}",
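A minimal sketch of the per-layer boundary training and manipulation described in a5, under stated assumptions: the style codes, labels, dimensions, and lambda below are random stand-ins rather than the authors' pipeline, where labels would come from the top positive/negative samples of an off-the-shelf semantic classifier.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim = 512

# Stand-ins for 2000 top-positive and 2000 top-negative style codes at one layer.
codes = rng.normal(size=(4000, dim))
labels = (codes @ rng.normal(size=dim) > 0).astype(int)  # hypothetical classifier labels

svm = LinearSVC().fit(codes, labels)
normal = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # boundary normal at this layer

# Manipulation: push a sampled code towards the semantic boundary.
lam = 2.0
y = rng.normal(size=dim)
y_edited = y + lam * normal
```

In the full procedure this is repeated for each relevant layer, each with its own boundary, and the edited codes are fed back through the fixed generator.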
"{\"title\": \"Response to Review #3\", \"comment\": \"Thanks for the valuable comments.\", \"q1\": \"About the \\\"fairly superficial\\\" insight.\", \"a1\": \"There are two main insights from this work. The proposed re-scoring method allows to identify the cause-effect variation factors disentangled by GANs when trained for synthesizing scenes. It reveals the synthesis mechanism of GANs and helps understand the interpretability of deep generative model. Second, we found out that GAN learns to synthesize a scene similar to how human does, i.e., first set up the layout, then add corresponding objects relevant to the scene category, such as sofa and tv to a living room, and finally render the overall style (attribute and color scheme). To the best of our knowledge, this is the first work on understanding deep generative representation from the perspective of a semantic hierarchical composition. We also conduct extensive experiments to verify this discovery.\", \"q2\": \"About the typos and unclear captions.\", \"a2\": \"Thanks. We revise the typos and make the captions of all figures clearer in the updated submission.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes an approach to analyze the latent space learned by recent GAN approaches into semantically meaningful directions of variation, thus allowing for interpretable manipulation of latent space vectors and subsequent generated images. The approach is based on using pre-trained classifiers for semantic attributes of the images at a variety of levels, including indoor room layout, objects present, illumination (indoor lightining, outdoor lighting), etc. By forming a decision boundary in the latent space for each of these classifiers, the latent code is then manipulated along the boundary normal direction, and re-scored by the classifiers to determine the extent to which the boundary is coupled to the semantic attribute.\\n\\nBy taking advantage of the structured composition of the latent space into per-layer contributions in the StyleGAN approach, experiments are performed to show that different levels of semantics are captured at different layers: layout being localized in lower layers, object categories in middle layers, followed by other scene attribute, and lastly the color scheme of the image in the highest layers. A user study shows that human judgments of the coupling between layers and semantic attribute being manipulated are consistent with this observation. A set of qualitative experiments demonstrate manipulation along several axes. Another set of experiments demonstrate that the importance of different semantic attribute dimensions for different scene categories varies in an interpretable way, and also that certain attribute dimensions influence each other strongly (e.g. \\\"indoor lighting\\\" and \\\"natural lighting\\\"), whereas other ones are decoupled (e.g. \\\"layout\\\" and other dimensions).\\n\\nI am somewhat positive with respect to acceptance of the paper. On the one hand, the key idea is simple, and has been demonstrated compellingly with a broad set of experiments. On the other hand, the insight gained is fairly superficial, boiling down to the statement that the learned latent code has structure that corresponds to semantically meaningful axes of variation, and that such structure is localized to particular levels of the layer hierarchy for particular semantic axes.\", \"there_are_a_few_small_issues_with_the_clarity_of_the_paper_that_would_be_good_to_fix\": [\"Fig 3a: the interpretation of the vertical axis here was not clearly described in the caption or the main text\", \"Fig 4 caption: typo \\\"while lindoor\\\" -> \\\"while indoor\\\"\", \"Fig 5: the construction of the pixel area flow visualization is not explained in the caption, and needs a bit more clarity in the main text (e.g., how are multiple instances of the same class handled?)\", \"Fig 6: the caption could use a bit more explanation for making these plots interpretable: e.g. say what value the vertical axis is reporting\", \"Fig 8: same issue as above\", \"p8 typo: \\\"that contacts the latent vector\\\" -> \\\"that concatenates the latent vector\\\"\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a visually-guided interpretation of activations of the convolution layers in the generator of StyleGAN on four semantic abstractions (Layout, Scene Category, Scene Attributes and Color), which are referred to as the \\\"Variation Factors\\\" and validates/corroborates these interpretations quantitatively using a re-scoring function. The claim of the paper is that there is a hierarchical encoding in the layers of the StyleGAN generator with respect to the aforementioned \\\"Variation Factors\\\". Figure 3(a) illustrates how these \\\"Variation Factors\\\" emerge in the layers of the StyleGAN generator.\\n\\nThe basic GAN architecture used in this work is that of StyleGAN. However, details on the architecture of this particular GAN are missing, including in the Appendix. How many Convolution layers are present in its generator? Not everyone is aware of StyleGAN architecture -- A better illustration of their architecture in the main paper and its correspondence with the layer levels (bottom, lower, middle, top) is desired, mainly because the paper is built upon this. The dataset used to train the StyleGAN model is not clear either. In Appendix, Table 1 tabulates the training details, but nowhere is it clearly mentioned if the N=500,000 latent codes are sampled from a GAN model that was trained on a mixture of datasets (i.e., bedroom, living rooms, kitchen etc.) or individual datasets. As well, the unit of training time in Table 1 of the Appendix seems to be M; is it Million or Minutes? Both of them seem unrealistic units for training a GAN.\\n\\nSince the training dataset is not clear, my understanding of the method is that a range of datasets are used to produce the results, especially for the effect where the transition of Semantic Category results is studied. As a first step, StyleGAN model trained on \\\"bedroom\\\" scenes from LSUN dataset is used to randomly sample codes from the learned distribution, which are further passed through the generator to obtain the respective image mappings. Off-the-shelf image classifiers are employed on each of the images to classify them to one of the four \\\"Visual concepts\\\", which is nothing but the aforementioned four semantic abstractions. Here, I would like to encourage the authors to use a consistent terminology -- The four semantic abstractions have been referred to as \\\"Variation Factors\\\" (page 3), \\\"Candidate concepts\\\" (page 6), \\\"Visual concepts\\\"(page 12), \\\"Semantics\\\" (page 8) interchangeably throughout the literature, which is confusing. Then, 2000 top positive examples and 2000 top negative examples identified by the image classifiers are used to train a linear SVM, i.e., a binary-SVM all the four scene abstractions (\\\"Varying Factors\\\"), and the separation boundary is obtained. I assume the separation boundary is only obtained once, and not after every layer of the generator. Otherwise, it would not make much sense. \\nWith the separation boundary (in the form of a normal vector) known for each of the four scene semantics, different feature activations are obtained by moving the latent code towards/away from the separation boundary. 
A scoring function is obtained to quantify (Equation 1) how the corresponding images vary in a particular semantic aspect when the latent code is moved from the separation boundary. As per the last line of Paragraph 2 on page 4, a ranking of such scores using this function is used to understand the most relevant latent semantics. Does this mean that initially, a large set of semantics is used to observe whether the output of GAN is manipulated by probing each of them? Or are only four scene semantics chosen to begin with? And what happens when there is a tie? And, what value of K makes this metric more accurate? Any lower bound? Please explain. More questions on the effect of lamda later below.\\n\\nIn the next step, the authors sample a latent code from the learned distribution and pass it through\\nevery layer of the GAN generator. The output code y is varied along the boundary of the SVM classifier. This is repeated at every layer of the GAN generator and the same lamda is used to perturb the resulting output code from the separation boundary. The results are visualized in Fig 3(c). The claim here is that with the same perturbation of the resulting codes (lambda=2) at the output of different GAN layers, the change in the visualized output demonstrates what kind of, if any, semantic is being captured by different layers of GAN. This is also claimed to have been validated through the \\\"re-scoring\\\" function. I am not very clear on this.\", \"i_request_the_following_experiment\": \"1) Within just a single layer (be it bottom, lower, middle or top), how does the output change when the output code of that layer is perturbed in all directions? This is to see the effect (by visualizing) of the range of lamda values on the output at all the layers. Do you discover any changes weakening your claim?\\n2) I would like to see the visualizations of the latent codes at the separation boundaries, just to see how well the binary-SVM performs and whether or not, non-binary information is lost/unaccounted for.\\n\\nOn Page-5, see the fourth line from the bottom (going up): how do we know the desired output apriori? Are the four semantic abstractions decided based on the desired output? This takes us back to a question I asked earlier.\\n\\nMoreover, Layout variation is just view-point variation. So I think it will be appropriate to call it \\\"view-Point Variation\\\" rather than \\\"Layout Variation\\\". This is because Layout is associated with spatial arrangement of objects in a scene, with functionality goals.\", \"one_last_question_i_have_is\": \"What is so special about StyleGAN that it was used as the guiding architecture in this work? How generalizable is this approach to other kinds of GANs other than PGGAN and BigGAN (or rather, why is this approach relatable to StyleGAN, PGGAN and BigGAN alone)?\\n\\nThe paper has grammatical errors (sentences are not well written), typos (ex; \\\"manipulabe\\\" on page-8 which should be \\\"manipulatable\\\") and is not polished. I also suggest the authors to change the title of the paper, which right now, is a bit odd; if you decide to keep it, there should not be a \\\"the\\\" before \\\"Deep\\\" in the title.\\n\\nAll in all, the paper is interesting but lacks persuasiveness. \\nI may jump my score if the authors address all the aforementioned questions and concerns convincingly, and work on the presentation.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Updates after author response:\\nI'd like to thank the authors for their detailed responses. Some of my primary concerns were regarding the presentation, and I feel they have been mostly addressed with the changes to the introduction and abstract (I'd still recommend using 'layerwise latent code' instead of 'layerwise representation' everywhere in the text). The additional qualitative results showing the benefits of manipulating 'z' vs y_l were also helpful. Finally, I agree that given the popularity of StyleGAN like models, the investigation methodology proposed, and the insights presented might be useful to a broad audience. Overall, I am inclined to update my rating to lean towards acceptance.\\n\\n---------------------------\\nThis paper investigates the aspects encoded by the latent variables input to different layers in StyleGAN (Karras et. al.), and demonstrates that these correspond to encoding different aspects of the scene across layers e.g. initial ones correspond to layout, final ones to lighting.\\n\\nThe \\u2019StyleGAN\\u2019 work first-generates a per-layer latent code y_l (from a global latent variable w), and uses these in a generative model. This paper investigates which layer\\u2019s latent codes best explain certain variations in scenes. To formalize the notion of how a latent vector is causally related to a scene property, the approach here is to use an off-the-shelf classifier for the property, and a) find a linear decision boundary in the latent space, and b) quantifying whether changing the latent code indeed affects the predicted score.\", \"positives\": \"1. The analysis presented in the work is thorough and results interesting. The paper analyzes the relation of various scene properties w.r.t the latent variables across layers, and does convincingly show that aspects like layout, category, attribute etc, are related to different layers.\\n\\n2. The visual results depicting manipulation of specific properties of scenes by changing specific variables in the latent space, and the ones in Sec 3.2 studying transitions across scene types, are also impressive and interesting.\\n\\n3. The proposed way of measuring the \\u2018manipulability\\u2019 of an aspect of a scene w.r.t a latent variable is simple and elegant, thought I have some concerns regarding its general applicability (see below).\\n\\nDespite these positives, I am not sure about accepting the paper because I feel the investigation methods and the results are both very specific to a particular sort of GAN, and the writing (introduction, abstract, related work etc.) pitch the paper as being more general than it is, and claim the insights to be more applicable. More specifically:\\n\\n1) The text claims the approach \\u2018probes the layer-wise representations\\u2019. However, what is actually investigated is the layer-wise latent code (NOT \\u2018representation\\u2019 which is typically defined to mean the responses of filters/outputs of each layer). 
In fact, I do not think this work is directly applicable to probing \\u2018representations\\u2019 as the term is normally used because it may be too high-dimensional to infer meaningful linear decision boundaries, or directly manipulate it.\\n \\n2) All the initial text in the paper\\u2019s abstract, introduction etc. leads the reader to believe that the findings here are generally applicable e.g. the sentence \\u201cthe generative representations learned by GAN are specialized to synthesize different hierarchical semantics\\u201d should actually be something like \\u201cthe per-layer latent variables for StyleGAN affect different levels of scene semantics\\u201c. Independent of any other concerns, I would be hesitant to accept the paper with the current writing given the very general nature of assertions made despite experiments in far more specific settings.\\n\\n3) In Sec 4, this paper only shows some sample results other models e.g. BIGGAN, but no \\u2019semantic hierarchy in deep generative representation\\u2019 is shown (not surprising given only a global latent code). As the discussion also alludes to, I do not think this approach would yield any insights if a GAN does not have a multi-layered latent code.\\n\\n4) Finally, while the results obtained for StyleGAN do convincingly show the causal relations claimed, these results are essentially backing up the insights that led to the design of StyleGAN i.e. having a single-level latent variable capture all source of variation is sub-optimal.\\n\\n5) This is not a really weakness, but perhaps an ablation that may help. The results showing scene property manipulation e.g. in Fig 4 are obtained by varying a certain y_l, and it\\u2019d help to also show the results if the initial latent code w was modified directly (therefore affecting all layers!). It would be interesting to know if this adversely affects constancy of some aspects e.g. maybe objects also change in addition to layout.\\n\\nOverall, while the results are interesting, they are only in context of a specific GAN, and using an approach that is applicable to generative models having a multi-layer code. I feel the paper should also be written better to be more precise regarding the claims. While the rating here only allows me to give a \\u20183\\u2019 as a weak reject, I am perhaps a bit more towards borderline (though leaning towards reject) than that indicates.\"}"
]
} |
Hygab1rKDS | Quantum Algorithms for Deep Convolutional Neural Networks | [
"Iordanis Kerenidis",
"Jonas Landman",
"Anupam Prakash"
] | Quantum computing is a powerful computational paradigm with applications in several fields, including machine learning. In the last decade, deep learning, and in particular Convolutional Neural Networks (CNN), have become essential for applications in signal processing and image recognition. Quantum deep learning, however, remains a challenging problem, as it is difficult to implement non-linearities with quantum unitaries. In this paper we propose a quantum algorithm for evaluating and training deep convolutional neural networks with potential speedups over classical CNNs for both the forward and backward passes. The quantum CNN (QCNN) completely reproduces the outputs of the classical CNN and allows for non-linearities and pooling operations. The QCNN is particularly interesting for deep networks and could open new frontiers in the image recognition domain, by allowing for many more convolution kernels, larger kernels, high dimensional inputs and high depth input channels. We also present numerical simulations for the classification of the MNIST dataset to provide practical evidence for the efficiency of the QCNN. | [
"quantum computing",
"quantum machine learning",
"convolutional neural network",
"theory",
"algorithm"
] | Accept (Poster) | https://openreview.net/pdf?id=Hygab1rKDS | https://openreview.net/forum?id=Hygab1rKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ABEOL8rgIy",
"rkg0c4VhiB",
"HyxNzfvjor",
"BJe0H1wKjB",
"ByxRZyvKoB",
"rygvnCLtsS",
"Hketr08Yjr",
"SyeOQqnPqB",
"SylabRM85H",
"HklnChipKr",
"S1g6EebaKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726478,
1573827733884,
1573773836362,
1573642053520,
1573641990266,
1573641902556,
1573641793138,
1572485664299,
1572380165010,
1571826900184,
1571782708547
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1560/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1560/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1560/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1560/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1560/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Four reviewers have assessed this paper and they have scored it as 6/6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission. Especially, the authors should take care to make this paper accessible (understandable) to the ML community as ICLR is a ML venue (rather than quantum physics one). Failure to do so will likely discourage the generosity of reviewers toward this type of submissions in the future.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer\", \"comment\": \"We thank the reviewer for the insightful comments.\\n\\nWe would like to clarify that our work provides a rigorous theoretical analysis showing that our quantum CNN is a faster and noise-robust adaptation of the classical CNN. As the quantum algorithm performs the same operations as the classical CNN (convolution, pooling, non-linearity), one expects that properties of classical CNN\\u00a0like invariance and weight sharing to be preserved. This is also evidenced so far by the experiments that show that even with the added noise the CNN converges fast and to a high accuracy model. We are performing further experiments in bigger data sets (which has been made possible through a comment of another referee) in order to further validate these properties. Nevertheless, both the theoretical analysis and the preliminary experiments strongly suggest that these properties will continue to hold. We want to be very careful not to over-interpret these results as they don\\u2019t fully prove or disprove the capabilities of our quantum algorithm (which can only be done when the quantum hardware arrives), however they provide the best possible way of benchmarking the quantum algorithms in the present time.\\n\\nConcerning the error bars in the simulations, we have repeated these experiments many times and observed similar convergence. We will add the error bars in the final version.\"}",
"{\"title\": \"Answer to authors\", \"comment\": \"Concerning the addition of quantum noise, I'm a bit worried that there is no error bars on the learning curves since the noise is by definition stochastic. So, are these results really significant?\\n\\nI'm still not super convinced that the testing procedure is really demonstrating the announced performance gain of the method. I think it is a nice first step but it misses some of the important features of CNNs (invariance, weight sharing etc.) and this is actually acknowledged in the answer. Without these features, my feeling is that the impact is quite limited.\"}",
"{\"title\": \"Answer\", \"comment\": [\"We thank the reviewer for the appreciation of the paper and insightful comments, in particular concerning the experimental results.\", \"The resulting noise is indeed added in the image and in the gradient. The different noises come from the non deterministic nature of quantum procedures such as estimating the amplitude of a quantum state, or random outcomes of a measurement. Part of our work was to map these quantum errors to a resulting classically interpretable noise in the neural network itself (layers and gradients), in order to perform classical simulations. The fact that we apply a \\u00ab\\u00a0Normal CNN\\u00a0\\u00bb afterwards is the main goal of our algorithm, which is to reproduce the classical algorithm with quantum procedures to gain speedup.\", \"The reviewer\\u2019s remarks concerning weight sharing and invariant representation is very interesting and will certainly be one of our focus for the future simulations. We believe that, in principle, properties of invariant representation, due to weight sharing in the convolution layer, are preserved since our quantum algorithm performs the same operations (convolution product, pooling). It is a good question to see if the noise could have a negative impact on this. We will look for a specific dataset and a relevant metric to quantify this property in comparison to classical CNN.\", \"We are currently simulating the QCNN on larger and different datasets (e.g. CIFAR-10). A good advice from another reviewer will surely help us to perform these simulations more efficiently.\"]}",
"{\"title\": \"Answer\", \"comment\": [\"We thank the reviewer for the appreciation of the paper and insightful comments.\", \"The reviewer is right concerning the meaning of the quantum state $|i>$ being the $i^{th}$ vector in the standard basis. If accepted, we will make the effort of introducing basic concepts of quantum computing to allow a clearer understanding of our work to the audience. As well, we will introduce more intuitively the concept of quantum tomography, namely the family of procedures that allow to retrieve a classical description of a quantum state by repeated measurements (and infer the values of the quantum amplitudes from the resulting distribution).\", \"Our work in quantum deep learning is indeed related to previous works in quantum neural network cited in our paper, in particular the fully connected quantum neural network of Allcock et al. (2018). Their layer method is similar to ours and indeed could be explained in the appendix.\", \"The speedup \\u00ab\\u00a0in certain cases\\u00a0\\u00bb concerns indeed both the forward pass and the whole training. We will change this sentence and be more explicit.\", \"Applying a non linearity (as ReLu activation function) in a quantum circuit is a difficult challenge. In our solution, once the value is encoded as a bit string in a quantum register, we can apply a non linearity on it. The circuit that modifies the value accordingly depends on the non linearity considered. For ReLu or other positive simple rules (piecewise linear functions, indicator functions), one could imagine a simple circuit involving few gates to act on the bit strings. Most importantly, the size of such circuits will have a constant depth that doesn\\u2019t depend on the algorithm parameters. In our sentence, \\u00ab\\u00a0boolean\\u00a0\\u00bb refers to a classical and explicit series of gates. Note however that implementing more complex non linearities such as tanh could imply a taylor decomposition of the function in order to approximate it with a small number of gates. This explains our choice of the ReLu function.\", \"Our \\u00ab\\u00a0quantum importance sampling\\u00a0\\u00bb can be parametrized in two manners: $\\\\eta$ that relates to the precision of the tomography, or $\\\\sigma$ that corresponds to the ratio of elements sampled (the others being set to zero). The relation between the two approaches is given in Appendix, Section C.1.5, namely we have $\\\\sigma = N/\\\\eta^2$ where $N$ is the size of the output image. We agree that the explanations could be clearer and we will make the effort to present it better. In our opinion, the Sigma perspective is more intuitive when considering image processing (as shown in Figure 1), and more explainable than Eta which implicitly depends on the size $N$.\", \"This particular sampling described above is purely a quantum effect and has no known classical usage or reason to be. In a way, it can be seen to a non deterministic activation function. It is reducing the number of non zero values in the layers themselves, and not (directly) in the weights of the kernels. Therefore it might, or not, be related to pruning, drop out or sparse NN. We appreciate this comment and will research on that analogy.\", \"We thank the reviewer for the advice concerning the PyTorch implementation. We will certainly use this to perform further simulations on different and larger datasets. 
This could help us save a lot of time.\", \"All remarks concerning typos and formatting will be taken into account in the final version.\"]}",
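The toy illustration of tomography promised in the first answer above: a numpy snippet that estimates amplitude magnitudes of a small state from repeated measurements. The state and shot count are arbitrary examples, and signs/phases are not recoverable from counts alone.

```python
import numpy as np

rng = np.random.default_rng(0)
amps = np.array([0.6, 0.8, 0.0, 0.0])   # a toy 2-qubit state; sum of |a_i|^2 is 1
shots = 10_000

outcomes = rng.choice(len(amps), size=shots, p=amps**2)   # repeated measurements
counts = np.bincount(outcomes, minlength=len(amps))
est = np.sqrt(counts / shots)   # amplitude magnitudes inferred from the distribution
```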
"{\"title\": \"Answer\", \"comment\": [\"We thank the reviewer for the appreciation of the paper and insightful comments.\", \"If accepted, we will do our best effort to ensure an appropriate and clear presentation of quantum machine learning. In particular we will communicate to the audience the core principles of quantum computing and main methods to \\u00ab\\u00a0quantize\\u00a0\\u00bb machine learning algorithms (quantum vectors, quantum linear algebra, quantum distance estimation). As well, we will indicate clearly the benefit and limitations of such methods.\", \"Concerning the remark on the potentiel limitations of a capped ReLu activation function, we agree with the reviewer and will pursue further simulations to quantify the implications on the accuracy and training time.\"]}",
"{\"title\": \"Answer\", \"comment\": [\"We thank the reviewer for the appreciation of the paper and good remarks.\", \"If accepted, we will make our best effort to introduce the concept of QRAM, briefly in the main paper and in details in the appendix. The reviewer is right to request this as it is a very important aspect of quantum machine learning and quantum computing in general.\", \"The previous paper of Cong et al. \\u00abQuantum Convolution Neural Network\\u00bb published in Nature Physics is an excellent contribution, but is conceptually different from our submission for the following reasons: Cong et al. have defined a new quantum circuit model (inspired from quantum physics), with properties that can be applied for signal processing (eventually image if generalized to 2D inputs). Their algorithm can be used for learning or classifying phase of quantum physical systems, and shares similar aspects with a CNN, hence the naming. However, they are not reproducing the precise operations that a classical CNN is performing (convolution product, non linearities, pooling, backpropagation by gradient descent, etc.) layer by layer, which is the topic of our work. In other words, we are defining the first quantum circuit simulating a classical CNN. The reviewer is right to notice this and we will explain the specificity of each paper to avoid any ambiguity.\", \"The quantum algorithm described in our work is \\u00ab\\u00a0hardware agnostic\\u00a0\\u00bb in the sense that it is built on a theoretical and universal set of quantum gates (e.g. Control-NOT, Hadamard, Rotations) that should be implementable by any hardware (superconducting circuits, photonics, ions etc.). If accepted, we will precise and explain this core fact in the final version.\", \"We will promptly remove the undesired \\u00ab\\u00a0conference submission\\u00a0\\u00bb in the title.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This submission proposed a quantum convolutional neural network (QCNN). The theoretical results in section 3 state the existence of the QCNN satisfying certain conditions. The QCNN is given by sections 4 and 5, with empirical evaluates in section 6. This subject is out of my usual area. However, I tend to think this subject is interesting to the ICLR audience due to the recent advancement in quantum computing.\\n\\ntitle, remove \\\"conference submissions\\\"\\n\\nSection 2, introduce QRAM.\\n\\nSomewhere around section 2-3, and in section 6, it has to be mentioned whether the proposed QCNN requires special hardware, and what is the hardware, and why it is required.\\n\\nNote the cited Cong et al. (2018) has been published in nature physics. As you both used the term \\\"QCNN\\\", it is better to explain more clearly what is the main difference in the main text.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors provide a comprehensive study, with theory and classical simulation of the quantum system, on how to increaese the speed of CNN inference and training using qubits. They proved an intrigued compilation of a quantized convolutional system.\\n\\nFor an audiance who are not quantum experts one could clarify a few properties of the described \\\"quantization\\\". It is not able to take multiple images in at the same time as quantum superpositions. Sometimes the expectence of a quantum machine leanring is that it would train the system at a single instance. Here the increase in effciency comes from being able to perform marix multiplication in quantum realm.\\n\\n The manuscripd describes a creative way to bring bolean operartions-defined non-linearities into quantum neural networks, at the cost of having to force the system back to classical domain at each layer - and then encoding it back to qubits for the next layer operations. The price has to payed as unitary operators (the ones that preserve entaglement) are inherently linear.\", \"remarks\": \"The authors should point out how this is specific to convolutional neural networks. It looks to me that the same algorithm could be used for fully connected or even attention based systems, as it is just a matrix multiplication as well. Anyway, the manuscript provides the take into account the steps required specifically for a CNN.\", \"some_minor_remarks\": \"For classical systems the capped Relu is inferior as it reduces the range of values of activations where there is driving force. Sometimes one is using a parametrized version of ReLU that has a small positive slope for negative values.\\nIt is not clear to me how this would not be the case with a quantum implementions. In your simulations, does the value of the saturation constant C, change the speed of convergence to a good solutions. \\n\\nThe capped Relu reminds me a lot of a Tanh non-linearity that works well for LSTMs but are not very good for CNNs.\\n\\nI would suggest that for the conference presentation the authors try to bring out the essential within a less formal setting to open it to a wider ML audience. For the manuscript the level is good with a proper use of appendix to shorten the main narrative.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors present a quantum algorithm for approximating the forward pass and gradient computation of a classical convolutional neural network layer with pooling and a bounded rectifier activation. This algorithm has complexity bounds that would open up (for instance) the possibility of exponentially large filter banks, and the authors show through a simple, classical simulation approach that the resulting network is also likely to be trainable.\", \"feedback\": [\"A few typos/formatting issues:\", \"The title accidentally includes \\\"Conference Submissions\\\"\", \"The in-text citation format frequently has the parentheses in the wrong place; this is surprisingly distracting!\"], \"preliminaries\": [\"Maybe explain what the ith vector in the standard basis is in terms of |0> and |1>? I assume the answer is along the lines of |000>, |001>, |010>, etc.?\"], \"main_results\": [\"The sentence \\\"a speedup compared to the classical CNN for both the forward pass and for training using backpropagation in certain cases\\\" is ambiguous; does \\\"in certain cases\\\" qualify only training speed or also forward pass speed?\", \"There's a clear separation of background (which is concise and well explained) and contributions, but maybe it would be worth connecting the introduced algorithm more closely to existing work in non-convolutional quantum neural networks?\", \"Can you briefly justify (or cite) the claim that \\\"most of the non linear functions in the machine learning literature can be implemented using small sized boolean circuits\\\"?\", \"I'm a little confused about the discussion of quantum importance sampling on page 4. Could you give some intuition for the relationship between eta and the fraction of output values that are on average flushed to zero (is this 1 minus sigma?), and perhaps connect this to the literature about activation pruning and sparse NNs?\", \"Maybe define what you mean by \\\"tomography\\\" for ML folks without the quantum background?\", \"I'm convinced by the simulations, even though I shouldn't really be convinced by anything on MNIST... It just seems like the perturbations you're applying are all things that modern neural networks take in stride.\", \"The discussion of using a sigma-based classical sampling rather than the eta-based quantum importance sampling mentions a \\\"Section C.1.15\\\" which does not exist (I think you mean the end of Section C.1.5).\", \"Re: \\\"We will use this analogy in the numerical simulations (Section 6) to estimate, for a particular QCNN architecture and a particular dataset of images, which values of \\u03c3 are enough to allow the neural network to learn.\\\" My understanding is that you're getting empirical estimates of which values of sigma are enough; it would be valuable to convert those to estimates of which values of eta would be enough (given quantum networks of the size used in the classical simulation experiment, or given larger networks).\", \"The sampling procedure based on sigma might be inefficient in your PyTorch implementation, but it's certainly something that GPUs are fairly well suited to computing. There might be other PyTorch operators that would help here (perhaps Bernoulli sampling?) 
or if nothing else you could write a small custom CUDA kernel.\"]}",
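One hedged way to vectorize the sigma-based sampling along the lines the reviewer suggests, using torch.bernoulli with keep-probabilities proportional to squared activations (the quantum measurement distribution). The function name and the scaling choice are assumptions rather than the paper's implementation; with this scaling, roughly (at most) a fraction sigma of the entries survives on average.

```python
import torch

def sigma_sample(x, sigma=0.4):
    p = x.pow(2)
    p = p / p.sum(dim=-1, keepdim=True).clamp_min(1e-12)   # |amplitude|^2 distribution
    keep = torch.bernoulli((sigma * x.shape[-1] * p).clamp(max=1.0))
    return x * keep   # unsampled entries are flushed to zero

x = torch.randn(4, 128)
print(sigma_sample(x).ne(0).float().mean())   # fraction kept, roughly sigma
```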
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a quantum version of the convolutional neural networks. They derive equivalent versions the computation of both the convolution operation and the back-propagation in the quantum computing framework. The authors claim a potential exponential speed up in the computation in the size of the kernel which would make possible to process much bigger inputs or just speed up current tasks involving images mainly. They exemplify the method on MNIST using quantum artifacts simulation using PyTorch and show their method is competitive with SoA.\\n\\nI think the paper is well written and provides a nice discussion about how quantum computing can be applied to CNNs. I appreciate that both the forward and backward passes are studied although most of the technical details are in appendices. So, I felt the paper itself was a bit optimistic about the impact of quantum computing on CNNs.\\n\\nEspecially, I found the experimental section was missing details. I am not a quantum computing expert but I was a bit surprised that, in the context of CNNs, the introduction of quantum noise was just considered as introducing noise in the image and the gradient and then applying normal CNNs to the resulting image. Standard CNNs can probably deal with such noise and noisy gradient descent is not a big issue as such (it can even avoid local minima). But CNNs are notoriously known to reduce the number of weights in a network because of weight sharing. So, it is not all about making one convolution faster but also to compute invariant representations by sharing weights over different parts of the inputs. I would have been interested by a discussion about how quantum noise may impact this property and I didn't find this in the paper nor the appendices. \\n\\nAlso the author confess that the learning is not stable and results on MNIST to be the best they could get. I think it would be worth testing on large scale problems and see whether larger kernels with such noisy conditions would really improve the performance.\"}"
]
} |
SyxTZ1HYwB | TWO-STEP UNCERTAINTY NETWORK FOR TASK-DRIVEN SENSOR PLACEMENT | [
"Yangyang Sun",
"Yang Zhang",
"Hassan Foroosh",
"Shuo Pang"
] | Optimal sensor placement achieves the minimal cost of sensors while obtaining the prespecified objectives. In this work, we propose a framework for sensor placement, called the Two-step Uncertainty Network (TUN), that maximizes the information gain. TUN encodes an arbitrary number of measurements, models the conditional distribution of high dimensional data, and estimates the task-specific information gain at unobserved locations. Experiments on synthetic data show that TUN outperforms the random sampling strategy and the Gaussian Process-based strategy consistently. | [
"Uncertainty Estimation",
"Sensor Placement",
"Sequential Control",
"Adaptive Sensing"
] | Reject | https://openreview.net/pdf?id=SyxTZ1HYwB | https://openreview.net/forum?id=SyxTZ1HYwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"2oj_M5PBi8",
"BJeSdnb1qr",
"BJgf-t36YB"
],
"note_type": [
"decision",
"official_review",
"official_review"
],
"note_created": [
1576798726433,
1571916909441,
1571830010490
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1559/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1559/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a sensor placement strategy based on maximising the information gain. Instead of using Gaussian process, the authors apply neural nets as function approximators. A limited empirical evaluation is performed to assess the performance of the proposed strategy.\\nThe reviewers have raised several major issues, including the lack of novelty, clarity, and missing critical details in the exposition. The authors didn\\u2019t address any of the raised concerns in the rebuttal. I will hence recommend rejection of this paper.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper addresses the issue of how to optimize sensor placement. The authors propose a framework for sensor placement called Two-step Uncertainty Network (TUN) based on the idea of information gain maximization. More concretely, the proposed method encodes an arbitrary number of measurements, models the conditional distribution of high dimensional data, and estimates the task-specific information gain at unobserved locations. Experimental results on the synthetic data clearly show that TUN outperforms current state-of-the-art methods, such as random sampling strategy and Gaussian Process-based strategy.\", \"comments\": [\"On page 1, the phrase \\u201c\\u2026 on high dimensional data such as images as generative models\\u201d seems unclear.\", \"The lhs of Eq 3. should be MI(y,x_k|v_k, Obs)?\", \"In Fig. 1a, the \\u201cred arrows\\u201d for indicating TUM look like brown?\", \"In Fig. 1b, only the variable x_k is mentioned in the caption.\", \"If I understand correctly, on page 3 in Sect. 2.1, the imagination step is basically the same as VAE? After all, there does not seem to be any discussion on the choice of variational approximation q_{\\\\phi} and prior p_{\\\\theta}(z), where is crucial for performing variational inference.\", \"Though I am not an expert in this domain, I find the basic idea is simple and easy to understand. However, my major concern is about the novelty of this work, given the fact that the theoretical contribution is quite limited.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper describes a sensor placement strategy based on information gain on an unknown quantity of interest, which already exists in the active learning literature. As is well-known in the literature, this is equivalent to minimizing the expected remaining entropy. What the authors have done differently is to consider the use of neural nets (as opposed to the widely-used Gaussian process) as the learning models in this sensor placement problem, specifically to (a) approximate the expectation using a set of samples generated from a generator neural net and to (b) estimate the probability term in the entropy by a deterministic/inspector neural net. The authors have performed some simple synthetic experiments to elucidate the behavior and performance of their proposed strategy.\\n\\nConventionally, the sensor placement strategy is tasked to gather the most informative observations (given a limited sensing budget) for maximimally improving the model(s) of choice (in the context of this paper, the neural networks) so as to maximize the information gain. The authors seem to have adopted a different paradigm in this paper: Large training datasets are needed for the prior training of both neural nets (in the order of thousands as reported in the experiments). This seems to be defeating the original aim/objective of sensor placement, as described above. Consequently, it is not clear to me whether their proposed strategy would be general enough for use in sensor placement for a wide variety of environmental monitoring applications. Random sampling and GP-based sensor placement strategies do not face such a severe practical limitation. \\n\\nThe paper is also missing several important technical details and clarity of presentation is poor. For example,\\n\\n(a) The configurations and training procedure of generator NN G and deterministic NN D for the experiments are not sufficiently described for each experiment.\\n\\n(b) What do the authors do with the new observations obtained from placing the sensors in the last experiment? Do they adopt an open-loop sensor placement strategy?\\n\\n(c) The setup for the last experiment is not clear. Is it still the same object classification task? Is the GP receiving an exclusive set of 4D features that are different from the other two methods? I get the impression that the classifiers are trained a priori. For the GP classifier, isn't it the case that one should gather the most informative observations to maximally improve its classification accuracy?\\n\\n\\nThough I like the authors' motivation of the setup of the x-ray baggage scanning system in security screening, what has really been done in their experiments appears to be still quite far from this real-world setup. Furthermore, their proposed strategy has been used to gather only 1 to 4 observations. More extensive empirical evaluation with real-world datasets (inspired by realistic problem motivation) is necessary.\\n\\nFig. 2: I find it surprising that with a single observation, it is possible to generate the instance/imagined spectrum in orange that resembles that of the true spectrum. Similarly, with 3 observations, all 10 instances/imagined spectrums can exhibit the first spike (without observations on it). 
Can the authors explain this phenomena?\\n\\n\\n\\nMinor issues\\nThe authors need to put a space in front of all opening round bracket. Other formatting issues exist.\", \"equation_3\": \"v_k is missing from the conditioned part on the lefthand side of the equation.\", \"page_3\": \"evidence lower bond?\\nFigure 2 appears on page 3 and is only referenced on page 4.\", \"algorithm_1\": \"The use of subscript j in x^m_j to represent an unobserved location confuses with that of subscript k in x^m_k to denote the time step.\", \"figure_3_captions\": \"adapted on the measurements?\", \"figure_4_captions\": \"corner feature occurred?\"}"
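A hedged numpy sketch of the two-step estimate described in the opening paragraph of this review: imagined samples from the generator NN G, class probabilities from the deterministic/inspector NN D, and the candidate location chosen by minimum expected entropy. Both networks are replaced by random stubs here; shapes, names, and the candidate set are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(obs, loc, n=16):
    # Stub for G: n "imagined" measurements at candidate location loc.
    return rng.normal(size=(n, 4))

def inspector(x):
    # Stub for D: class probabilities for each imagined sample.
    logits = rng.normal(size=(x.shape[0], 3))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def expected_entropy(obs, loc):
    p = inspector(generator(obs, loc))
    return float((-p * np.log(p + 1e-12)).sum(axis=1).mean())

obs = []  # observations gathered so far
best = min(range(10), key=lambda loc: expected_entropy(obs, loc))
```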
]
} |
Skxn-JSYwr | EXPLOITING SEMANTIC COHERENCE TO IMPROVE PREDICTION IN SATELLITE SCENE IMAGE ANALYSIS: APPLICATION TO DISEASE DENSITY ESTIMATION | [
"Rahman Sanya",
"Gilbert Maiga",
"Ernest Mwebaze"
] | High intra-class diversity and inter-class similarity are characteristics of remote sensing scene image data sets that currently pose significant difficulty for deep learning algorithms on classification tasks. To improve accuracy, post-classification methods have been proposed for smoothing results of model predictions. However, those approaches require an additional neural network to perform the smoothing operation, which adds overhead to the task. We propose an approach that involves learning deep features directly over neighboring scene images without requiring use of a cleanup model. Our approach utilizes a siamese network to improve the discriminative power of convolutional neural networks on a pair of neighboring scene images. It then exploits semantic coherence between this pair to enrich the feature vector of the image for which we want to predict a label. Empirical results show that this approach provides a viable alternative to existing methods. For example, our model improved prediction accuracy by 1 percentage point and dropped the mean squared error value by 0.02 over the baseline, on a disease density estimation task. These performance gains are comparable with results from existing post-classification methods, yet without the implementation overheads. | [
"semantic coherence",
"satellite scene image analysis",
"convolutional neural networks",
"disease density"
] | Reject | https://openreview.net/pdf?id=Skxn-JSYwr | https://openreview.net/forum?id=Skxn-JSYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Ai-5rY9iPU",
"B1gzVVJsjS",
"rklgjG6Zjr",
"ByewhdO65r",
"HylZ2Rw6FH"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726403,
1573741609803,
1573143192164,
1572862126583,
1571810985206
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1558/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1558/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1558/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1558/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This papers proposed a solution to the problem of disease density estimation using satellite scene images. The method combines a classification and regression task. The reviewers were unanimous in their recommendation that the submission not be accepted to ICLR. The main concern was a lack of methodological novelty. The authors responded to reviewer comments, and indicated a list of improvements that still remain to be done indicating that the paper should at least go through another review cycle.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We agree with many of the reviewer concerns\", \"comment\": \"Summary of reviewer concerns:\\n\\n1. Our claim of having developed an approach for improving prediction accuracy for satellite scene image analysis that has greater efficiency than post-classification approaches is not validated with experiments (reviewer #2, #5).\\n2. We should compare performance of our model against a fair baseline, like the post-classification methods cited in the paper (reviewer #2, #4).\\n\\nResponse to concerns 1, 2 \\u2013 we acknowledge this weakness. We will conduct validation experiments.\\n\\n3. Our method has not been generalized to other application domains, thus limited in scope (reviewer #2, #5).\\n\\nResponse \\u2013 we will consider the possibility of generalizing to other application domains\\n\\n4. Use of a hard threshold for similarity metric seems arbitrary. Suggest to take \\u201call geographical neighborhood of a patch into account when making a prediction e.g., with a coarse-to-fine prediction approach.\\u201d That the aggregation of features can be learned and more sophisticated than average pooling (reviewer #4).\\n\\nResponse \\u2013 aggregating features through learning will be considered in the next phase of our work. Taking all neighbors of a patch is the main idea we have, though not stated in our paper. However, the analysis in our current paper is scoped for only one neighbor. We are currently designing further experiments that consider all neighbors.\\n\\n5. Discussion of the paper is said to be weak (reviewer #4).\\n\\nResponse \\u2013 We will improve the discussion upon addressing concerns 1, 2, 3, and 4.\\n\\nWe thank all reviewers of our paper for their generous feedback.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #5\", \"review\": \"This papers proposed a solution to the problem of disease density estimation using satellite scene images. One common challenge in this type of applications is having a high intra-class diversity and a high inter-class similarity. The solution proposed by the authors is based on the use of siamese networks to extract features from pairs of neighbouring images, and merge the features only if they are similar. The authors claim that this approach alleviate the need of a post-classification smoothing.\", \"advantages\": \"The idea of merging siamese features for similar tiles only is sound. The paper is clearly written and structured. The shown results seem to outperform the baseline.\", \"drawbacks\": \"The paper seems of fairly limited novelty. Moreover, it is centered around one particular application. Although the task is approached with both a classification and a regression model, the classification dataset is obtained by a simple binning which makes the two tasks highly related. It would be interesting to have different settings to test the consistency of improvement with the proposed method. Finally, the authors claim that the method alleviates the need to post-classification smoothing, but this cannot be straightforwardly concluded from the conducted experiments. It would be interesting to have a more thorough comparison to other methods that use post-classification processing.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes to do a coupled inference over pairs of geographically close images instead of a single image for satellite imagery. The coupling is done with an average pooling of the feature vectors when the neighbouring patches are detected to be similar enough based on a threshold on the L2 distance of these features. The method is applied to tasks of estimating crowding population, and diseases density, from satellite images.\\n\\nThe paper have little novelty. The approach reduces to a smoothing method over pairs of neighbouring patches, that is only activated sometimes based on a hard threshold. This seems arbitrary and there are many competing approaches that could be applied. \\nOne could think about taking all the geograpical neighbourhood of a patch into account when making a prediction, e.g. with a coarse-to-fine prediction approach; the aggregation of features can be learned and more sophisticated than average pooling. Using a single-image baseline is not fair. The discussion is not up to the level of ICLR and offers mostly guesswork.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a method to extract features utilizing the adjacency between patches, for better classification/regression of satellite image patches. The proposed method achieves better results compared to a straightforward baseline method.\", \"i_have_several_significant_concerns\": \"- In the abstract, the authors claim that existing approaches such as post-classification add computational overhead to the task, whereas the proposed method does not add significant overhead. However, to me, post-classification can be very simple and straightforward, whereas the proposed method adds a series of computations: the proposed method not only extracts features from the input image, but also for another neighboring image; then features are combined (if two images are similar), before feeding into the network. The authors need to validate the claim that their method is more efficient.\\n\\n- The baseline the authors compare to is weak. There are existing works on satellite image classification/regression. Many of them also use semantic/contextual information, or aim to improve the robustness of features. For example:\\n\\n[1] Derksen et al. Spatially Precise Contextual Features Based on Superpixel Neighborhoods for Land Cover Mapping with High Resolution Satellite Image Time Series. IGARSS 2018.\\n\\n[2] Ghassemi et al. Learning and Adapting Robust Features for Satellite Image Segmentation on Heterogeneous Data Sets. Geoscience and Remote Sensing 2019.\\n\\nI understand that the authors cannot compare to everything. But the authors should compare to representative baseline methods. Methods mentioned in the related work section (Section 2.1) can also be compared to.\\n\\n- The proposed method is very application specific. The author only discussed the remote sensing application. Given the ICLR community's interest in general methods that can be applied to (or already been tested on) multiple applications, the paper would have been stronger if the methods applicabilityto other domains was discussed (and even better demonstrated).\"}"
]
} |
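For reference, the fusion step the reviewers debate — average-pooling the siamese features of a patch and its geographic neighbor only when their L2 distance falls under a hard threshold — amounts to a few lines. A sketch under the assumption that both vectors come from the shared siamese encoder; the function name and threshold value are hypothetical:

```python
import numpy as np

def fuse_neighbor_features(f_target, f_neighbor, threshold=1.0):
    """Enrich a patch's feature vector with its neighbor's, but only when the
    two patches are close in feature space; otherwise keep the target's own
    features. The hard threshold is the design choice Review #4 questions."""
    if np.linalg.norm(f_target - f_neighbor) < threshold:
        return 0.5 * (f_target + f_neighbor)  # average pooling over the pair
    return f_target
```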
ryghZJBKPS | Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds | [
"Jordan T. Ash",
"Chicheng Zhang",
"Akshay Krishnamurthy",
"John Langford",
"Alekh Agarwal"
] | We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a useful option for real world active learning problems. | [
"deep learning",
"active learning",
"batch active learning"
] | Accept (Talk) | https://openreview.net/pdf?id=ryghZJBKPS | https://openreview.net/forum?id=ryghZJBKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5FLyA-lhT",
"SJloeGs3oS",
"S1lUYBYoiB",
"S1xcbNFijB",
"HJxiKftooH",
"HJeqfZW0YB",
"HJgcG9o2FB",
"B1loQbi2YB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726374,
1573855731183,
1573782909715,
1573782530284,
1573782146626,
1571848465974,
1571760657851,
1571758371260
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1557/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1557/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1557/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"The paper provides a simple method of active learning for classification using deep nets. The method is motivated by choosing examples based on an embedding computed that represents the last layer gradients, which is shown to have a connection to a lower bound of model change if labeled. The algorithm is simple and easy to implement. The method is justified by convincing experiments.\\n\\nThe reviewers agree that the rebuttal and revisions cleared up any misunderstandings.\\n\\nThis is a solid empirical work on an active learning technique that seems to have a lot of promise. Accept.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"To All Reviewers\", \"comment\": \"Thank you all for your review. We've changed some notation, fixed typos, and moved Proposition 1 to the main text as per your recommendations. We respond to your individual comments below.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for your feedback.\\n\\n1. Yes, it is standard in the literature to use the entire unlabeled dataset. To deal with large pool sizes, one can perform subsampling and use our approach to select examples for label queries within the subsample.\\n\\n2. The earliest citation we can find for MARG is from \\u201cMargin-based Active Learning for Structured Output Spaces\\u201d by Roth and Small. We\\u2019ve added the citation.\\n\\n3. Shaded regions are standard error. We added that to figure captions.\\n\\n4. Yes, we agree that a t-test would be more appropriate here (although both are not ideal, as the distributions of the error differences can be far from Gaussian). We have updated our comparison matrices in light of this discussion.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for your effort.\\n\\n1. We believe you may have missed the main contribution of this paper - the nature of the embedding used. We do not use the hallucinated gradient norm as the embedding, we use the gradient vector for the parameters at the last layer of the network. Once samples are embedded in this space, which is of dimension equal to (number of classes x number of penultimate layer nodes), we use the k-means++ algorithm to select a batch of k representative samples. We note that the norm of this vector is a lower-bound on the true norm of the last-layer gradient, given the corresponding label to a given sample. \\n\\n2. Again, we are not using the gradient norm as our embedding. The gradient representation is not interchangeable with an uncertainty metric like entropy. \\n\\nWe do use random sampling as a baseline. Random sampling is not conditioned on any representation.\\n\\n3, 6. The centroids are the output of the k-means++ algorithm. Again, this is run in the space of the gradient embeddings, not the space of gradient norms. Each embedding has both direction and magnitude.\\n\\n4. We do have this in the main text. Representative k-DPP plots are shown in Figure 1 in comparison to k-means++. The reason we do not present results for k-DPP in Section 4 (Experiments) is that sampling from k-DPP is much more time consuming than all other methods considered, while its performance is similar to kmeans++. This is why we use kmeans++ in BADGE.\\n\\n5. We added this proof to the main text.\\n\\n7. Our explanations of Figures 1-5 describe experimental trends. We describe experimental results in terms of the behavior of plots, especially in the Experiments section. Some explanation was added in the most recent article update.\\n\\n8. Unfortunately, besides examining learning curves like those in Figure 2, there are no widely-used metrics for evaluating batch active learning in the literature. We choose this metric because we are interested in which algorithms significantly outperform other algorithms for various labeling budgets. In the current version of the paper, the comparison comes from a t-test.\\n\\n9. Diversity-based approaches often perform worse-than-random when the penultimate layer representation is not meaningful. That is, because random sampling is not conditioned on any representation, it can actually induce a more diverse batch. We also sometimes see random outperforming confidence-based approaches, which is evidence that selecting on diversity is better than selecting on uncertainty for those situations. \\n\\nNone of the baseline acquisition functions have tunable parameters. \\n\\n10. We\\u2019ve weakened the claim in the first sentence.\\n\\n11. The version space is the space of all models that are plausible given the labeled example seen so far. We\\u2019ve included that in the introduction. \\n\\n12. Each line in figure 2 is averaged over five runs. The shadow for each line describes the standard error over those runs. We\\u2019ve added this to the text.\\n\\n13. We changed the intermediate activation function to z(x; V) to avoid confusion.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thanks for your review.\\n\\n1-2. The motivation for using a k-DPP is that it will select a batch of samples that are both high magnitude and diverse. K-means++ has this property too - in particular, if initialized with a high-magnitude point, proceeding samples are likely to be high-magnitude as well. We show this phenomenon in Figure 2 of the newly-updated paper, and compare it to another method for clustering data. We also added appendix figures to appendix F, showing that simple uncertainty sampling can lead to batches with Gram determinant zero.\\n\\n3. We used three architectures (ResNet, VGG, and MLP), seven datasets (MNIST, SVHN, CIFAR-10, and four OpenML datasets), and three different batch sizes (100, 1k, and 10k). We didn\\u2019t use any convolutional architectures with MNIST or non-image datasets, leading to 33 unique combinations of dataset, batch size, and architecture. As each (dataset, batch size, architecture) combination only contribute to at least a penalty of 1 in the penalty matrix, the largest entry in the penalty matrix is at most 33. We made that clear in the newly-updated copy.\\n\\n4. Thank you for pointing out this article. We\\u2019ve added it to the related work section.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Batch active:\\nThis paper proposes a novel approach to active learning in batches. Assuming a neural-network architecture, they compute the gradients of each unlabeled example using the last layer of the network (and assuming the label given by the network) and then choose an appropriately diverse subset of these using the initialization step of kmeans++. The authors provide intuitive motivation for this procedure, along with extensive empirical comparisons. \\n\\nOverall I thought the paper was well written and proposed a new practical method for active learning. There were a few concerns and places where the paper could be clearer.\\n\\n1. The authors keep emphasizing a connection to k-dpp for the sampling procedure emphasizing diversity. They provide a compelling argument for the kmeans++ but in Figure 1 it is unclear why k-DPP is the right comparison point. For example, you could imagine building a set cover of the data using balls at various radii and then choosing their centers.\\n2. The paper emphasizes choosing samples in a way to eliminate pathological batches. Considering this is a main motivation, none of the figures really demonstrate that this is what BADGE is doing compared to the uncertainty sampling-based methods tested against. Perhaps the determinant of the gram matrix of the batch could be reported for both algorithms? \\n3. While reading the paper, the set of architectures used was hard to find. Maybe I just missed it, but it would be useful to have this information. In particular, in Figure 3, there are absolute counts, but I wasn\\u2019t sure how many (D,B,A,L) combinations there were. \\n4. Finally, recent work in Computer Vision has shown that uncertainty sampling with ensemble-based methods in active learning tends to work well. I understand that it is hard to compare to the myriads of active learning algorithms out there, but they deserve a mention. See [1] below.\\n\\nOverall I think this paper is a good empirical effort that I recommend for acceptance.\\n\\n[1] Beluch, William H., Tim Genewein, Andreas N\\u00fcrnberger, and Jan M. K\\u00f6hler. \\\"The power of ensembles for active learning in image classification.\\\" In\\u00a0Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9368-9377. 2018.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a new method for active learning, which picks the samples to be labeled by sampling the elements of the dataset with highest gradient norm, under some constraint of diversity. The aforementioned gradient is computed w.r.t. the predicted label (rather than the true label, that is unknown) and diversity is achieved by sampling via the k-MEANS++ algorithm.\\nThe paper is well written and while the experiments look thorough, the motivation to support the proposed method seem too weak and unconvincing as does the discussion of the results, which is why I am leaning toward rejection. \\nI am willing to amend my vote if the authors provide stronger (not empirical) motivations on why using the gradient norm w.r.t. the predicted label is a better metric than those in the literature, and More comments below.\", \"detailed_feedback\": \"1) The paper lacks a proper motivation as to why using the norm of the gradient is a better metric than the many others already present in the literature. In particular, I cannot think of any case where it would be best to use that than the entropy of the network\\u2019s output distribution, even though the empirical results seem to suggest otherwise. Specifically, while I believe that in many cases it will be similarly good, if we consider the case when the network is able to rule out most of the classes but is unsure on a small fraction of them, the entropy will better reflect this uncertainty than the norm of the gradient of the predicted class. \\n\\nGenerally speaking, I believe that the use of the norm of the gradient of the predicted class should be much better motivated, being the core idea of the paper. Stating that it is cheap to compute and empirically performs as well as k-DPP in two experiments is not convincing enough in my opinion.\\n2) I wonder how much of the performance of BADGE is due to k-MEANS++ and how much to the choice of using the gradient norm. Please perform an ablation study where you can e.g., replace the gradient norm with the entropy, or replace k-MEANS++ with random sampling, and discuss the results. \\n3) How is the embedding \\u201cground set\\u201d space determined for k-MEANS++? How are the centroids determined? In which space? It is unclear to me how k-MEANS++ is used in the context of the norm of the gradients. Please improve the explanation in the main text.\\n4) Please add a curve for k-DPP to the plots in the main text, rather than having separate plots for it in the appendix. Also, it would be interesting to compare against Derezinski, 2018 as well, if that\\u2019s the current state of the art (which is what I infer from your text, but I might be wrong).\\n5) The paper builds on the claim that the gradient norm w.r.t. the prediction is a lower bound for the gradient norm induced by any other label, yet Proposition 1 that proves it is in Appendix B. This prove is central to the proposed idea and should be in the main text.\\n6) The authors claim that to capture diversity they collect a batch of examples where the gradients span a diverse set of directions, but it\\u2019s unclear to me that k-means++ actually accomplishes that. 
Where is the *direction* of the gradient taken into account in the algorithm?\\n7) The \\u201cdiscussion\\u201d section is really a \\u201cconclusion\\u201d one, and indeed a proper in-depth discussion of the experiments is missing. Please expand the comments on the experimental results. \\n8) The metric to compute the \\u201cpairwise comparison\\u201d looks quite convoluted. Is it common in the literature? If so, please add a reference. If not, can you motivate the use of this specific formula?\\n9) The random baseline seems to be very competitive. Why is that? Please provide your intuition. Could this be indicative that the baselines have not been tuned properly?\\n10) Introduction: the sentence \\u201c[deep neural networks] successes have been limited to domains where large amounts of labeled data are available\\u201d is incorrect. Indeed, neural networks have been used successfully in many domains where labelled data is scarce, such as the medical images domain for example. Please remove the sentence.\\n11) Introduction: please add a sentence to explain what a version-space-based approach is.\\n\\n12) Is Figure 2 the average over multiple runs or a single run?\\n\\n13) Notation: please do not use g for the gradient (g^y_x) and for the intermediate activations (g(x; V)).\\n\\n14) The lower margin seem too wide. Please make sure you respect the formatting style of the conference.\", \"minor\": [\"Notation: if you must shorten g^{\\\\hat{y}}_{x} please do so with \\\\hat{g}_{x} and equivalently shorten g^{y}_{x} as g_{x}\", \"Notation: in the pairwise comparison, please don\\u2019t reuse i to denote an algorithm (it is used a few lines before to compute the labeling budget)\", \"Please add reference to Appendix A when k-MEANS++ is first referred to in page 2.\", \"Page 3, when Proposition 1 is mentioned add reference to the location where it\\u2019s defined.\"], \"typos\": [\"Page 2: expenive -> expensive\", \"Page 5: Learning curves. \\u201cHere we show ..\\u201d -> Remove \\u201chere\\u201d\", \"Figure 3: pariwise -> pairwise\", \"Page 7: Apppendx E\", \"----------------------\"], \"updated_review\": \"I thank the authors for for taking the time to address all my comments, and clarifying some of the misunderstandings I had. I am happy to revise my score accordingly.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces an algorithm for active learning in deep neural networks named BADGE. It consists basically of two steps: (1) computing how uncertain the model is about the examples in the dataset (by looking at the gradients of the loss with respect to the parameters of the last layer of the network), and (2) sampling the examples that would maximize the diversity through k-means++. The empirical results show that BADGE is able to get the best of two worlds (sampling to maximize diversity/to minimize uncertainty), consistently outperforming other approaches in a wide-rage of classification tasks.\\n\\nThis is a very well-written paper that seems to make a meaningful contribution to the field with a very good justification for the proposed method and with convincing empirical results. Active learning is not my main area of expertise so I can\\u2019t judge how novel the proposed idea is, but from an outsider\\u2019s perspective, this is a great paper. It is clear, it does a good job explaining the problem, the different approaches people have used to tackle the problem, and how it fits in this literature. Below I have a couple of (minor) comments and questions:\\n\\n1. Out of curiosity, it seems that it is standard in the literature, but isn\\u2019t the assumption that one can go over the whole dataset, U, at each iteration of the active learning algorithm, limiting? It is not that cheap to go over large datasets (e.g., ImageNet).\\n2. MARG seems to often outperform the other baselines but it doesn\\u2019t have a reference attached to it (bullet points on page 5). Is this a case that a \\u201ctrivial\\u201d baseline outperforms existing methods or is there a reference missing?\\n3. In some figures, such as Figure 2, there are shaded regions in the plots. It is not clear what they are though. Are they representing confidence intervals? Standard deviation? They are quite tight for a sample size of 5.\\n4. In the section \\u201cPairwise comparisons\\u201d it reads \\u201cAlgorithm i is said to beat algorithm j in this setting if z > 1.96, and similarly \\u2026 z < -1.96\\u201d. It seems to me that the number 1.96 comes from the z-score table for 95% confidence. However, if that\\u2019s the case, it seems z should be much bigger in this context. With a sample-size of 5 (if this is still the sample size, maybe I missed something here), the normal assumptions do not hold and the t-score should\\u2019ve been used here. What did I miss?\\n\\nIn terms of presentation, Proposition 1 seems to be a very interesting result. I would move it to the main paper instead of leaving it in the Appendix. I also think the paper would read better if it didn\\u2019t use references as nouns (e.g., \\u201calgorithm of (Derezinski, 2018)\\u201d). Finally, there\\u2019s also a typo on page 7 (Apppendx).\\n\\n\\n--- \\n\\n>>> Update after rebuttal: I stand by my score after the rebuttal. This is a really strong paper in my opinion. I appreciate the fact that the authors took my feedback into consideration.\"}"
]
} |
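The two ingredients discussed throughout these reviews — the hallucinated last-layer gradient embedding and k-means++ seeding over those embeddings — can be sketched as follows. For a softmax classifier with cross-entropy loss, the gradient with respect to the last-layer weights under the predicted label is the outer product of (softmax output minus the one-hot prediction) and the penultimate activations; the batch-selection loop is standard k-means++ initialization. A minimal sketch with variable names of our choosing, not the paper's:

```python
import numpy as np

def gradient_embedding(h, p):
    """BADGE-style embedding: gradient of the cross-entropy loss w.r.t. the
    last-layer weights, hallucinating the predicted label as the target.
    h: penultimate activations (d,); p: softmax outputs (C,)."""
    scale = p.copy()
    scale[np.argmax(p)] -= 1.0          # dL/dlogits when the target is argmax(p)
    return np.outer(scale, h).ravel()   # flattened (C * d,) embedding

def kmeanspp_select(embs, k, seed=0):
    """k-means++ seeding over gradient embeddings: returns indices of a batch
    biased toward high-magnitude, mutually distant points."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(embs)))]
    d2 = np.sum((embs - embs[idx[0]]) ** 2, axis=1)
    for _ in range(k - 1):
        nxt = int(rng.choice(len(embs), p=d2 / d2.sum()))   # prob. proportional to D^2
        idx.append(nxt)
        d2 = np.minimum(d2, np.sum((embs - embs[nxt]) ** 2, axis=1))
    return idx
```

The D^2-weighted sampling is what answers Review #1's question about direction: points whose gradient embeddings point the same way are close in this space, so they are unlikely to be chosen together.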
B1eibJrtwr | Abstractive Dialog Summarization with Semantic Scaffolds | [
"Lin Yuan",
"Zhou Yu"
] | The demand for abstractive dialog summarization is growing in real-world applications. For example, customer service centers or hospitals would like to summarize customer service interactions and doctor-patient interactions. However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics. | [
"Abstractive Summarization",
"Dialog",
"Multi-task Learning"
] | Reject | https://openreview.net/pdf?id=B1eibJrtwr | https://openreview.net/forum?id=B1eibJrtwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"70w3ljNMpb",
"BkeK_KTlcH",
"HJWCJKttH",
"S1e6F8NOtB"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726345,
1572030833222,
1571553224515,
1571468933206
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1556/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1556/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1556/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes an approach for abstractive summarization of multi-domain dialogs, called SPNet, that incrementally builds on previous approaches such as pointer-generator networks. SPNet also separately includes speaker role, slot and domain labels, and is evaluated against a new metric, Critical Information Completeness (CIC), to tackle issues with ROUGE. The reviewers suggested a set of issues, including the meaningfulness of the task, incremental nature of the work and lack of novelty, and consistency issues in the write up. Unfortunately authors did not respond to the reviewer comments. I suggest rejecting the paper.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"=== Summary ===\\n\\nThe authors propose a new abstractive dialog summarization dataset and task based on the MultiWOZ dataset. Unlike previous work which targets very short descriptions of dialog transcripts (e.g. 'industrial designer presentation'), this paper looks to generate long descriptions of the entire dialog using the prompts in the MultiWOZ task. The authors also extend the pointer generator network of See et al. (2018) to use speaker, semantic slot and domain information. They show that this new model (SPNet) outperforms the baseline on existing automatic metrics, on a new metric tuned to measure recall on slots (dubbed CIC), and a thorough human evaluation.\\n\\n=== Decision ===\\n\\nThe task of abstractive dialog summarization is well motivated and the field sorely needs new datasets to make progress on this task. This paper is well written and executed, but unfortunately, I lean towards rejecting this paper because of a fundamental flaw in the nature of the proposed dataset that limits its applicability to the task of abstractive dialog summarization (more below).\\n\\nMy key concern is that the references in the dataset are generated from a small number of templates (Budzianowski et. al ,2018), which suggests this task is mostly one of slot detection and less about summarization. The significant impact of including semantic slot information seems to be strong evidence this is the case. It is possible to rebut this concern with an analysis of how the generated summaries differ from the reference summaries. For example, Table 2 shows that sometimes the ordering of arguments is swapped: how often does this sort of behavior occur and how often do models identify information not in the reference?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"Authors proposed an enhanced Pointer-Generator model called SPNet. The key difference between SPNet and PG are the separate handling or using of speaker role, semantic slot and domain labels. Authors also proposed a new metrics called Critical Information Completeness (CIC) to address ROUGE's weakness in assessing if key information is missing in the output.\\n\\nSPNet considers speak role by using separate encoders for each speaker in the dialog. The hidden state vectors of all speakers are concatenated for next layer. \\n\\nSemantic slot is modeled by delexicalizing the input, i.e. replacing values (18:00) with their semantic category (time). The actual value is later recovered from input text by copying over the corresponding raw tokens according to the attention layer. The domain labels are incorporated by combining categorization task loss into the final training loss.\\n\\nAuthors used the MultiWoz dataset to evaluate the model and compared it with state-of-the-art Pointer-Generator and Transformer models. ROUGE and proposed CIC metrics all show clear improvements in SPNet. The best performance was observed when all three improvements over SPNet are leveraged. Authors also provided example generated summary and discussed the difference between SPNet PG and baseline. An additional human evaluation was conducted which confirmed the quality gain.\\n\\nThe main concern of Reviewer is the inconsistency in the paper.\\n\\n1) Authors claimed to \\\"propose an abstractive dialog summarization dataset based on MultiWOZ (Budzianowski et al., 2018)\\\" in the abstract and introduction, which sounds like part of their contribution is creating a new dataset, but in experiment section there's no discussion about how the dataset was created or used at all. The same claim reappeared as the first sentence in the conclusion section.\\n\\n2) Authors emphasized two drawbacks in the beginning of the paper, but didn't discuss or show any evidence of those drawbacks from data later.\\n\\nThe above inconsistency suggests the paper may not be quite ready for publication.\", \"other_issues_found_by_reviewer\": \"1) In equation (7), value() seems to be the word while on the right hand side it's a numerical value (max a_i^t). Did Authors mean argmax?\\n\\n2) In Table 1, dialog domain seems to provide very marginal improvement, does it justify the complexity added?\\n\\n3) In Section 4.3, why do we need to train a customized embedding? The process and parameter for the embedding training was not described.\\n\\n4) In Section 4.3 \\\"batch size to eight\\\" better be consistent as \\\"batch size to 8\\\" (minor issue).\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper \\\"Abstractive Dialog Summarization with Semantic Scaffolds\\\" presents a new architecture that the authors claim is more suited for summarizing dialogues. The dataset for summarization was synthesized from an existing conversation dataset.\\n\\nThe new architecture is a minor variation of an existing pointer generator network presented by See et al. First the authors used two different sets of parameters to encode the user and the system responses. The authors also pre-process the dialog by replacing the slot values by their slot keys. Finally, the authors use an auxiliary task of detecting the domain of the dialog. \\n\\nThese three different enhancements are all called \\\"scaffolds\\\" by the authors, hance the title of the paper.\\n\\nThis paper is not suited for ICLR because of its limited novelty. The three enhancements proposed by the authors are long known and incremental.\"}"
]
} |
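Review #3's question about equation (7) (value() on the left, max a_i^t on the right) concerns the copy step that fills a delexicalized slot placeholder back in with a concrete value from the input: the recovered value is the source token at the input position with the highest attention weight, i.e., an argmax rather than a max. A sketch of that recovery rule, with all interface names hypothetical:

```python
def recover_slot_value(slot_token, source_tokens, attention, slot_positions):
    """Fill a generated placeholder (e.g. '<time>') with the input token the
    decoder attends to most, restricted to the positions where that slot was
    delexicalized -- value = source[argmax_i attention[i]]."""
    candidates = slot_positions[slot_token]   # input indices holding this slot
    best = max(candidates, key=lambda i: attention[i])
    return source_tokens[best]
```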
SklibJBFDB | Evaluating Semantic Representations of Source Code | [
"Yaza Wainakh",
"Moiz Rauf",
"Michael Pradel"
] | Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties. At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey important semantic information. Unfortunately, there currently is no generally accepted way of evaluating the quality of word embeddings of identifiers, and current evaluations are biased toward specific downstream tasks. This paper presents IdBench, the first benchmark for evaluating to what extent word embeddings of identifiers represent semantic relatedness and similarity. The benchmark is based on thousands of ratings gathered by surveying 500 software developers. We use IdBench to evaluate state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions, as these are often used in current developer tools. Our results show that the effectiveness of embeddings varies significantly across different embedding techniques and that the best available embeddings successfully represent semantic relatedness. On the downside, no existing embedding provides a satisfactory representation of semantic similarities, e.g., because embeddings consider identifiers with opposing meanings as similar, which may lead to fatal mistakes in downstream developer tools. IdBench provides a gold standard to guide the development of novel embeddings that address the current limitations.
| [
"embeddings",
"representation",
"source code",
"identifiers"
] | Reject | https://openreview.net/pdf?id=SklibJBFDB | https://openreview.net/forum?id=SklibJBFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AZc4_CBOt",
"SyljNtrNoB",
"BJxZMKBEiS",
"B1xgSdrVoH",
"rygm5dnr9S",
"Skxrit96Kr",
"HJeSIqF6KS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726315,
1573308722775,
1573308681320,
1573308472231,
1572354186648,
1571821980720,
1571818060529
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1555/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1555/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1555/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1555/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1555/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1555/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a dataset to evaluate the quality of embeddings learnt for source code. The dataset consists of three different subtasks: relatedness, similarity, and contextual similarity. The main contribution of the paper is the construction of these datasets which should be useful to the community. However, there are valid concerns raised about the size of the datasets (which is pretty small) and the baselines used to evaluate the embeddings -- there should be a baselines using a contextual embeddings model like BERT which could have been fine-tuned on the source code data. If these comments are addressed, the paper can be a good contribution in an NLP conference. As of now, I recommend a Rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thanks a lot for your insightful review! We are happy to see that the motivation for our work and the contributions of our paper have been made clear.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Thanks for your review. Please let us address your three concerns:\\n\\n1) Importance of identifier embeddings:\\nThe first four paragraphs of the paper try to answer this question. In short: There are various code-related tasks that recent work has started to address through learning-based techniques, including bug detection, predicting names of methods, predicting types, finding similar code, and automatically fixing bugs. All these techniques rely on a representation of code, which typically is built from representations of identifiers. Simply reusing pre-trained word embeddings from NLP is insufficient, because the vocabulary of source code differs significantly from natural language. \\n\\nWe\\u2019d love to receive more specific feedback or suggestions for improving the description of our motivation. \\n\\n2) Requirements for humans who contributed the labels:\\nOur survey targeted software developers, but did not require knowledge of a specific programming language. To filter uninformed and incorrect ratings, we performed multiple data cleaning steps (Section 2.2). For setMinutes and setSeconds, the reason for a non-zero similarity score presumably is that both functions expect a unit of time as their argument and then store it. The fact that one cannot substitute the other is nicely illustrated by the very low contextual similarity score of 0.06.\\n\\n3) Evaluating other models/embeddings, e.g., GPT2 (TabNine):\\nUnfortunately, the model (or an embedding derived from it) of TabNine isn\\u2019t publicly available (see https://github.com/zxqfl/TabNine). However, if available, then this and any future embeddings can be easily evaluated using or benchmark. If you have concrete pointers to publicly available identifier embeddings, then we'll very happy to include them.\"}",
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thanks for your review. Here are answers to your two concerns.\\n\\n1) Size of dataset:\\nThe size of the dataset is similar to popular datasets in NLP (Rubenstein & Goodenough RG: 65 pairs, Miller & Charles MC: 30 pairs, Simlex: 999 pairs, WordSim: 353 pairs, MEN: 3000 pairs, but they used a different rating strategy, which made it possible to collect ratings for such a large number of pairs). Since the dataset is gathered from human ratings, obtaining ratings for many more pairs is difficult. Our contribution is not about the size, but about the quality of a benchmark created by human ratings.\\n\\nComparison the number of pairs and total identifiers in a corpus is misleading. Large code corpora may have hundreds of thousands of unique identifiers, i.e., using this argument, any number of pairs is \\u201csmall\\u201d. The reason why we sample pairs of identifiers from a large corpus is to cover different domains and different degrees of similarity/relatedness.\\n\\n2) Importance of contribution:\\nSimilar efforts in NLP have served as a catalyst for improved embeddings techniques. Data collection and cleaning is at the heart of creating such benchmarks. As also pointed out by Reviewer 2, having a benchmark is important for the community, and we do not see why an important contribution should be described in a technical report only.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presented a crowdsourced dataset for evaluating the semantic relatedness, similarity, and contextual similarity of source code identifiers. Motivated by similar tasks in evaluating the quality of word embeddings in the natural language, authors collected the smalls-scale evaluation datasets (less than three hundreds of code identifier pairs) and further evaluated the performance of several current embeddings techniques for codes or computing the similarity scores of code identifiers. Although I appreciated the efforts to build such evaluation dataset, I think the contributions are limited in terms of scientific contribution.\", \"pros\": \"1) They collected a new small-scale evaluation dataset for evaluating the quality of semantic representations of various existing embedding techniques for code identifiers. \\n\\n2) They performed evaluations on these embeddings techniques for code and provided a few interesting findings based on these tasks.\", \"cons\": \"1) The proposed datasets are very small. The total number of code pairs are less than 300 pairs out of total code identifiers 17,000, which is a very small set of the total pairs (17000 x 17000). Therefore, it is hard to fully evaluate the embeddings quality of various methods with high confidence. \\n\\n2) The whole paper is mainly about the data collection as well as a few of evaluations of several existing code embedding techniques. The scientific contributions are quite limited. It would be nice to put these efforts to have a competition of code embedding techniques and this paper could be served as a technical report on this direction.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a benchmark dataset for evaluating different embedding methods for identifiers (like variables, functions) in programs. The groudtruth evaluation (similar/related) are labeled via Amazon MTurk. Experiments are carried out on several word embedding methods, and it seems these methods didn\\u2019t get good enough correlation with human scores.\\n\\nOverall I appreciate the effort paid by the authors and human labors. However I have sevarl concerns:\\n\\n1) why the identifier embedding is important? As pre-trained word embeddings are useful for many NLP downstream tasks, what is the scenario of identifier embedding usage?\\n\\n2) To collect the human labels, are there any requirements? e.g., experiences in javascript. Especially, I\\u2019m curious why in Table 3, setMinutes and setSeconds get score of 0.22 (which is too high).\\n\\n3) It would make more sense to compare with state of the art language pretraining methods, like bert, xlnet, etc. People have trained the language model with GPT2 (TabNine) that works well with code. So to make the work more convincing, I would suggest to include these.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces a new dataset that includes manually labelled identifier name pairs. The labels determine how much the corresponding embeddings are useful for determining their meaning in context of code, a setting that has sufficient differences from a natural language.\\n\\nA significant part of the paper is devoted on data cleaning and relating the computed metrics to similar efforts in natural language processing. While there is not much novelty in this part of the paper, it is doing a good job at addressing many possible questions on the validity of their dataset. Another important aspect that is covered by the work is different kinds of similarities of identifier names - similarity corresponding to having the same or similar type in a precise type system or similarity corresponding to being synonyms. Having several of these dimensions would make the results applicable for a wide range of applications of identifier name embeddings.\\n\\nWhile not introducing new concepts, this paper is important for the community, because it has the potential to change the way embedding computation is done for \\u201cBig Code\\u201d problems. Right now, most papers either introduce their own embeddings, or use non-optimal ones like Code2Vec.\\n\\nThe paper also has a surprising finding that even techniques designed for code are in some cases not as good as the FastText embeddings. This is an interesting result, because few other works include this kind of embedding in their experiments. Furthermore, the paper deep dives into the strong and weak sides of several solutions and shows that there is a large opportunity to improve on the existing results.\"}"
]
} |
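The evaluation protocol these reviews refer to mirrors standard word-similarity benchmarks: score each identifier pair with the embedding's cosine similarity and report the Spearman correlation against the gold human ratings. A minimal sketch, assuming `emb` maps identifier strings to vectors (all names here are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def benchmark_embedding(pairs, human_scores, emb):
    """Spearman correlation between an embedding's similarities and the gold
    human ratings for identifier pairs, as in word-similarity benchmarks."""
    model_scores = [cosine(emb[a], emb[b]) for a, b in pairs]
    return spearmanr(model_scores, human_scores).correlation
```

The same harness can score a lexical string-distance baseline by swapping `cosine(emb[a], emb[b])` for a distance function over the raw identifier strings, which is how embeddings and string distances end up directly comparable.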
rJxqZkSFDB | Searching to Exploit Memorization Effect in Learning from Corrupted Labels | [
"Hansi Yang",
"Quanming Yao",
"Bo Han",
"Gang Niu"
] | Sample-selection approaches, which attempt to pick up clean instances from the training data set, have become one promising direction for robust learning from corrupted labels. These methods all build on the memorization effect, which means deep networks learn easy patterns first and then gradually over-fit the training data set. In this paper, we show that how to properly select instances so that the training process benefits the most from the memorization effect is a hard problem. Specifically, memorization can heavily depend on many factors, e.g., the data set and network architecture. Nonetheless, there still exist general patterns of how memorization can occur. These facts motivate us to exploit memorization by automated machine learning (AutoML) techniques. First, we design an expressive but compact search space based on observed general patterns. Then, we propose to use a natural gradient-based search algorithm to efficiently search through the space. Finally, extensive experiments on both synthetic data sets and benchmark data sets demonstrate that the proposed method is not only much more efficient than existing AutoML algorithms but also achieves much better performance than the state-of-the-art approaches for learning from corrupted labels. | [
"Noisy Label",
"Deep Learning",
"Automated Machine Learning"
] | Reject | https://openreview.net/pdf?id=rJxqZkSFDB | https://openreview.net/forum?id=rJxqZkSFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"t1euIZwFz2",
"rJeIh5CUiB",
"HylWH90Uor",
"HkgpJ90UiS",
"rkeYaKRIiB",
"HJx3cO08jB",
"BkxT0H5DcB",
"Skg9z1HZcH",
"ryl61yYAtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726287,
1573477038336,
1573476920952,
1573476837357,
1573476801317,
1573476500225,
1572476372988,
1572060946405,
1571880677379
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1554/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1554/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1554/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper develops a method for sample selection that exploits the memorization effect. While the paper has been substantially improved from its original form, the paper still does not meet the quality bar of ICLR in terms of presentation of the results and experimental validation. The paper will benefit from a revision and resubmission to another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Common Questions to All Reviewers\", \"comment\": \"We thank all reviewers' efforts in this paper, here we summarize three common questions.\\n\\nQ1. About the presentation of this paper.\\n\\nWe have significantly revised the paper in the updated version, and please check the uploaded PDF. Changes are all highlighted in blue, briefly,\\n1) make the definition of the memorization effect in the introduction more clear and show this in Figure 1.\\n2) re-write the first contribution, make it clear why it is hard to design R(T).\\n3) discuss the connection with active learning in Section 2.1.\\n4) emphasize the main concepts in AutoML and remove unnecessary ones, and clarify connections of the proposed method with AutoML in Section 2.2.\\n5) re-write and re-drawn all figures to make legend and axis more clear\\n6) remove the explanation on Taylor expansion, and re-write paragraphs around equation (1)\\n7) add an explanation of why the held-out curve is used as a measurement in Section 4.2.\\n8) explanation the needs of synthetic noise in Section 4.2.\\n9) add experiments on real applications on face recognition in Section 4.3.\\n\\nQ2. The importance of searching R(T).\\n\\nFirst, see (Han et al., 2018b, Jiang et al.,2018, Yu et al., 2019), the performance of sample-selection methods heavily depends on R(T). In this revised version, we have emphasized this point above Algorithm 1.\\n\\nNext, please see Figure 1 in the revised version, since the memorization effect and R(T) correlates with the effect, it is difficult to design R(T) by hand. Such difficulties motivate us to solve this problem by AutoML.\\n\\nBesides, in the revised version, we have fully updated our experiments, see Figure 5. R(t) represents a great diversity. Simply dropping more samples does not work, and the proposed method can efficiently search a proper R(t) for each problem (Figure 6). \\n\\nWe also add results on label precision to further explain why the seared curve can be better in Appendix B.3. We can see the searched R(T) can significantly improve the number of clean labels used for training. \\n\\nFinally, in this way, Co-teaching with searched R(T) can even beat methods that use better criterions to find clean samples, i.e., Proposed v.s. Co-teaching+ (Yu et al., 2019) in Figure 4 on benchmark data sets and Proposed in Co-Mining (Wang et al., 2019) in Table 1 on the real data set.\\n\\nQ3. Technical contributions.\\n\\nThe first technical contribution is\\n1) Show why R(T) is difficult to design (Figure 1): R(T) correlates with the memorization effect, which depends on many factors and hard to quantize (also see Q2). We also clarify this point in the first contribution of the revised version. \\n\\nBased on the above observations, we\\n2) design a domain-specific search space to exploit the memorization effect.\\n3) propose a new method for hyperparameter optimization based on the analysis of problems on existing first order and derivative-free algorithms (Section 3.3.1).\\n\\nFor a summary of 2) and 3), please also see Reviewer 2 comments: \\\"The proposed method is based upon natural gradient-based updates to the hparams (which was really the only feasible way to tackle this problem given the complex dependence on the hparams and a good choice)\\\".\\n\\nBesides, we also make the 2) and 3) clearer in this revised version. Please see the difference with existing AutoML techniques at the end of Section 2.2.\"}",
"{\"title\": \"Reply to Reviewer#2\", \"comment\": \"Thanks for your comments.\\n\\nQ1. Why not demonstrate performance on real datasets?\\n\\nThanks for the suggestion. We are also aware of this potential problem, and we have done this part after the submission. Please check Section 4.3 in the revised PDF.\\n\\nFollowing (Wang et al., 2019), which is the new state-of-the-art deep face recognition method, we have tested the proposed method on face data sets. We train with VggFace2-R (Cao et al., 2018) data set, which is a noisy data set collected from the Google image search.\\n\\nTable 1 shows that the proposed method can consistently achieve the best performance on such real data sets. Thus, our method is not only useful with synthetic noise but also works well on real applications.\\n\\nQ2. I am uncertain as to how much findings on these simulated noise patterns carry over to real datasets and their associated noise patterns. If there is existing evidence indicating a strong correlation, then perhaps my review may have varied.\\n\\nThanks for the question. Yes, they are strongly correlated. We add the explanation of the synthetic noise in Section 4.2 of the revised version. Specifically,\\n1) The controlling variable can justify the effectiveness of the proposed method under specific conditions. In the context of learning with corrupted labels, the noise pattern is regarded as the key controlling variable. There are several common noise patterns (Patrini et al., 2017, Han et al., 2018), such as symmetric-flip and pair-flip. \\n\\nAll these noise patterns correspond to real-world scenarios. For example, on the macro-level, class cat flipping to the class dog makes sense, while class dog flipping to class cat also makes sense. Such flipping yields a noise pattern called symmetric-flip (Patrini et al., 2017). On the micro-level, for dogs, class Norfolk terrier flipping to class Norwich terrier makes sense, while class Norfolk terrier flipping to class Australian terrier not. This flipping yields a noise pattern called pair-flip (Han et al., 2018a), which depicts the fine-grained classification case.\\n\\n2) Since the noise pattern of real-world datasets can be the combination of simple noise patterns, we should first verify whether our proposed method works well on several common noise patterns before delving into complex real-world datasets. This is quite common in the area of learning with corrupted labels.\\n\\n3) Please also see Q1 above, the performance of the proposed method is therefore consistent on both synthetic and real data sets (Section 4.3).\"}",
"{\"title\": \"Reply to Reviewer#3 (part 2)\", \"comment\": \"Q6. Are all of the basic functions in Fig 2 necessary for the performance of the proposed method? How were they selected?\\n\\nWe do not select, all of them (in Figure 2) are used, and they are all necessary. A comparison is in Appendix B.2, which shows a simple decay function is not good enough (please also see Q9). \\n\\nQ7. Why is this motivated by the Taylor expansion?\\n\\nThanks for the suggestion. We have updated our explanation in the revised version. We want to show R(t) can be approximated by a group of basis functions.\\n\\nQ8. All of the curves look very similar.\\n\\nThanks for the comments. In the revised version, we have updated Figure 5. The searched R(T) enjoys much diversity now.\\n\\nQ9. A reasonable baseline motivated by these results is to apply a simple decay function to R(t) with a single hyperparameter controlling the rate of decay.\\n\\nThanks for the comments. We add such experiments in Appendix B.2 of the revised version.\\nWe can see that performance obtained from the proposed method is much better than that from a simple decay function. This again demonstrates the needs of approximating R(T) by a linear combination of some basic functions.\\n\\nQ10. All of the gains associated with this method could just be due to co-teaching dropping far fewer examples as training progresses, as its decay rule isn't optimal.\\n\\nThanks for pointing this out. Yes, the decay rule in origin Co-teaching is not optimal, but it is hard to design such R(T):\\n1) Searching R(T) is not an easy problem as it correlates with the memorization effect, which is hard to quantize (see Figure 1).\\n2) Please see Figure 5 in the updated version. We have updated experiments now, and we can see that the behavior of R(T) diverse and simply dropping more (also see Q9) does not work. For example, in the last row of Figure 5, all curves first decrease and then increase.\\n3) Please also check Appendix B.3, label precision is significantly increased by the searched R(T), which means the quality of samples used for training is greatly improved.\"}",
"{\"title\": \"Reply to Reviewer#3 (part 1)\", \"comment\": \"Thanks for your comments.\\n\\nQ1. Section 3.1 isn't very compelling to me. Experiments done on just CIFAR with two architectures and optimizers are certainly not sufficient to make any broad claims. I don't think this qualifies as a \\\"contribution\\\" of the paper.\\n\\nPlease check the new Figure 1 in the revised PDF. We have enumerated more perspectives there, i.e.,\\n1) three datasets (i.e., CIFAR-10, CIFAR-100, MNIST) with three noisy types (i.e., symmetric 20%, symmetric 50%, pair-flip 45%)\\n2) three models in learning from noisy labels\\n3) three optimizers (i.e., SGD, RMSProp, Adam)\\n4) two more important hyperparameters for optimizers (i.e., batch size, learning rate)\\n5) STD for each learning curve (resulting from 5 different runs)\\nThese datasets, models, and optimizers are all popularly used in the noisy label literature (Jiang et al., 2018; Han et al., 2018; Chen et al., 2019; Yu et al., 2019).\\n\\nWe have also clarified the first contribution to the revised version. The point is that: we want to show why R(T) is hard to design, as it correlates with the memorization effects, which is hard to quantize.\\n\\nQ2. Most practitioners use early stopping to halt training after the performance on the validation set drops. \\n\\nThanks for the suggestion, \\\"early stopping\\\" is indeed a choice for practical usage.\\n1) Please see Q3, \\\"held-out curve\\\" is a better measurement than \\\"early stopping\\\" to evaluate the robustness of a method. \\n2) We also reported the performance with \\\"early stopping\\\" in Appendix B.1 of the revised version. As can be seen, the proposed method not only has a better \\\"held-out curve,\\\" but also a better performance than \\\"early stopping.\\\"\\n\\nQ3. Why should we care about the held-out curve after the maximum is reached?\\n\\nThanks for the suggestion. This is a standard practice in the noisy label literature. We add an explanation in Section 4.2 of the revised version. The \\\"held-out curve\\\" is a better measurement than \\\"early stop\\\" to evaluate how a method is robust to noisy labels (Zhang et al., 2016; Arpit et al., 2017). \\n1) Ideally, if a method is robust to noisy labels, then its performance will increase with more training epochs (not to memorize noisy labels). Thus, if a method's held-out curve quick falls after reaching the maximum, then it means the method is NOT robust intrinsically.\\n2) If a method has a good \\\"held-out curve,\\\" it is more likely to have better performance than \\\"early stopping.\\\" This is also the case for the proposed approach.\\nFinally, we also report results with the early stop in Appendix B.1 of the revised version.\\n\\nQ4. Shouldn't we care more about the training curve, as at some point during training, the noisy labels will also be memorized? Isn't this the definition of the \\\"memorization effect\\\"? \\n\\nNo, memorization cannot be seen from the training loss.\\n1) Please see our introduction, and (Zhang et al., 2016; Arpit et al., 2017). Memorization means: \\\"learn easy patterns first and then over-fit on (possibly noisy) training data set.\\\" This means the training loss with always gets smaller with more epochs, no matter there are noisy labels or not. Thus, we cannot see memorization from the training curve. \\n2) Memorization must be seen from the \\\"held-out curve,\\\" which will increase first and then significantly decrease resulting from the memorization of noisy labels. 
This is also why the \\\"held-out curve\\\" is a good measurement (please see Q3).\\nWe have also shown what is the memorization effect in the revised version, i.e., top of page 2 (Section 1 Introduction) and Figure 1(a-b).\\n\\nQ5. What about standard baseline methods, e.g., active learning to help with this problem? Active learning seems highly relevant, yet it is not mentioned anywhere in this paper. \\n\\nThanks for the suggestion. We have added such a discussion in the revised version in Section 2.1. Active learning is not applicable here (see Active Learning Literature Survey, Burr Settles):\\n1) To do active learning, we need to obtain a classifier of which the performance is good enough to generate confidence predictions.\\n2) Active learning is sensitive to noisy labels and outliers. \\nThus, active learning is a choice to get more labeled data when there are only a few high-quality ones, not applicable for directly learning from noisy labels here.\"}",
"{\"title\": \"Reply to Reviewer#1\", \"comment\": \"Thanks for your comments. Please note that you are \\\"ICLR 2020 Conference Paper1554 AnonReviewer1\\\".\\n\\nQ1. It introduces too many basic concepts in autoML\\n\\nThanks for the suggestion. In the revised version, we have changed the outline of Section 2.2 and removed unnecessary concepts, e.g., supernet and one-shot. Briefly,\\n1) search space and algorithm are the most two important components in AutoML\\n2) derivative-free and gradient-based are two types of the popular optimization algorithm used\\n3) add a paragraph to clarify the difference between existing AutoML works and the proposed one.\\n\\nIn summary, domain-specific search space and efficient search algorithms are keys to a successful AutoML application (Feurer et al., 2015; Zoph & Le, 2017; Xie & Yuille, 2017; Bender et al., 2018, Hutter et al., 2018).\\n\\nQ2. The learned curvature (in Fig 5) does not follow the curvature. Does this mean this paper is contradicting itself self? \\n\\nPlease check the revised PDF. The paper is NOT contradicting itself.\\n1) In practice, R(T) correlates with the memorization effect, which heavily depends on many factors (see Figure 1). Thus, \\\"Target\\\" in Figure 2 only represents one possible curvature of R(T), and it does not mean every R(T) should look similar to \\\"Target\\\".\\n2) In the revised version, we have updated Figure 5. The searched R(T) enjoys much diversity, and some look similar to \\\"Target\\\" now, i.e., those on CIFAR-100 (the last row in Figure 5).\\n\\nQ3. The major difference between this paper and (Han et al. 2018) is how R(t) is defined and learned. The technical contribution of this paper is limited. & The curvature of defined R(t) is not needed?\\n\\nThanks for pointing this out. Please see our reply to the Q2 and Q3 for all reviewers. Briefly,\\n1) Identifying that why R(T) is hard to be searched is the first contribution.\\n2) After that, indeed, the difference is only on R(T) compared with Co-teaching. However, how to find a proper R(T) is a non-trivial problem. Please see Figure 1 and 5 in the updated version, R(T) depends on many factors and can exhibit a diverse pattern. \\n3) It is the proposed approach that can boost Co-teaching and then get consistently better performance on synthetic (Section 4.1), benchmark (Section 4.2), and real (Section 4.3) data sets. Specifically, Co-teaching with searched R(T) can even beat methods that use better criterions to find clean samples, i.e., Proposed v.s. Co-teaching+ (Yu et al., 2019) in Figure 4 and Proposed v.s. Co-Mining (Wang et al., 2019) in Table 1.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper focuses on the topic of learning from noisy -- or as they call it \\\"corrupted\\\" -- labels. Specifically this focuses on an approach where data selection -- ideally of cleaner/less noisy examples -- can help the learn model overcome data noise, akin to the approaches this builds upon (i.e, the Co-Teaching and MentorNet approaches). The specific idea here is to take an AutoML style approach to the problem in particular to determine how many examples are selected in each mini-batch. The proposed method is based upon natural gradient based updates to the hparams (which was really the only feasible way to tackle this problem given the complex dependence on the hparams and a good choice). The experimental results using synthetic noise corruption are indicative of improved performance compared to the baseline techniques.\\n\\nOverall while I thought the paper made for a very interesting read and showed some great promise I had some significant concerns as well.\", \"on_the_plus_side\": [\"The empirical results on the simulated noisy data are quite positive/\", \"The proposed method makes sense as does the search algorithm in the hparam space.\", \"My main concerns with the work stem from the empirical study and choices made there. While I understand that other existing techniques like the Co-teaching and MentorNet approaches have used simulated noise to study the impact of performance of these robustness techniques, at some point I question their validity on real datasets. Noise patterns in real datasets hardly follow some set pattern and thus I hesitate to read much into results derived solely on synthetic datasets. Given that the goal of these techniques is to improve performance when training with real, noisy labeled data why not actually demonstrate performance on such benchmarks? For example, there are numerous datasets from domains like crowdsourcing that allow you to get \\\"noisy\\\" ratings for datapoints. Wouldn't a more compelling argument be derived by showing improved performance on such datasets?\"], \"thus_to_summarize\": \"I worry that the results derived solely on simulated noise may not be very indicative of performance in more realistic settings and would request the authors to consider providing evidence on more realistic datasets.\\n\\nI also wanted to note that the paper exposition is lacking in some aspects and I needed to reread certain sections to make sure I understood them correctly. I think the paper would benefit from a good proofread not just from the grammar/spelling perspective (which there are multiple instances which could be improved) but also from the overall presentation and legibility perspective.\", \"all_this_said\": \"I want to clarify that this topic is not my research focus and hence I am uncertain as to how much findings on these simulated noise patterns carry over to real datasets and their associated noise patterns. If there is existing evidence indicating strong correlation, then perhaps my review may have varied.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper develops a method for sample selection that exploits the memorization effect. In essence, the authors adopt the co-teaching (Han et al. NeurIPS 2018) and MentorNet (Jiang et al., ICML 2018) framework, which selects some fraction of examples per minibatch that are hopefully \\\"cleaner\\\" than noisier examples to compute updates from. While in Han et al. the number of instances R selected depends on the number of epochs that have been completed, this paper instead seeks to learn R by approximating it as a linear combination of different types of basis functions and using natural gradient as the search algorithm. The search space proposed by the authors seem comprehensive: it encompasses the search space of co-teaching, the prior state of the art. Results on synthetic tasks as well as MNIST/CIFAR appear to show the superiority of the proposed method over random search, co-teaching, and other baselines, although the results don't seem conclusive. Overall, I have concerns with some of the contributions, experiments, and presentation, which leaves me at a weak reject.\", \"comments\": [\"Section 3.1 isn't very compelling to me. Experiments done on just CIFAR with two architectures and optimizers are certainly not sufficient to make any broad claims. I don't think this qualifies as a \\\"contribution\\\" of the paper.\", \"The paper is difficult to understand, and much of this difficulty stems from poor writing / presentation. The plots depicting experimental results are especially hard to follow.\", \"I'm a little confused with the setup here. Most practitioners use early stopping to halt training after performance on the validation set drops. As such, why should we care about the held-out curve after the maximum is reached? Shouldn't we care more about the training curve, as at some point during training the noisy labels will also be memorized? Isn't this the definition of the \\\"memorization effect\\\"?\", \"What about standard baseline methods e.g., active learning to help with this problem? Active learning seems highly relevant yet is not mentioned anywhere in this paper.\", \"Are all of the basis functions in Fig 2 necessary for the performance of the proposed method? How were they selected? Why is this motivated by the Taylor expansion?\", \"Figure 5 shows a bunch of R(t) curves learned by the proposed approach across a variety of datasets / noise levels. All of the curves look very similar! A reasonable baseline motivated by these results is to just apply a simple decay function to R(t) with a single hyperparameter controlling the rate of decay. I suspect this would also work better than the co-teaching approach, and perhaps render the more complex method here unnecessary. In fact, all of the gains associated with this method could just be due to co-teaching dropping far less examples as training progresses, as its decay rule isn't optimal.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the problem of learning from corrupted labels via picking up clean instances from training dataset. The sample selection mainly based on function R(t), which controls how many instances are kept. This paper proposes a unique curvature of R(t) based on intuition and presents how R(t) can be learned via combination of some existing functions. Natural gradient is presented to optimize the parameters in the autoML framework. Experimental results on both synthetic data and real-world data demonstrate the effectiveness of the proposed method.\", \"a_few_comments_on_this_paper\": \"1. The paper is very verbose and hard to follow. It introduces too many basic concepts in autoML.\\n2. A key part of the paper is the curvature of R(t), which is based on intuition. Meanwhile, the learned curvature (in Fig 5) doesn't follow the curvature. Does this mean this paper is contradicting its self? The curvature of defined R(t) is not needed?\\n3. The major difference between this paper and (Han et al. 2018) is how R(t) is defined and learned. The technical contribution of this paper is limited.\", \"minor_comments\": \"1. For all the figures, it is difficult to view the y-axis (or the y-axis is missing).\"}"
]
} |
SJeY-1BKDS | Understanding l4-based Dictionary Learning: Interpretation, Stability, and Robustness | [
"Yuexiang Zhai",
"Hermish Mehta",
"Zhengyuan Zhou",
"Yi Ma"
] | Recently, the $\ell^4$-norm maximization has been proposed to solve the sparse dictionary learning (SDL) problem. The simple MSP (matching, stretching, and projection) algorithm proposed by Zhai et al. (2019) has proved surprisingly efficient and effective. This paper aims to better understand this algorithm from its strong geometric and statistical connections with the classic PCA and ICA, as well as their associated fixed-point style algorithms. Such connections provide a unified way of viewing problems that pursue principal, independent, or sparse components of high-dimensional data. Our studies reveal additional good properties of $\ell^4$-maximization: not only is the MSP algorithm for sparse coding insensitive to small noise, but it is also robust to outliers and resilient to sparse corruptions. We provide statistical justification for such inherently nice properties. To corroborate the theoretical analysis, we also provide extensive and compelling experimental evidence with both synthetic data and real images. | [
"L4-norm Maximization",
"Robust Dictionary Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=SJeY-1BKDS | https://openreview.net/forum?id=SJeY-1BKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"B7ZbWZM_af",
"r1lCqgAWjH",
"H1e1hkCWoH",
"H1gO_IImcB",
"B1x_np7Ctr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798726259,
1573146773633,
1573146535474,
1572197999969,
1571859887716
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1552/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1552/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1552/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1552/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Main content:\\n\\nBlind review #3 summarizes it well:\\n\\nThis paper presents results on Dictionary Learning through l4 maximization. The authors base this paper heavily off of the formulation and algorithm in Zhai et. al. (2019) \\\"Complete dictionary learning via l4-norm maximization over the orthogonal group\\\". The paper draws connections between complete dictionary learning, PCA, and ICA by pointing out similarities between the objectives functions that are optimized as well as the algorithms used. The paper further presents results on dictionary learning in the presence of different types of noise (AWGN, sparse corruptions, outliers) and show that the l4 objective is robust to different types of noise. Finally the authors apply different types of noise to synthetic and real images and show that the dictionaries that they learn are robust to the noise applied.\\n\\n--\", \"discussion\": \"Reviews agree about the interesting work, including the connections of complete dictionary learning with classic PCA and ICA (after further clarification during the rebuttal period). Additional empirical strengthening during the rebuttal period also addressed a reviewer concern.\\n\\n--\", \"recommendation_and_justification\": \"As review #3 wrote, \\\"Overall this paper makes significant contributions by extending the work in [Zhai et. al's (2019) \\\"Complete dictionary learning via l4-norm maximization over the orthogonal group\\\"] to noisy dictionary learning settings\\\".\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks & Will extend non-asymptotic analysis\", \"comment\": \"Thanks for your detailed review and for your overall positive evaluation! In what follows, we provide more detailed responses to each of your comments. We have also performed more analyses per your subsequent suggestions. And we hope you find the updated draft adequately addresses your concerns.\\n\\nWe should have made our novelty more clear and hope to clarify the point here. We are aware that Zhai et al. [1] has already pointed out the connection between $\\\\ell^4$-maximization and ICA. However, this connection is merely at the formulation level and is *not* a novelty we claim for this paper. Instead, our work goes beyond and establishes the connections more at the algorithmic level and in particular, our work provides a unified understanding on how such efficient power-iteration like algorithms (FastICA and MSP) can be established under the unified framework of maximizing a convex function over a compact set. We consider this unification as valuable for the community, since the general result for power methods in maximizing convex function over a compact set only appears recently Journee et al. [2] and we believe this unified view will encourage further research in this direction.\\n\\nThanks for the suggestions on on which our paper can be improved with non-asymptotic concentration results. We did not include such results in the first submission due to limited space (as we prepared the paper for a 8-page version), but we already know there is no technical difficulty in reaching such results (which we have mentioned in footnote 9 of our paper.) As per your suggestion, we will provide non-asymptotic measure concentration results of the MSP algorithm in the updated version.\\n \\nIn addition, we will also provide some extra clarifications in the regime of non-Gaussian outliers in the updated version, please stay tuned. \\n\\nThanks again for your thoughtful comments, we will try our best to clarify all your concerns about our paper in the coming version.\", \"references\": \"[1] Zhai, Yuexiang, Zitong Yang, Zhenyu Liao, John Wright, and Yi Ma. \\\"Complete Dictionary Learning via $\\\\ell^ 4$-Norm Maximization over the Orthogonal Group.\\\" arXiv preprint arXiv:1906.02435, 2019\\n[2] Journ\\u00e9e, Michel, Yurii Nesterov, Peter Richt\\u00e1rik, and Rodolphe Sepulchre. \\\"Generalized power method for sparse principal component analysis.\\\" Journal of Machine Learning Research 11, no. Feb (2010): 517 - 553.\"}",
"{\"title\": \"Thanks & Will update draft accordingly\", \"comment\": \"Thank you for your detailed reading and for your positive opinion! Below, we provide a point-to-point response and hope to address your concerns.\\n\\nYes, we could have made the connection point more clear. Certainly, as you pointed out, there are a lot of algorithms which follow a projected/proximal gradient descent schemes. However, there is something interesting and nontrivial here: among all projected/proximal gradient descent methods, MSP algorithms lie in the optimization regime that allows the step-size to be infinite (i.e. MSP acts like a power iteration method) and hence resulting in more efficient algorithms than the traditional gradient descent type methods. Moreover, we make the comparison between MSP, Power-iteration, and FastICA (as stated in Table 1 of the paper) to illustrate the intuition behind the efficiency of the MSP algorithm. \\n\\nWe highly appreciate the ``ICA-basis-versus-dictionary-learning\\\" comment and it is also very surprising for us to see how $\\\\ell^4$-norm maximization can be derived and justified from different generative models -- ICA and Dictionary Learning. Qualitatively, we think such similarity comes from non-Gaussian property of the sparsity assumption -- as maximizing kurtosis promotes non-Gaussianity of the data which coincides with the sparse ground truth of Dictionary Learning problem. We will include a discussion on our intuition on why this occurs, because we certainly agree with you this is an intriguing phenomenon. Characterizing the exact conditions under which they coincide may be beyond the scope of the current paper and will leave the more quantitative analysis for future work.\\n\\nThank you for pointing this out, a very good point that we need to demonstrate. In the updated draft, we will provide more comparisons between the $\\\\ell^4$ formulation and the previous $\\\\ell^1$ based methods in terms of robustness.\\n \\n It is known that $\\\\ell^1$ minimization itself is not robust to noise or outliers (hence many works in the literature on Lasso for noise measurements and error correction for $\\\\ell^1$ minimization, see paper Wright et al. [1] and references therein.) In addition, learning the dictionary column by column is less robust than learning the entire dictionary holistically, as the error may propagate while the latter can leverage global information to denoise much more effectively. \\n\\nRegarding the run-time concerns, this is another good point that we did not address in the initial submission and thank you for the suggestion. We will also include run-time comparisons in the updated draft.\\n\\nThank you again for your review! Per your comments, we will update our paper according to your advice, please stay tuned.\\n\\nWe hope we have addressed all your concerns.\", \"references\": \"[1] Wright, John, and Yi Ma. \\\"Dense Error Correction Via $\\\\ell^ 1$-Minimization.\\\" IEEE Transactions on Information Theory 56.7 (2010): 3540-3560.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents results on Dictionary Learning through l4 maximization. The authors base this paper heavily off of the formulation and algorithm in Zhai et. al. (2019) \\\"Complete dictionary learning via l4-norm maximization over the orthogonal group\\\". The paper draws connections between complete dictionary learning, PCA, and ICA by pointing out similarities between the objectives functions that are optimized as well as the algorithms used. The paper further presents results on dictionary learning in the presence of different types of noise (AWGN, sparse corruptions, outliers) and show that the l4 objective is robust to different types of noise. Finally the authors apply different types of noise to synthetic and real images and show that the dictionaries that they learn are robust to the noise applied.\\n\\nOverall this paper makes significant contributions by extending the work in the paper referenced above to noisy dictionary learning settings and I would vote to accept based on these results.\\n\\nThe connections between Complete Dictionary Learning, PCA and ICA are interesting, but the algorithmic analogies seem superficial in my opinion. There are a lot of algorithms which follow a projected/proximal gradient descent scheme. If there are any deeper connections between the specific algorithms discussed, they should be spelled out more clearly. One point of clarification that I would like to raise is the similarity between the kurtosis and l4 objectives. This paper could be strengthened by delineating the conditions under which one would learn an ICA basis vs a Complete Dictionary. It seems to me that the only difference is in the generative model, and that maximizing the same objective under different data conditions could return an ICA basis or a Complete Dictionary. \\n\\nThe robustness theory and experiments on synthetic data are reasonable and demonstrate that complete dictionary learning is robust to the different noise conditions. I would like to how this technique compares to other complete dictionary learning algorithms (ER-SpUD, Complete dictionary learning over the sphere - Sun, Qu, Wright 2015) and whether the l4 objective is unique in providing this robustness. Another central claim of Zhai et. al. 2019 seems to be that l4 maximization is able to recover the entire dictionary at once, vs other algorithms that recover the dictionary one column at a time. To test this, I would like to see runtime evaluations and comparisons to other algorithms. While the claim of recovering the entire dictionary is true, it seems to me that requiring an SVD at each iteration would be very expensive. I am not completely convinced that the approach of estimating the entire dictionary would indeed be faster.\\n\\nTo summarize, I believe this paper would be a good addition to the literature on l4 maximization algorithms for dictionary learning. I am willing to adjust my score based on responses to the above concerns.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper explores the recently proposed $\\\\ell^4$-norm maximization approach for solving the sparse dictionary learning (SDL) problem. Unlike other previously proposed methods that recover the dictionary one row/column at a time, for an orthonormal dictionary, the $\\\\ell^4$-norm maximization approach is known to recover the entire dictionary once for all.\\n\\nThis paper shows that $\\\\ell^4$-norm maximization has close connections with the PCA and ICA problem. Furthermore, focusing on the MSP algorithm for solving the $\\\\ell^4$-norm maximization formulation, the paper highlights the connections of this fixed-point style algorithm with such algorithms for PCA and ICA. Subsequently, the paper studies the behavior of the MSP algorithm in the presence of noise, outliers, and sparse corruption. Unlike PCA, surprisingly, the MSP algorithm is shown to be robust to outliers and sparse corruption.\\n\\nOverall, the paper makes a nice effort towards better understanding the relatively new $\\\\ell^4$-norm maximization approach and its connection with other well-understood problems in the literature. Moreover, the paper takes the right step by studying the effect of non-ideal signal measurements on the underlying goal of dictionary learning. That said, the reviewer feels that, in the current form, the results in the paper are not novel enough to warrant an acceptance to ICLR. The connection of the $\\\\ell^4$-norm maximization formulation with ICA have been previously noted in other paper, so this would hardly qualify as a novel contribution. The analysis of the MSP algorithm in the presence of noise, outlier, and sparse corruption is not comprehensive enough. It would have been nice if the authors had provided a non-asymptotic analysis of the MSP algorithm in the presence of non-ideal measurements. Also, it is not clear how interesting the outlier formulation presented in the paper is. Shouldn't one consider outliers that go beyond the Gaussian distribution, ideally arbitrary outliers?\"}"
]
} |
BygKZkBtDH | Balancing Cost and Benefit with Tied-Multi Transformers | [
"Raj Dabre",
"Raphael Rubino",
"Atsushi Fujita"
] | This paper proposes a novel procedure for training multiple Transformers with tied parameters which compresses multiple models into one enabling the dynamic choice of the number of encoder and decoder layers during decoding. In sequence-to-sequence modeling, typically, the output of the last layer of the N-layer encoder is fed to the M-layer decoder, and the output of the last decoder layer is used to compute loss. Instead, our method computes a single loss consisting of NxM losses, where each loss is computed from the output of one of the M decoder layers connected to one of the N encoder layers. A single model trained by our method subsumes multiple models with different number of encoder and decoder layers, and can be used for decoding with fewer than the maximum number of encoder and decoder layers. We then propose a mechanism to choose a priori the number of encoder and decoder layers for faster decoding, and also explore recurrent stacking of layers and knowledge distillation to enable further parameter reduction. In a case study of neural machine translation, we present a cost-benefit analysis of the proposed approaches and empirically show that they greatly reduce decoding costs while preserving translation quality. | [
"tied models",
"encoder-decoder",
"multi-layer softmaxing",
"depth prediction",
"model compression"
] | Reject | https://openreview.net/pdf?id=BygKZkBtDH | https://openreview.net/forum?id=BygKZkBtDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4SVNjdLizJ",
"S1ewpvg_sr",
"Byx3UPgOiB",
"HJl57DgdjH",
"B1liTs2TYS",
"H1gErESpKr",
"Syl23DhcFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726229,
1573550015119,
1573549907848,
1573549858210,
1571830722997,
1571800123720,
1571633076493
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1551/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1551/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1551/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposed a method for training multiple transformers with tied parameters and enabling dynamic choice of the number of encoder and decoder layers. The method is evaluated in neural machine translation and shown to reduce decoding costs without compromising translation quality. The reviewers generally agreed that the proposed method is interesting, but raised issues regarding the significance of the claimed benefits and the quality of overall presentation of the paper. Based on a consensus reached in a post rebuttal discussion with the reviewers, I am recommending rejecting this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Efficiency is why our work is important. Our work can be a starting point.\", \"comment\": \"We thank you for your review and for taking the time to read our paper thoroughly.\", \"our_responses_to_your_questions_are_as_follows\": \"1. You are right that a decoding speed gain of 0.07s per sentence is limited, but this gain is obtained with a system as performant as the multi-tied model without routing at the corpus level (BLEU 35.0). With non-significant loss in BLEU, i.e. 34.7 for instance, a larger decoding speed gain is measured. Additionally, for a large pool of data to decode, the gain in decoding time is worthwhile. The main purpose of the dynamic layer selection approach is to draw attention to the concept of flexible decoding. One of the results is that, using source sentences only as information given to the classifier, the task is difficult. Regarding the training time of the tied-multi model, flexibility and hence faster decoding is not possible with a standard transformer. We will have to train several transformer models with different layer configurations to achieve the level of flexibility reached by the tied-multi model. While training 36 models takes needs 25.5 times the training time of a 6-6 model, training our flexible model takes only 9.5 times the training time of a 6-6 model. As such our work does have merit.\\n\\n2. We understand that working on multiple datasets would make our work more convincing but we do not expect our results to be dataset dependent as we make no assumption about the type of data used. Results on other datasets will be included in future manuscripts.\\n\\n3. We trained the RS+KD model but did not include it in the paper because we considered it fair to compare multilayer softmaxed models only. Nevertheless the BLEU score of RS+KD is 36.1 which is not statistically significantly different from the best BLEU of 36.3 of the multilayer model.\"}",
"{\"title\": \"Thank you. The reorganization will be done.\", \"comment\": \"We thank you for your review and for taking the time to read our paper thoroughly.\", \"our_responses_to_your_questions_are_as_follows\": \"1. Thank you for your suggestion regarding the reorganization of the paper. The basic model, which includes the multi-softmax functions, is the main point of the paper. Dynamic layer selection and distillation are two extensions of the basic model which allow to improve decoding speed and further parameters reduction.\\n\\n2. Although we did not report it, we found that it is actually quite hard to obtain gains as well as speed. We tried a large number of what we believed to be promising approaches and most of them failed to give gains. What we reported was the most promising one. Our hypothesis is that there is too much randomness in the behavior of our NMT models and while there are few gains, it should be worthwhile to take as much as we can get. Given that our routing method is simple we decided that it should be reported.\\n\\n3. Sorry for totally messing up section 4.2. Indeed the variable names are wrong in Eq. 2.\\n\\n4. The use of distillation in the context of flexibly decodable models that also use recurrently stacked layers is new but we do understand why it should be incorporated into another section.\\n\\n5. We thank you for pointing us to the other related work but the motivation is quite different as you say and our claim regarding novelty is relevant only to NMT.\\n\\n6. Regarding the training time we are comparing the total GPU computation hours.\"}",
"{\"title\": \"Thank you\", \"comment\": \"We thank you for your review and for taking the time to read our paper thoroughly. We are happy to read that you think that our work is promising.\", \"our_responses_to_your_questions_are_as_follows\": \"1. In the case of the implementation we used, our 6-6 model was trained for 300k iterations and we do agree that a 1-1 model should need fewer than 300k iterations. However, we observed that training a 1-1 model for much longer does not run into problems such as overfitting. The BLEU scores do not vary statistically significantly. We do agree that it might not give us the best 1-1 model and thereby limit fairness.\\n\\n2. Sorry for the confusion. In Eq. 3, x refers to the logits prior to the sigmoid layer produced by the neural network and y_k is the reference class for a given input sentence.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a novel procedure for training multiple Transformers with tied parameters which compresses multiple models into one enabling the dynamic choice of the number of encoder and decoder layers during decoding. The idea is simple and reasonable and the results are promising.\", \"i_have_several_questions_about_the_paper\": \"1. \\\"This enables a fair comparison, because it ensures that each model sees roughly the same number of training examples.\\\" This is not a fair comparison. Note that those models are of very different size, and thus they may need different numbers of samples for training. For example, a 1-1 model should need much less data for training than a 6-6 model. If the number of training samples is ok for the 1-1 model, it might be insufficient for the 6-6 model. Therefore, I think development set is necessary for a fair comparison.\\n\\n\\t2. I don't understand Eq. (3). What do x and y_k mean in this equation? Are they corresponding to x^I and y^i_k in Eq (2)? However, y^i_k in Eq. (2) is a translation, i.e., a text sentence, while y_k in Eq. (3) looks like a number in [0,1].\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This work proposes a way to reduce the latencies incurred in inference for neural machine translation. Basic idea is to train a model with softmax attached to each output of decoder layers, and computes a loss by aggregating the cross entropy losses over the softmaxes. During inference, it could either use one of the softmax or train an additional model which dynamically selects softmaxes given an input string. Experimental results show that it is possible to reduce latencies by trading off the translation qualities measured by BLEU. Dynamically selection did not show any gains in latencies, though, this work empirically shows potential gains in oracle studies. This work further shows that the model could be compressed further by knowledge distillation.\", \"I have several concerns to this work and I'd recommend rejecting this submission.\", \"One of the problems of this paper is presentation. This work basically combines three work together as a single paper, i.e., section 3 for the basic model, section 4 for dynamic selection and section 5 for distillation, with each section describing a separate experiment. I'd strongly suggest the author to focus on the main point, e.g., dynamic selection, and present the basic model and dynamic selection. Experiments should be presented in a single section for brevity.\", \"Similarly, this work should have been submitted when meaningful gains were observed in the dynamic selection method, given that the proposal is somewhat new. Otherwise, I don't find any merits to see this accepted in ICLR, given the rather negative results in section 4.\", \"The description in section 4.2 is totally messed up. x^i and y^i_k are strings since they are an input sentence and an output sentence, respectively,. However, they are treated as scalars in Equation 3 by multiplied with \\\\delta_k, subtracted from 1 and taking sigmoid through \\\\sigma. I strongly suggest authors to carefully check variables used in the equations and the description in the section.\", \"The authors claim that the use of knowledge distillation is novel. However, it is already widely known in the research community and I don't think it is worthy to keep it as a single section. It could have been described as a yet another experiment in a single experimental section.\"], \"other_comment\": [\"Although this paper claims that attaching a softmax for each output layer is new, there is a similar work in language modeling, though the motivation is totally different.\", \"Direct Output Connection for a High-Rank Language Model, Sho Takase, Jun Suzuki and Masaaki Nagata, EMNLP 2019.\", \"In section 3.4, this paper claims that the training of all 36 models took 25.5 more time, but took 9.5 more time for a tied-model when compared with a basic 6-layer Transformer. It is not clear to me whether this comparison is meaningful given that it might be possible to employ multiple machines to train 36 models.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"** Summary **\\nIn this paper, the authors propose a new variant of Transformer called Tied-multi Transformer. Given such a model with an N-layer encoder and an M-layer decoder, it is trained with M*N loss functions, where each combination of the nth-layer of the encoder and the mth-layer of the decoder is used to train an NMT model. The authors propose a way to dynamically select which layers to be used when a specific sentence comes. At last, the authors also try recurrent stack and knowledge to further compress the models.\\n\\n** Details **\\n1.\\tThe first question is \\u201cwhy this work\\u201d:\\na.\\tIn terms of performance improvement, in Table 2, we can see that dynamic layer selection does not bring any improvement compared to baseline (Tied(6,6)). When compared Tied(6,6) to standard Transformer, as shown in Table 1, there is no improvement. Both are 35.0.\\nb.\\tIn terms of inference speed, in Table 2, the method can achieve at most (2773-2563)/2998 = 0.07s improvement per sentence, which is very limited.\\nc.\\tIn terms of training speed, compared to standard Transformer, the proposed method takes 9.5 time of the standard Transformer (see section 3.4, training time).\\nTherefore, I think that compared to standard Transformer, there is not a significant difference.\\n2.\\t The authors only work on a single dataset, which is not convincing.\\n3.\\tIn Section 5, what is the baseline of standard RS + knowledge distillation?\"}"
]
} |
ryxPbkrtvr | BOSH: An Efficient Meta Algorithm for Decision-based Attacks | [
"Zhenxin Xiao",
"Puyudi Yang",
"Yuchen Jiang",
"Kai-Wei Chang",
"Cho-Jui Hsieh"
] | Adversarial example generation becomes a viable method for evaluating the robustness of a machine learning model. In this paper, we consider hard-label black-box attacks (a.k.a. decision-based attacks), which is a challenging setting that generates adversarial examples based on only a series of black-box hard-label queries. This type of attacks can be used to attack discrete and complex models, such as Gradient Boosting Decision Tree (GBDT) and detection-based defense models. Existing decision-based attacks based on iterative local updates often get stuck in a local minimum and fail to generate the optimal adversarial example with the smallest distortion. To remedy this issue, we propose an efficient meta algorithm called BOSH-attack, which tremendously improves existing algorithms through Bayesian Optimization (BO) and Successive Halving (SH). In particular, instead of traversing a single solution path when searching an adversarial example, we maintain a pool of solution paths to explore important regions. We show empirically that the proposed algorithm converges to a better solution than existing approaches, while the query count is smaller than applying multiple random initializations by a factor of 10. | [
"efficient meta algorithm",
"attacks",
"bosh",
"attacks bosh",
"viable",
"robustness",
"machine",
"model",
"box attacks"
] | Reject | https://openreview.net/pdf?id=ryxPbkrtvr | https://openreview.net/forum?id=ryxPbkrtvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"RhnCK9BuCm",
"HygDXsL2jS",
"rylxbjI3sr",
"HJlHc5UnsS",
"HylNib0yqB",
"rkxncuZQYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798726200,
1573837599001,
1573837559794,
1573837453009,
1571967387531,
1571129492465
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1547/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1547/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1547/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1547/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1547/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes BOSH-attack, a meta-algorithm for decision-based attack, where a model that can be accessed only via label queries for a given input is attacked by a minimal perturbation to the input that changes the predicted label. BOSH improves over existing local update algorithms by leveraging Bayesian Optimization (BO) and Successive Halving (SH). It has valuable contributions. But various improvements as detailed in the review comments can be made to further strength the manuscript.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1 # Part 2\", \"comment\": \"[Time complexity]\\n\\nThe time complexity ( number of queries regarding the parameters mentioned in Algorithm 1 is relatively simple. We briefly discuss it in Appendix H.\\n\\n[Figure 1]\\nSorry for the confusion. In the description of Figure 1, a local minimum indicates a point on the decision boundary that has the shortest distance to the original example, compared with other nearby points on the decision boundary. Those local minimums are the points where a decision-based attack can converge to. \\n\\nIn Figure 1, we plot the decision boundary for a given model around the original example. The decision boundary is a high-dimensional surface, so we plot its projection to a 2D tangent plane. To choose which 2D hyperplane to project to, we run a decision-based attack from two random initialization points, and use their converged perturbation directions as the vector to form the 2D hyperplane. These two directions are presented as red lines in Figure 1, showing that they are pointing to local minimums on the surface (local minimum in terms of distance to the original example). We chose the hyperplane by this way since it guarantees that there are at least 2 local minimums on this hyperplane. We have revised the paper to make this more clear. \\n\\n[minimize l(.) / g(.) in Algorithm 2]\\nThere is a mistake in Algorithm 2, we should minimize $g(.) / l(.)$, sorry for the confusion! We have corrected this in the revision. \\n\\n[Theorem 1]\\nTheorem 1 in Appendix B.2 is the results of Bergstra et al.'s. We put it there so the readers can understand that more easily. \\n\\n[grammar errors]\\nThank you for the editorial comments we correct them in the revised version.\"}",
"{\"title\": \"Response to Reviewer 1 # Part 1\", \"comment\": \"We thank Reviewer 1 for the detailed comments and critiques.\\n\\n[Related work]\\nWe thank the reviewer for the references and we have added a new paragraph in the related work section to discuss combinatorial heuristics and Genetic Algorithms. \\n\\n[Other black-box optimization algorithms]\\nThank you for the suggestion. Despite there are several optimization algorithms and well-established packages for optimizing black-box functions, these approaches cannot be straightforwardly applied to derive adversarial attack due to the following reasons: \\n\\n1. It\\u2019s not clear whether these methods can be used to conduct decision-based attacks. \\nIn the decision-based attack literature, these methods have not been applied in any previous paper. Although theoretically, with the formulation of (Cheng et al., 2019), decision-based attack can be formulated as a black-box optimization problem; in practice, none of these algorithms (NOMAD/rbf-opt) has been used due to the high dimensionality of the attack objective function. For example, on ImageNet data, the input dimension is 224 * 224 *3, leading to a black-box optimization problem with 150,528\\u202c variables. This is beyond the scalability of classical black-box optimization algorithms, and it\\u2019s nontrivial to apply them for decision-based attack. We do agree it will be interesting to study whether those algorithms can be applied to decision-based attack, but applying those algorithms itself will be a separate paper and is out of the scope of this study. For example, there are papers applying genetic algorithms to soft-label black-box attack, and in fact developing such method is nontrivial even for soft-label black-box settings (see Moustafa et al., \\u201cGenattack: Practical Black-box Attacks with Gradient-free Optimization\\u201d). Given that hard-label black-box (decision-based) attack is much more complex than soft-label black-box attack, we believe applying NOMAD/rbf-opt to conduct decision-based attack is nontrivial. \\n\\n2. In addition, we want to emphasize that our goal in this paper is not to find a good optimizer to solve the decision-based attack objective proposed by (Cheng et al., 2019). Instead, our goal is to propose a meta-algorithm such that given an existing iterative local update-based attack (denoted by A), our algorithm can boost the performance of A by a mixed strategy of Successive Halving + TPE resampling. Therefore, even if NOMAD/RBF-OPT attack exists (which is not the case as we argued in 1), they can be viewed as a base attack denoted by A, and our meta-algorithm can be used to improve their performance, not competing with them. \\n\\n[Runtime comparison]\\nThank you for the suggestion. We have added the runtime comparison in Appendix F. In fact, our method is a meta-algorithm on top of a base attack, so it cannot be faster than running the base attack. However, we can get much better solutions than the base attack, where the base algorithm, even if it runs for a very long time, it cannot converge to such good solution (we stop each attack when their solution converged in Table 8). Furthermore, for decision-based attacks, people mostly care about the number of queries. Imagine we are attacking a Google Cloud image recognition system, then only the number of queries is important since Google Cloud will limit the number of queries for each user, but the computation can be done off-line using multiple servers. 
\\n\\n[Comparing to \\\"optimal\\\" attacks]\\n\\nIn table 1, we do include the results of C&W attack in comparison and note that it is recognized as one of the best white-box attack methods. However, white-box attacks may not always outperform the proposed decision-based attack. This is because white-box attacks are solving a non-convex optimization problem (attack objective) and gradient-based optimizers can easily be stuck at local minimums. We have also demonstrated in the first two paragraphs of section 3 that white-box attacks are also sensitive to the initial point. \\n\\nIn fact, it has been shown in (Katz et al., \\u201cReluplex: An Efficient SMT Solver for Verifying Deep Neural Networks\\u201d) that for a ReLU network, finding the optimal attack is NP-hard (see their Appendix I), and an exponential time algorithm proposed in the same paper can only scale to networks with <200 neurons. So it\\u2019s computationally impossible to find the optimal attack for the MNIST/CIFAR/ImageNet networks used in our paper. \\n\\nMoreover, it\\u2019s NP-complete to find the optimal attack for the tree-based model (see Kantchelian et al., \\u201cEvasion and Hardening of Tree Ensemble Classifiers\\u201d, Section 4.2).\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for the valuable comments. We have addressed the suggestions. Please see the responses below.\\n\\n[Analysis] \\nOur algorithm aims to improve the solution of a base attack, so if the base attack can converge to a stationary point, our algorithm will inherit the same property. Beyond convergence to stationary points, since finding the global minimum of adversarial perturbation is NP-hard (see Katz et al., \\u201cReluplex: An Efficient SMT Solver for Verifying Deep Neural Networks\\u201d), it\\u2019s almost impossible to guarantee the convergence to a global minimum (point on the decision boundary with minimum distance to the original example). \\n\\n[a lot of parameters]\\nFor the completeness of the algorithm, we list all the relevant hyper-parameters. However, our algorithms do not sensitive to all of them. In fact, in the experiments, we fix parameters $k$, $s$, $T$ and $\\\\alpha$ and only tune $M$ and $m$. Also, $inf$ just means infinity and it is not a parameter. We also provide a table discussing how we choose the parameters for different datasets in Appendix D.\\n\\n[$\\\\epsilon$ different for different attack model]\\nThanks for the suggestion. We have added the curves of ASR versus $\\\\epsilon$ in Appendix G, and the results show that BOSH Sign-OPT is consistently better than Sign-OPT. Different tasks/models have different difficulties to attack, and that is why we chose different $\\\\epsilon$ in the table. We select a relatively good $\\\\epsilon$ value so we can compare different methods.\\n\\nWe also want to emphasize that the main criterion for comparing decision-based attack is the average distortion. All the decision-based attacks included in our comparisons are iterating on the decision boundary or outside decision boundary. Therefore all the iterates of these algorithms are adversarial examples, and thus it\\u2019s more important to compare their distance to the original example (Avg $L_2$). ASR is counting the ratio of adversarial examples within certain $\\\\epsilon$, which is just a way to summarize the $L_2$ distance statistics. \\n\\n\\n[Why two variables in algorithm 2]\\n$min_score = \\\\inf$ means set min_score to an infinity value and the variable min_score is used to store the minimum score in the following while loop. In algorithm 2, we miss a line after line 9 to update $min_score$ to currently minimum value: $min_score = g(u_{tk})/l(u_{tk})$. We have corrected this in the revision. Sorry for the mistake!\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a meta-algorithm for the so-called \\\"decision-based attack\\\" problem, where a model that can be accessed only via label queries for a given input is attacked by a minimal perturbation to the input that changes the predicted label. The algorithm, BOSH, augments any iterative algorithm for this problem with a diversification strategy based on bayesian optimization and throwing away bad solutions. Empirically, it is shown that BOSH can improve the performance of recently developed algorithms for this problem, by exploring more solutions and refining them intelligently.\\n\\nOverall, the decision-based attack problem is very practically relevant as it assumes minimal access to the classifier. I also really like that the authors looked into tree-based models in addition to neural networks. The algorithmic ideas that are proposed are simple and effective, as supported by the experimental results.\\n\\nHowever, I have some serious comments about the experimental evaluation that I believe can substantially improve the quality of the paper, if addressed. Whether I raise my score or not will depend on how well the authors address these questions. I also have concerns about related work in heuristic algorithms.\", \"questions\": \"- Related work: the ideas of diversifying solution paths and throwing away bad solutions are very popular in combinatorial heuristics. You should do a thorough review of work in that area and in Genetic Algorithms (GA). You could look into the following classical/survey papers as starting points, in particular the first survey's chapter 4.\\n\\nGlover, Fred, and Manuel Laguna. \\\"Tabu search.\\\" Handbook of combinatorial optimization. Springer, Boston, MA, 1998. 2093-2229.\\nFeo, Thomas A., and Mauricio GC Resende. \\\"Greedy randomized adaptive search procedures.\\\" Journal of global optimization 6.2 (1995): 109-133.\\n\\n- Other Black-Box algorithms: you should compare against well-established black-box optimization algorithms such as NOMAD and RBF-OPT. Both are based on very solid mathematical foundations and have high-quality open source implementations:\", \"nomad\": \"https://www.gerad.ca/nomad/\", \"rbf_opt\": [\"https://github.com/coin-or/rbfopt\", \"Runtime comparison: your analysis with respect to number of queries is very good and insightful. In addition, we should get a sense of the runtime performance. If you run each of the approaches with the same time limit, how do they fare?\", \"Comparing to \\\"optimal\\\" attacks: we need to know how well the solutions are compared to the best possible, or a close-enough approximation. You could run white-box attacks and compare the relative error to the quality of the white-box attack. Otherwise, it is hard to tell what gap remains to be closed algorithmically and it is difficult for other researchers to know whether it's worth trying to improve what you propose here in the future.\", \"Time complexity: please give a time complexity analysis of BOSH as a function of all its hyperparameters.\"], \"clarity\": [\"Figure 1: I don't understand what this figure shows. Where are the 2 minima? Please clarify further.\", \"You minimize l(.) / g(.) in Algorithm 2, but maximize it in Appendix B.2. 
Maximizing makes more sense. Which one is it?\", \"Theorem 1: Is that your result or Bergstra et al.'s?\"], \"minor_comments\": [\"\\\"Adversarial example generation becomes a viable method for evaluating the robustness of a machine learning model.\\\" --> \\\"Adversarial example generation has become a viable method for evaluating the robustness of a machine learning model.\\\"\", \"\\\"when searching an adversarial\\\" --> \\\"when searching for an adversarial\\\"\", \"\\\"Distortion\\\" is used in the literature much less than \\\"Perturbation\\\"; consider switching them.\", \"\\\"our mega algorithm\\\" --> \\\"our meta-algorithm\\\"\", \"Please use consistent notation: SignOPT or Sign-OPT.\", \"Appendix B.1: \\\"undifferentiable\\\" --> \\\"non-differentiable\\\"\", \"\\\"are the t \\u2212 x1 samples\\\" --> \\\"are the t \\u2212 1 samples\\\"\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1547\", \"review\": \"In this paper, the authors study the adversarial example generation problem, in the difficult case where the attacked model is a black box. Since the model is unknown, the approaches based on the minimization of a loss function with a gradient based optimizer do not apply. The current alternatives, known as decision-based attack, use iterative local updates from a starting point to a local minimum, where the class of the adversarial example is different from the initial example while its distance stays close to the initial one.\\n\\nFor handling the sensibility to starting points, the authors propose a meta-algorithm, which uses any iterative local update based attacks, and which maintains a set of solutions corresponding to different starting points. The proposed algorithm uses successive halving for iteratively maintaining empirical good solutions by discarding the worst half of solutions at each step, and uses Tree Parzen Estimator to explore by resampling promising area.\\n\\nIn the experiments, the meta-algorithm uses SignOPT attack. It is compared with three decision based attacks, including SignOPT. Three image datasets are used. The attacked models are neural networks and gradient boosting tree in the last experiment.\", \"pros\": [\"The paper is well-written and easy to follow.\", \"Generic algorithm.\"], \"cons\": \"- No analysis is provided. \\n- The proposed algorithm has a lot of parameters: $k, M, s, m, T, inf, \\\\alpha$. \\n- The $\\\\epsilon$ are not the same for each attacked model (table 1 and 3). Is it the result of a post-optimization? Could you plot the curves ASR versus $\\\\epsilon$ ?\\n- In algorithm 2, $min_score=inf$, whatever $t$, so why using two variables ?\\n\\n___________________________________________________________________________________________________________________________________\\nI read the rebuttal.\\nThanks you for answering my concerns.\\n\\nI think that it is possible to provide some theoretical guanrantees. For instance, may be one could show that the quality of the attacks is increasing when Algorithm 1 is run. Finding the highest increasing rate could be useful for tuning the parameters of the algorithm.\\nHowever, I understand that this could be tricky.\\n\\nI took a look to Figure 6. Good point: BOSH Sign-OPT attack outperforms Sign-OPT attack whatever $\\\\epsilon$.\"}"
]
} |
rJgDb1SFwB | MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis | [
"Margherita Rosnati",
"Vincent Fortuin"
] | Causing 5.4 million deaths worldwide every year and a healthcare cost of more than 16 billion dollars in the USA alone, sepsis is one of the leading causes of hospital mortality and an increasing concern in the ageing western world. Recently, medical and technological advances have helped re-define the illness criteria of this disease, which is otherwise poorly understood by the medical society. Together with the rise of widely accessible Electronic Health Records, the advances in data mining and complex nonlinear algorithms are a promising avenue for the early detection of sepsis. This work contributes to the research effort in the field of automated sepsis detection with an open-access labelling of the medical MIMIC-III data set. Moreover, we propose MGP-AttTCN: a joint multitask Gaussian Process and attention-based deep learning model to predict the occurrence of sepsis early, in an interpretable manner. We show that our model outperforms the current state-of-the-art and present evidence that different labelling heuristics lead to discrepancies in task difficulty. | [
"time series analysis",
"interpretability",
"Gaussian Processes",
"attention neural networks"
] | Reject | https://openreview.net/pdf?id=rJgDb1SFwB | https://openreview.net/forum?id=rJgDb1SFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"y8aDfpoSc_",
"rJgYwrPKoH",
"BygirHwtir",
"rylwMBDKsr",
"H1gqUrzD5H",
"r1ge0ZnWcS",
"BylDbvpdtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726171,
1573643616600,
1573643586705,
1573643535436,
1572443473750,
1572090311642,
1571505918874
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1546/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1546/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1546/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1546/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1546/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1546/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The problem of introducing interpretability into sepsis prediction frameworks is one that I find a very important contribution, and I personally like the ideas presented in this paper. However, there are two reviewers, who have experience at the boundary of ML and HC, who are flagging this paper as currently not focusing on the technical novelty, and explaining the HC application enough to be appreciated by the ICLR audience. As such my recommendation is to edit the exposition so that it more appropriate for a general ML audience, or to submit it to an ML for HC meeting. Great work, and I hope it finds the right audience/focus soon.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for taking the time to read through our paper and share your thoughts. We are going to respond to your comments in the following.\", \"interpretability\": \"\", \"an_approach_into_the_validation_of_interpretability_of_the_results_is_given_in_figure_5\": \"the time-points closer to the sepsis onset are deemed more important by the model, which is intuitively where you would see a patient\\u2019s health worsening.\\nAs per the relevance of the attention mechanism, Wiegreffe and Pinter argued against the point made by Jain and Wallace (\\u201cAttention is not not Explanation\\u201d). We believe a fully fleshed proof of the explainability of attention is beyond the scope of this paper.\", \"baselines\": \"Regarding comparison to baselines, to the best of our knowledge, the work of Moor et al. is the state of the art. Moreover, we decided to also compare our work to Insight, as this interpretable model is the most advanced one validated by the clinical community (as it is now undergoing clinical trials). Our improvement over Moor et al. is not only in the actual performance (at least on their labels) but mostly in the interpretability of the model.\", \"minor\": \"We updated Figure 1 to make the caption more useful.\\n\\nIf you have any further suggestions on how to improve the paper, please let us know.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for taking the time to read through our paper and share your thoughts. We are going to respond to your comments in the following.\", \"new_sepsis_labels\": \"Regarding the different labelling of the data, we will make the code available once the paper will be reviewed - as it is now on a (non anonymous) github repository. As such, the labelling is part of the contributions of the paper.\\nThe decision to have zero contributions reflects that we assume a patient is healthy unless proven otherwise. This is in line with the official Sepsis-3 guidelines, which claim that \\u201cthe baseline SOFA score can be assumed to be zero in patients not known to have preexisting organ dysfunction.\\u201d (https://jamanetwork.com/journals/jama/fullarticle/2492881, Box 3). Our assumption is that the authors of the Sepsis-3 guidelines are aware of possible clinical practices and have taken them and other clinical biases into consideration when fleshing out their recommendations. \\nMoreover, as opposed to Moor et al who ignore cases that do not have complete data, our labels will also contain patients with fewer records and hence \\u2018noisier\\u2019 time series, but also a broader and more realistic use-case. On the other hand, only looking at well documented patients is not in line with the aim of the research stream: if a patient is already well attended, then doctors are already well aware of their health conditions and a diagnostic support tool would only bring marginal benefit.\", \"interpolation_in_figure_5\": \"\", \"your_point_on_figure_5_can_be_explained_by_the_multitask_nature_of_the_gaussian_process\": \"even if there is no input for that specific covariate, the model is able to infer its value from the other values it is able to record.\", \"mgp_samples\": \"Regarding the MC samples y_{MGP}, as written in the original paper, they are taken from the posterior over t\\u2019 defined by \\\\mu and \\\\Sigma in equation (6) in order to approximate its distribution. Could you be more specific about which part is unclear?\", \"dimensionality_of_the_latents\": \"z_j and z_j\\u2019 should be N x (M+Q), we amended the paper. Thank you for spotting that.\\n\\nIf you have any further suggestions on how to improve the paper, please let us know.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for taking the time to read through our paper and share your thoughts. We are going to respond to your comments in the following.\", \"numerical_performance_results\": \"We added the table with the numerical results in Appendix C.4.\", \"choice_of_covariance_times\": \"In response to the time covariances, we initially started with one covariance for all variables, which performed worse than the presented model. We also clustered the different features solely based on data sampling frequency, and found that two clusters were optimal: the clusters have low wss and are intuitive to the medical staff. It is especially given the latter point that we decided for two clusters instead of treating the number of covariances as a hyperparameter.\", \"interpolation\": \"As per the interpolation of the signal, we would like to point out that even the most frequently sampled data is sampled every 15 minutes. As such, micro-movements in data would already be missing. However, in order to limit the amount of smoothing, we decided to use a kernel that has no moments greater than two. As such, these kernels are able to capture \\u2018jumpier\\u2019 behaviours.\\n\\nIf you have any further suggestions on how to improve the paper, please let us know.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors consider a combination of an Gaussian model and neural network learned together to be able to find specific Gaussian features that would predict sepsis in a more intuitive, interpretable way for a medical experts. Also, they have provided a new labelling for the MIMIC-III dataset, which is of great value.\\n\\nUsually constraining the feature space reduces the accuracy, as one tends to miss important features. However, here, one is using a Gaussian model to generate a feature space that is easier to train with a neural network (by filling the sparse data by Gaussian process interpolation) but also augment the dataset.\\n\\nThe author report that his improves prediction. Unfortunately, I did not find tables of solutions where one could the actual impact. The authors should include a numerical table of their result comparison results. Now there is only a narrative in section 5.3 and an image showing that at different covariance times, different feature groups starts to interpret the results. Obviously, a question arises if the model would perform even better with a combination of covariance times, or is there some covariance time range that is missing that would improve the result even more.\\n\\nThe Gaussian model creates smooth interpolation of data spaces and also forces the training to look at corresponding smoothened features - that are very good for human eyes. However, there are situations (like the detecting heart beat from a video from a head moving with a recoil from the blood rushing to the brain) in where the signal is too weak for human to see, but is definitely there for a computer. I would state that this as interpretable, as an explicitly visible signal would be. Even shorter time constant signal might be valuable as well, but it would not be visible here... It seems that in sepsis, it was a good idea, as it improves the result compared to the situation of not using the Gaussian model. \\n\\nOne has to be careful to state that this would address the interpretability of the results. Gaussian process by itself is not giving understanding nor interpretability as it is too general. But it can make the provided solution \\\"teachable\\\" to a human expert by showing what visible features one can track.\\n\\nCompare this to a situation where one is using physical model to regularize detection. It has the analogous two model structure like the one in the manuscript. In https://xbpeng.github.io/projects/SFV/index.html the authors of the paper report that that one can achieve a better pose estimation by constraining the pose to only those that are achievable by a physical model based policy trained by reinforcement learning. This one is able to \\\"interpret\\\" the pose.\\n\\nAs a summary, the authors have done solid and valuable work in improving the accuracy detection. They should have a more formal way to present the results and baseline comparisions as tables. On the explainability and interpretability, there remains a lot of interpretations and one has lots of explaining to do, even after this manuscript.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors present a Sepsis-3 compliant labeling of the MIMIC-III dataset and a sepsis-prediction model largely based on MGP-TCN that uses attention mechanisms to enable explainability.\\n\\nIt is not entirely clear what the authors mean by MC samples from Y_MGP, are these simply samples from the posterior in (6)?\\n\\nIf z_j and z'_j are M-dimensional, how does one apply (8) and (9) for W_{\\\\alpha,0}, W_{\\\\alpha,1} being (M+Q)-dimensional or W_{\\\\beta,0}, W_{\\\\beta,1} being matrices?\\n\\nThe labelling of the data, largely following Johnson & Pollard (2018) and Moor et al (2019), is only different to Moor et al (2019) in the assumption that in the SOFA calculation missing values have zero contribution. Unless the authors provide evidence that this is reasonable, it is not necessarily clear whether labels resulting from the proposed scheme will be biased and affected by differences in clinical practice at different sites or data collection practices. That being said, it is not clear whether the proposed labeling is a contribution from the work.\\n\\nThe fact that the proposed labels are harder to fit does not imply that the proposed labels are better or more reasonable. This provided that is difficult to know (without ground truth) whether the difficulty originates from a broader use case (not as easy as Moor et al (2019)) or labels being noisy, imperfect proxies for sepsis diagnosis. I understand the author's motivation for doing it, however, their approach is not sufficiently justified. I also agree that predicting sepsis in a realistic setting is harder than suggested in prior work, however, the proposed labeling does not necessarily yields evidence of that being the case.\\n\\nThe interpretation of the covariance matrices of the MGP is interesting, though not surprising considering that covariates in green are measured regularly while blue covariates are ordered sparingly.\\n\\nFigure 5 is interesting, though raises questions about of interpretability of the model. How should unobserved covariates be interpreted (INR in Figure 5)?\\n\\nIn summary, the contributions of the present work are not sufficiently justified (labeling), the novelty of the proposed model is minor, relative to MGP-TCN, and the added value of the attention mechanism as a means to interpret predictions in terms of the journey of a patient is not clear.\", \"minor\": [\"Figure 1 needs a better caption. Being in page 2 makes it very difficult to understand.\", \"TCN is used before being defined.\", \"In (1) it should be t_{p,i,k} not t_{p,k,i}\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I've read the rebuttal and I'd like to keep my score as is. My main concern is the questionable role of attention in making the model more interpretable (which is the main contribution of the paper).\\n\\n###########################\\n\\nThe paper proposes a new model for automated sepsis detection using multitask GP and attention-based GP. The sepsis detection problem is of paramount importance in the clinical domain and the authors rightly emphasized that point. Also the paper tries to combine interpretability with prediction accuracy by using attention mechanism. \\n\\nThe paper is generally well written and well motivated; however, in terms of technical novelty and empirical evidence the paper can be further improved. \\n\\nThe MGP-AttnTCN model is mostly a minor modification of the model proposed by Moor et al. 2019 and has the additional attention element to be more interpretable. Unfortunately, it\\u2019s not easy for an ICLR reader without any medical background to evaluate the validity of the interpretability results provided in the paper. Furthermore, the recent works in NLP have agued against the value of attention for interpretability (see for instance \\u201cAttention is not Explanation\\u201d by Jain & Wallace 2019). That said, I believe the paper is probably a better fit for a machine learning in healthcare venue (such as MLHC). \\n\\nIn terms of empirical evidence of the prediction accuracy the paper only compares with Moor et al 2019 (which does not show a significant improvement in the realistic setting) and a much older InSight paper (2016). This would have been typically enough for a paper with major technical novelty; however, for this paper, I believe adding more recent baselines and discussing the advantages of the method over these baselines would be necessary.\", \"minor\": \"Caption in Figure 1 can be more informative and useful for the reader if you add more details on different parts of the model. \\n\\u201cGraphically, once can observe\\u201d should be \\u201cone can observe\\u201d .\"}"
]
} |
rkgIW1HKPB | Unsupervised Representation Learning by Predicting Random Distances | [
"Hu Wang",
"Guansong Pang",
"Chunhua Shen",
"Congbo Ma"
] | Deep neural networks have gained tremendous success in a broad range of machine learning tasks due to their remarkable capability to learn semantic-rich features from high-dimensional data. However, they often require large-scale labelled data to successfully learn such features, which significantly hinders their adoption in unsupervised learning tasks, such as anomaly detection and clustering, and limits their application in critical domains where obtaining massive labelled data is prohibitively expensive. To enable downstream unsupervised learning in those domains, in this work we propose to learn features without using any labelled data by training neural networks to predict data distances in a randomly projected space. Random mapping is a highly efficient yet theoretically proven approach to obtain approximately preserved distances. To predict these random distances well, the representation learner is optimised to learn class structures that are implicitly embedded in the randomly projected space. Experimental results on 19 real-world datasets show our learned representations substantially outperform state-of-the-art competing methods in both anomaly detection and clustering tasks. | [
"representation learning",
"unsupervised learning",
"anomaly detection",
"clustering"
] | Reject | https://openreview.net/pdf?id=rkgIW1HKPB | https://openreview.net/forum?id=rkgIW1HKPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5I3IDOFP6",
"Hkxfxi0jjH",
"BJl5zFAisS",
"BJlMz_AooH",
"SyejtPCosB",
"rkxisntMor",
"SyeWNBsjFr",
"SJxM3ExoFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726143,
1573804778029,
1573804305583,
1573804041728,
1573803906727,
1573194915476,
1571693865048,
1571648682167
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1545/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1545/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1545/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1545/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1545/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1545/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1545/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers agree that this is an interesting paper but it required major modifications. After rebuttal, thee paper is much improved but unfortunately not above the bar yet. We encourage the authors to iterate on this work again.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision summary\", \"comment\": \"Dear All reviewers,\\n\\nTremendous thanks for your constructive comments that have helped us substantially enhance the paper. In summary, we have made the following refinement in the updated paper.\\n\\n1. We have rewritten the last part of the introduction and reorganised the second section to better highlight the main contributions of this work.\\n\\n2. We have substantially refined Section 3 to allow readers to have a more straightforward understanding of the theoretical foundation of our method, and to understand the underlying optimisation our method does \\n\\n3. A series of new empirical results are added in Appendix E to G to address several concerns raised. Specifically, in Appendix E, our results show that our method performs stably with different representation dimensions in both unsupervised tasks. In Appendix F, we show empirical results for the testing runtime, which demonstrate that our method is generally comparably fast to the most efficient methods in both tasks. In Appendix G, we make a comparison between our method and advanced representation learning methods that are specifically designed for raw text/image data. The results show that our method can outperform these advanced representation learning methods in most cases.\\n\\n4. All the minor issues have been fixed in the updated paper.\\n\\nPlease refer to the updated paper for the detailed revision. We believe the revised paper have addressed all your major concerns. Please see our replies below for the detailed responses to your comments. Thanks.\\n\\nBest regards,\\nAuthors of Paper 1545\"}",
"{\"title\": \"Concerns addressed (the optional losses, network architectures, new comparison)\", \"comment\": \"Thanks for your positive and constructive comments, which helps substantially refine our paper. Your concerns are addressed as follows.\\n\\n1. The discussion of two optional losses. In the refined paper, we have created the Section 2.2. to focus on the discussion of the two optional losses, and Sections 1 and 2.1 have been substantially refined to discuss the relationship between the proposed random distance prediction loss and these two optional losses. Hope the paper better explains the optional losses now. \\n\\n2. The improvement in performance. In a few cases of the clustering datasets, it is true that the improvement of RDP is marginal w.r.t. Org in terms of NMI, but the improvement is very substantial in terms of F-score on datasets such as R8, Olivetti and RCV1; on most clustering datasets, RDP achieves substantial improvement over all the other four competing methods in both performance metrics. In the anomaly detection datasets, our method RDP outperforms all the five competing methods in both AUC-ROC and AUC-PR in at least 12 out of 14 datasets. This improvement is statistically significant at the 95% confidence level according to the two-tailed sign test across the 14 datasets. \\n\\n3. The contribution of the optional losses. In general, the optional auxiliary losses are designed to provide complementary supervision information for the random distance prediction loss, resulting in different learning constraints. These complementary supervision work in most cases. For example, in the anomaly detection task, as shown in Table 3, in 13 out of 14 datasets, RDP that uses the random distance prediction loss and the optional novelty loss performs better than the ablated RDP versions, RDP\\\\Lrdp and RDP\\\\Laux, which remove either the random distance prediction loss or the optional novelty loss; similarly, in at least 4 out of 5 clustering datasets, RDP that uses the random distance prediction loss and the optional reconstruction loss performs better than the ablated RDP versions, RDP\\\\Lrdp and RDP\\\\Laux, which removes either the random distance prediction loss or the optional novelty loss. Therefore, the optional losses generally have an important contribution to further improve RDP.\\n\\n4. The sensitivity of RDP w.r.t. different underlying network architectures. We have added additional experimental results to show the sensitivity of RDP w.r.t. different network architectures in Figures 2-4 in Appendix F. We performed the sensitivity test by varying a key component of the network, the number of units in the feature learning layer (i.e., the representation dimension in the new space). Our results show that RDP performs stably with a wide range of representation dimension options in both anomaly detection and clustering tasks.\\n\\n5. Comparison to t-SNE and UMAP. The t-SNE and UMAP methods are very sensitive to hyperparameters. Some best results using UMAP in clustering are provided in Table 1 below. t-SNE works less effectively than t-SNE. Similar to t-SNE and UMAP, HLLE is also one popular manifold learning methods for dimension reduction. We empirically found that HLLE works generally better than t-SNE and UMAP in our experiments. So, we reported the results of HLLE in the paper only. 
We believe that t-SNE and UMAP work best in preserving proximity information in extremely lower (e.g., 2-D or 3-D) dimensional space, so they are more popular in data visualisation.\", \"table_1\": \"F-score performance of K-means on the UMAP, HLLE and RDP projected spaces.\\n+---------------------------------------------------------------------------+\\n| Data | UMAP | HLLE | RDP |\\n+---------------------------------------------------------------------------+\\n| R8 | 0.085 \\u00b1 0.000| 0.085 \\u00b1 0.000 | 0.360 \\u00b1 0.055 |\\n+---------------------------------------------------------------------------+\\n| 20news| 0.054 \\u00b1 0.003 | 0.007 \\u00b1 0.000 | 0.119 \\u00b1 0.006 |\\n+---------------------------------------------------------------------------+\\n| Olivetti | 0.164 \\u00b1 0.009| 0.684 \\u00b1 0.024 | 0.638 \\u00b1 0.026 |\\n+---------------------------------------------------------------------------+\\n| Sector | 0.024 \\u00b1 0.001 | 0.062 \\u00b1 0.001 | 0.191 \\u00b1 0.007 |\\n+---------------------------------------------------------------------------+\\n| RCV1 | 0.341 \\u00b1 0.000 | 0.342 \\u00b1 0.000 | 0.572 \\u00b1 0.003 |\\n+---------------------------------------------------------------------------+\"}",
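[Editor's sketch] The random-distance-prediction idea discussed in these replies can be written compactly: fix a random projection A, then train the feature network so that pairwise distances in its output space match those in the randomly projected space. A minimal PyTorch-style sketch under that reading; the paper's exact formulation (e.g., predicting inner products rather than distances, plus the optional auxiliary losses) may differ, and all names are illustrative:

```python
import torch

def rdp_loss(phi, x, A):
    """Random-distance-prediction objective (sketch): train phi so that
    pairwise distances in its output space match those in the space
    produced by the fixed random projection A."""
    z = phi(x)                      # learned representations, shape (n, m)
    y = x @ A                       # fixed random projection, shape (n, k)
    d_pred = torch.cdist(z, z)      # distances in the learned space
    d_rand = torch.cdist(y, y)      # approximately preserved target distances
    return ((d_pred - d_rand) ** 2).mean()
```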
"{\"title\": \"Concerns addressed (relation to regular distance preserving methods, theoretical analysis, typos)\", \"comment\": \"Thank you for your constructive comments that help substantially improve our paper. Your concerns are addressed as follows.\\n\\n1.1: Our method actually takes one significant step further w.r.t. many regular distance preserving methods. Regular methods well preserve a large amount of local proximity information, but also often preserve misleading proximity when their underlying assumption is inexact for a given dataset. By minimising the difference between the predicted distance and the predefined distance yielded by the regular distance preserving method (e.g., random projection), our method essentially leverages the preserved data proximity and the power of neural networks to learn globally consistent local proximity (e.g., the genuine proximity information) and rectify the inconsistent proximity information (e.g., the inaccurate ones due to the inexact assumption) in a new space. Therefore, our method is equivalent to optimise the given proximity information using imperfect supervision information. Our method learns a significantly improved feature space out of the original distance preserving space when the genuine proximity information provided by the regular distance preserving method is sufficient. Therefore, our method is built upon regular distance preserving methods to learn better optimised proximity information for more expressive feature representations. This is the key driving force to enable our method to achieve substantially better performance than the counterparts that work on the original data space or the regular distance preserving methods-based spaces. We have rewritten the introduction and Section 3.3 to highlight this point.\\n\\n1.2: Yes, it is true that our method RDP requires distance information as the supervision to perform the feature learning, but, as demonstrated in the clustering experiments where text and image datasets are used, our method can work very effectively in handling text/image data. In our paper we use simple data transformation (such as bag-of-words model for text data and treating each pixel as a feature unit for image data) to convert the raw text/image data into feature vectors, and then use our random distance prediction method to perform the feature learning. Our extensive results show that this simple transformation can enable our method to effectively handle raw data that is not structured in nature.\\n\\nHow is the performance of RDP compared to those advanced representation learning methods that are specifically designed for raw text/image datasets? This is the missing part of our paper. So, we perform an empirical comparison between RDP and these advanced representation learning methods to answer this question. The results are now added in Tables 12 and 13 in Appendix G in the refined version of our paper. We brief our empirical findings as follows. In general, the advanced representation methods Doc2Vec (Le & Mikolov, ICML 2014) and RotNet (Gidaris et al., ICLR 2018) are respectively used as the competing method for learning representations of raw texts and images. Our empirical results show that RDP with simple transformation can perform very comparable to, or substantially better than, Doc2Vec and RotNet. Also, we derive Doc2Vec+RPD (RotNet+RDP) that works on the dense vectors yielded by Doc2Vec (RotNet). 
Our results show that Doc2Vec+RPD (RotNet+RDP) can also achieve significant improvement over Doc2Vec (RotNet). Doc2Vec and RotNet may perform better if they are properly pretrained. In such cases, Doc2Vec+RPD (RotNet+RDP) may obtain further improvement, as we demonstrated in Tables 12 and 13. Therefore, we believe that RDP can also be an important approach to learn representations of those raw data.\\n\\n2. The theoretical analysis in Sections 3.1 and 3.2 is to show the proven distance preserving properties of random projection. We agree that, as we stated in the paper, the used properties are indeed from the previous works, for which we put clear references and statements. However, these proven properties provide critical theoretical foundation to the proposed method. We have refined Section 3 to highlight this point. Particularly, Since the random distance information is used as the supervisory signal in our proposed method RDP, our method can work only if these random distances need to preserve original proximity information. The proven properties in Sections 3.1 and 3.2 show that either linear or non-linear random projection methods can be used to obtain such preserved distances efficiently. So, they provide strong theoretical motivation and proven support for the use of random distances as a reliable supervisory signal in RDP. In Section 3.3, we unfold RDP and link it to supervised learning with imperfect supervision information, which provides an aspect for understanding why RDP can learn better representation space out of its input space.\\n\\n3. Thank you for pointing out the typo. We have fixed this typo and a number of other writing issues.\"}",
"{\"title\": \"Concerns addressed (contribution statement, relation between the main and auxiliary losses, computational efficiency, selection of auxiliary loss, framework description, typos)\", \"comment\": \"Thank you for the positive and constructive comments, which helps substantially enhance the paper. We addressed your concerns/suggestions as follows.\\n\\n1. Systematic description of the authors\\u2019 major contribution. We have rewritten the last part of Section 1 to provide a more systematic description of our main contributions in this work. The three main contributions are now read as: (1) We propose a random distance prediction formulation, which is very simple yet offers a highly effective supervisory signal for learning expressive feature representations that optimise the distance preserving in random projection. The learned features are sufficiently generic and work well in enabling different downstream learning tasks. (2) Our formulation is flexible to incorporate task-dependent auxiliary losses that are complementary to random distance prediction to further enhance the learned features, i.e., features that are specifically optimised for a downstream task while at the same time preserving the generic proximity as much as possible. (3) As a result, we show that our instantiated model termed RDP enables substantially better performance than state-of-the-art competing methods in two key unsupervised tasks, anomaly detection and clustering, on 19 real-world high-dimensional tabular datasets.\\n\\nWe also reorganised Section 2 into two subsections, with one subsection to introduce the proposed random distance prediction idea and another subsection to introduce the incorporation of the optional loss. This is aligned to our stated contributions. \\n\\n2. The relation between \\u201crandom distance prediction loss\\u201d and \\u201ctask-dependent auxiliary loss\\u201d. The task-dependent auxiliary loss provides complementary information to the random distance prediction loss. Specifically, random distance prediction optimises the preserved local proximity information, while the task-dependent auxiliary losses, including reconstruction loss and the novelty loss, learns global features that are important to a given unsupervised task. We explicitly highlight this complementary relation in both Sections 1 and 2 in the refined paper.\\n\\n3. Solutions and references about how to choose the task-dependent loss L_aux. We didn\\u2019t find useful references for guiding the choice of the task-dependent loss. Our intuition of choosing these auxiliary losses are as follows. The reconstruction loss is an effective loss to learn globally consistent features which are generally important to clustering; the novelty loss is devised to learn the frequency of underlying patterns in the data, so it is critical to anomaly detection as anomalies often correspond to rare events. It is interesting and of great importance in real-world applications if we could have some methods to learn how to choose auxiliary losses to further enhance particular models. We will study this problem in our future work.\\n\\n4. Several problems in Figure 1: more description; shadow part; legend size. We have rewritten the caption of Figure 1 to provide sufficient description for readers to have a quick sense of the insight and process of our model. The previous shadow part was to highlight the random distance. We now remove the shadow to avoid distraction. 
The figure in the lower right position is now resized with readable axes and legend. We also provide a description in the figure caption to explain how the figure was created.\\n\\n5. Computational efficiency of the proposed method. The training of our method is generally comparably fast to the two recently proposed deep methods REPEN and DAGMM. For example in the high-dimensional and large-scale anomaly detection dataset Census, our method takes 3780 seconds, REPEN takes 3093 seconds, DAGMM takes 7891 seconds, and AE and RND take about 1400 seconds. So, the training efficiency of our method is around the middle among the competing deep methods. The traditional method iForest generally has better efficiency, e.g., it does not involve any optimisation and takes about 155 seconds to build their model on Census. Nevertheless, since training time can vary significantly using different training strategies in deep learning-based methods, it is difficult to have a fair comparison of the training time. Moreover, the models can often be trained offline. Thus, we focus on comparing the runtime at the testing stage, which is more important in real-world applications. The comparison of the testing runtime for both tasks is provided in Tables 10 and 11 in Appendix F. Our results show that our method is generally comparably fast to the most efficient competing method.\\n\\n6. Typos in Page 4. We have fixed the typos.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"###### Overall Recommendation\\nI vote for the \\u201cWeak Accept\\u201d decision for this paper. \\n\\n### Summary\\nThis paper introduces a novel model termed Random Distance Prediction model, it can predict data distances in randomly projected space. The distance in projected space is used as the supervisory signal to learn representations without any manually labeled data, avoiding the concentration and inefficiency problem when dealing with high-dimensional tabular data. Their main contribution is extending the random distance in projected spaces to approximate the original distance information of the hight-dimensional tabular data effectively. Overall, the idea in this paper is interesting and effective, the experiment results in two typical unsupervised tasks (anomaly detection and clustering) also look very promising. However, the writing sometimes has unclear descriptions, given these clarifications in an author's response, I would be willing to increase the score.\\n\\n### Strengths\\n1. The illustration of the authors' idea is clear and concise.\\n2. The theoretical analysis of the proposed method is solid and systematic, the validity of the subparts have been proven previously.\\n3. The experiment part is well organized. The RDP model are compared with several state-of-the-art unsupervised learning methods in 19 real-world datasets of various domains. The experimental setup is solid with realistic considerations, the results are very convincing and promising.\\n4. This paper provides sufficient detail for reproducing the results.\\n\\n### Weaknesses\\nLack of systematic description of the authors\\u2019 major contribution. The proposed model looks more like a combination of previous conclusions, which makes readers feel the core parts of this paper build heavily on previous work.\\n\\n### Questions\\n1. What is the relation between \\u201crandom distance prediction loss\\u201d and \\u201ctask-dependent auxiliary loss\\u201d?\\n2. Are there any solutions and references about how to choose the task-dependent loss L_aux? \\n3. Why you shade the second part of the loss function in Figure 1; \\n4. How long is it for training the proposed model and getting the experiment results? Does the RDP model still outperform the other algorithms?\\n\\n### Suggestions to improve the paper\\n1. It would be better to reorganize Section 1 and Section 2, please describe the contribution in a more systematic way.\\n2. Add details for the architecture of the model, please give more descriptions about Figure 1.\\n3. It might also help to add an algorithm comparison box for the test time for the proposed method.\\n\\n### Minor Edit Suggestions\\n1. It would be better to give more descriptions about Figure 1; the lower right part in Figure 1 is not explained in the caption; the shadow part in Figure 1 is not precise.\\n2. Figure 1 was bad organized, please make the legend readable size. \\n3. I don't think there exist the proofs of Eqns. (2)-(4) in the reference paper (Vempala, 1998), which was written in Page 4. The number should be revised.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a method of unsupervised representation by transforming a set of data points into another space while maintaining the pairwise distance as good as possible. The paper is well structured with background literatures, formula, as well as experiments to show the advantage of the proposed method. I find it generally interesting, with the following major concerns.\\n\\n1. Representation or dimension reduction? If the original space is a structured space like Euclidean space, then effectively this paper's method coincides with regular distance preserving method in dimension reduction, and Johnson-Lindenstrauss theories. If the original space is not structured or doesn't naturally have a good distance measure, then the proposed method cannot work. For example, if the original dataset is a set of documents, and the task is to do representation learning to convert each document into a compact vector. However, there's no good distance metric for the document space. If TF-IDF is used, then the representation space also inherits TF-IDF type features which is not desired. If more advanced similarity is used for the document space, then the role of representation learning is not essential anymore as that similarity measure can already help the downstream tasks.\\n\\n2. Section 3 the theoretical analysis. This part seems like a collection of previous works and contains minimal information about the proposed method.\\n\\n3. Some writing issues, like page 4 line 7 about the equation numbering.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors discuss the a novel technique, called Random Distance Prediction, to learn rich features from domains where massive data are hard to produce; in particular, they focus on the two tasks of anomaly detection and clustering.\\nThe paper is well written and understandable by a non-specialistic audience; introduction and references are adequate, and the theoretical analysis is reasonably explained, although the two optional losses should have been discussed more deeply.\\nResults are fairly supporting the authors' claim: however, the improvement in performances w.r.t. alternative approaches are limited in most cases, and the contribute of the optional losses is somehow unclear and inconsistent across different datasets. It would be interesting also to check how stable is the method (i.e. the losses) for different underlying network architectures, and also how RDP compares to very basic approaches employing dimensionality reduction algorithms such as t-SNE or UMAP.\"}"
]
} |
ryg8WJSKPr | ConQUR: Mitigating Delusional Bias in Deep Q-Learning | [
"DiJia-Andy Su",
"Jayden Ooi",
"Tyler Lu",
"Dale Schuurmans",
"Craig Boutilier"
] | Delusional bias is a fundamental source of error in approximate Q-learning. To date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates. In this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are "consistent" with the underlying greedy policy class. We introduce a simple penalization scheme that encourages Q-labels used across training batches to remain (jointly) consistent with the expressible policy class. We also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature (implicit) policy commitments. Experimental results demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically. | [
"reinforcement learning",
"q-learning",
"deep reinforcement learning",
"Atari"
] | Reject | https://openreview.net/pdf?id=ryg8WJSKPr | https://openreview.net/forum?id=ryg8WJSKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SG32SGRk3R",
"ryl0605msH",
"SJlzn09QsS",
"S1lVB0cmir",
"SkxHeCc7ir",
"BklDpulB9r",
"BkxgvOhitH",
"H1gFcoFBYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726113,
1573265093698,
1573265066354,
1573264955604,
1573264876944,
1572305086882,
1571698776090,
1571294097170
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1544/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1544/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1544/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1544/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1544/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1544/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1544/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"While there was some support for the ideas presented, the majority of reviewers felt that this submission is not ready for publication at ICLR in its present form.\\n\\nConcerns raised included the need for better motivation of the practicality of the approach, versus its computational cost. The need for improved evaluations was also raised.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3 (Part 1 of 2)\", \"comment\": \"Thank you for the constructive feedback and for the detailed questions regarding our experiments. Some brief responses to each of your numbered points in turn.\\n\\n1. [WHY ORDER OF MAGNITUDE CHANGE] The key difference between (1) the consistency-penalty experiment and (2) the full ConQUR experiments is that the former maintains a single Q-regressor, while the latter maintains multiple Q-regressors. Thus in setting (1), if one makes strong policy commitments early in training, they cannot be undone (there is no search or \\u201cbacktracking\\u201d). In such a case, we want to be less stringent in enforcing policy commitments. In setting (2), we can be more aggressive in enforcing policy commitments, since if they induce poor performance, alternative hypotheses are in play. (In principle, with \\u201cexhaustive\\u201d search, per Lu et al. 2018, this will find the optimal policy-consistent value function.) Nevertheless, Fig. 11 on p. 20 (Appendix D.3) shows lambda=1, 10 performs the best (or comparably). Larger lambdas are not necessary.\\n\\n[SELECTING LAMBDA] Selecting a reasonable fixed lambda is similar to selecting regularization parameters in supervised learning\\u2014-cross-validation or other approaches may be used.\", \"annealing\": \"lambda is gradually increased from 0 to the final value since we do not wish to over-constrain the Q-regressor with potentially bad policy commitments near the start of training. We chose a simple schedule: lambda = final_value * step / (step + 200k), which reaches half of the final value at step 200k. We will elaborate on this in the paper. Other ways of tuning are of course possible.\\n\\n2. We agree with this point and will make revisions accordingly. We will update our figures in the revised paper to include the mean score over reruns (most games are re-run with 3-5 random trials) and error bars of the 95% confidence interval. (An updated version of the main figures can be seen here: https://tinyurl.com/ryzyhrr ). Our conclusions about ConQUR are not impacted by this:\\n\\nDQN(lambda = 0.5): 10 wins, 3 losses, 6 inconclusive (see dqn_reg_0_5.pdf)\\nDDQN(lambda = 0.5): 9 wins, 2 losses, 8 inconclusive (see ddqn_reg_0_5.pdf)\\n\\nThis suggests using a single soft-penalty constant generally does not hurt performance and can improve over baseline in a non-trivial fraction of the Atari environments.\\n\\n3. As discussed in Question 2 above, lambda=0.5 works well across all tested games even without taking the max over runs). In general, Q-learning (with or without a consistency penalty) behaves differently in different environments, thus games will have their own optimal penalty constants. Practitioners often select good hyperparameters for their particular task/game (see above comment on selecting hyperparameters), and this is often seen in the literature. That said, we will make clearer the role/value of using just a single, fixed lambda.\\n\\n4. As above, we agree with this general point on statistical significance (we discuss details below on how we address this).\\n\\n[HYPERPARAMETER TUNING] The full set of ConQUR hyperparameters (fixed across all games) is shown in Table 6, Appendix D. There are 7 additional hyperparameters beyond those used in standard DQN hyperparameters (e.g. eps_train, discount factor, etc., which we match to standard values used in DQN implementations in the Atari RL literature). 
Five of the seven hyperparameters relate to easy-to-understand search tree parameters, including branching factor, etc. The two remaining hyperparameters (Boltzmann iteration, training calibration parameter for scoring function) will need additional insight and potential tuning (e.g., see our discussion in response to review 2 regarding parameter lambda), but should not be more cumbersome to tune than those of other deep learning architectures. Due to GPU resource limitations, we did not explore the full range of hyperparameter combinations.\\n\\n[DESCRIPTION OF RESULTS] First, one brief clarification: statements such as \\u201cConQUR wins by a 10% margin\\u2026\\u201d are not intended to be statistical claims; rather, they are descriptions of the larger table of results from the appendix. Per the point raised by reviewer 1, we will attempt to get the full table of results into the main text so some of this descriptive text can be condensed.\\n\\n[EVALUATION METRICS] Our evaluation metric (game score) is the standard for Atari RL benchmarks. We agree with your point about the statistical validity of our conclusions. We have results evaluated using 5 random seeds per game, and will include the results in the revised paper (3 to 5 random seeds is standard across Atari RL papers). The 5-seed results, showing means and 95% confidence intervals, can be seen at: https://tinyurl.com/yz5xy5ox . We were unable to run on multiple seeds prior to submission due to limited GPU resources; our apologies for that.\"}",
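As an aside for readers implementing this, the annealing schedule quoted in the response above (lambda = final_value * step / (step + 200k)) is simple to reproduce. Below is a minimal sketch; the 200k half-life is the example constant from the response, not a claim about the paper's tuned values:

```python
# Minimal sketch of the lambda annealing schedule described in the response
# above. The 200k half-life is the example constant quoted there; in practice
# it would be tuned like any other hyperparameter.

def annealed_lambda(step: int, final_value: float, half_life: int = 200_000) -> float:
    """Anneal the consistency-penalty coefficient from 0 toward final_value."""
    return final_value * step / (step + half_life)

if __name__ == "__main__":
    for step in (0, 50_000, 200_000, 1_000_000):
        print(step, round(annealed_lambda(step, final_value=10.0), 3))
    # prints 0.0, 2.0, 5.0 (half of 10 at the half-life), 8.333
```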
"{\"title\": \"Response to Review #3 (Part 2 of 2)\", \"comment\": \"[STATISTICAL SIGNIFICANCE TESTING] While it is not standard in the Atari RL literature to perform statistical significance tests, we ran a Welch\\u2019s t-test on performance from iteration 40 to 100, and obtained the following results:\\n\\nWith lambda=10: out of 59 games, 40 games give statistically significant difference (p-val < 0.05) between ConQUR and the baseline. In 31 games, ConQUR is significantly better, while in 9 games the baseline is.\\n\\nWith lambda=1: 31 give statistically significant difference (p-val < 0.05) between ConQUR and the baseline. In 28 games, ConQUR is significantly better, while in 3 games the baseline is.\", \"we_also_ran_a_one_sample_t_test_to_compare_against_pre_trained_dqn\": \"With lambda=10: out of 59 games, 51 games give statistically significant difference (p-val < 0.05) between ConQUR and the pre-trained DQN. In 44 games, ConQUR is significantly better, while in 7 games the pre-trained DQN is.\\n\\nWith lambda=1: 53 give statistically significant difference (p-val < 0.05) between ConQUR and the pre-trained DQN. In 43 games, ConQUR is significantly better, while in 10 games the pre-trained DQN is.\\n\\n\\n5. Thanks for raising this point about whether our methods are providing improvements because of delusion mitigation or for other reasons. We have provided a detailed response to review 2 that addresses this point: the attribution of improved performance is effectively \\u201cby definition\\u201d due to the (partial) removal of delusional bias. We acknowledge that this point should have been made more explicitly in the paper and we will clarify in revision. \\n\\nRegarding \\u201capproximations\\u201d to consistency: we assume this refers to the soft-consistency penalty only (there are no other approximations other than limiting the search to a subset of possible action assignments). In the paper we explain that the soft consistency penalty measures the degree to which consistency constraints are satisfied: full consistency incurs no penalty while the penalty increases linearly in the degree of violation (other penalty functions are possible of course). Your question also suggests that understanding how much more or less stringent consistency enforcement impacts induced policy quality is important---we agree fully. Our experiments with different values of lambda get partly at this. Evaluating soft consistency vs. *exact* consistency is more challenging in larger domains like Atari due to runtime bottlenecks (large linear programs for linear approximators and solving NP-hard classification problems for DNNs). But we can do so on smaller toy domains (see next paragraph).\\n\\nWe do have results on the simple MDP of Lu et al. with a simplified ConQUR algorithm (with exact consistency checking) and can include these in the paper (we will do so in an appendix). Your suggestion to test how (different degrees of) soft consistency impact the final result vis-a-vis exact consistency is a very nice one, and we will test and explicate this on the simple MDP of Lu et al, or another small example.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for the constructive feedback and for raising some important questions. Some brief responses to specific points/questions you raise.\\n\\n[DOES DELUSION ARISE IN PRACTICE?] The purpose of the experiments is to show that mitigating delusional bias, even with high-capacity NNs, can offer improvements. We believe the experiments show that delusional bias does occur in practice since the pre-trained Q-regressors upon which we improve are Dopamine-trained DQNs/DDQNs. Our methods differ only from (say) DQN in the use of the soft-consistency penalty (plus the use of search to explore multiple assignments against which to apply this penalty). We claim that this tackles only \\u201cpolicy inconsistency\\u201d (i.e., delusion). Because we obtain improvements over the pre-trained DQNs, our conclusion is that delusion does, indeed, arise in practice. In retrospect, we should have made this important point much more explicit in the paper---our apologies for not doing so originally---and we will do so in revision.\\n\\nOur experiments, we believe, do not demonstrate the full power of removing delusion, since we only retrain the last FC layer of the pre-trained DQN, which in fact limits our performance opportunities vs. training on all layers---this alone results in significantly better greedy policies in many instances.\\n\\n[WILL DELUSION ARISE IN POLICY ITERATION?] This is a good point, and while it may depend on the implementation, generally policy iteration will not have delusional bias. However, our contribution is focused on improving \\u201cpure\\u201d value-based methods like Q-learning (and related methods like DDQN). These are widely used algorithms, that researchers and practitioners often have strong reasons to use---our focus is to mitigate delusional bias to extract maximum value from such methods whenever they are used.\\n\\n[WHY USE PRE-TRAINED NETWORKS]? The rationale for improving with pre-trained DQNs is three-fold.\\n\\nFirst, it demonstrates that delusion actually causes problems in practice (as discussed above, we will articulate this point much more explicitly in revision). In some sense, by freezing the feature representation learned by DQN, and demonstrating that a \\u201clinear\\u201d value function over those same features can be trained in (partially) non-delusional fashion to extract improvements gives more of a focus on non-delusional training (as opposed to novel \\u201cfeature discovery\\u201d).\\n\\nThe second reason is a practical one---it allowed us to scale our experiments to cover a range of hyperparameters and run the entire Atari suite (rather than selecting just a few high-performing games). We completely agree with your broader point about experimenting with our methods with full network training (i.e., from scratch) to understand their performance. In some sense, this paper provides a (we hope, compelling) first exploration of these ideas.\\n\\nThird, from a practical point of view, this \\u201clinear tuning\\u201d approach offers a relatively inexpensive alternative to extract improvements from a model learned using classic techniques (e.g., linear tuning requires many fewer training samples).\\n\\nWe also note that, if the full application of ConQUR is too expensive in some settings, adding a our simple consistency penalty can sometimes provide a lift (and rarely hurts), as shown by the experiments in Section 4.1. 
This requires no major changes to standard DQN, DDQN, or the like, and adds no significant implementation complexity or computational cost.\\n\\n[AUXILIARY LOSSES] Thanks for making the reference to auxiliary losses; this is an interesting question. While our penalty term focuses on consistency for a single task and auxiliary losses help accelerate learning for the main task, we can imagine applying a consistency penalty for each auxiliary objective (in addition to minimizing the task\\u2019s Bellman error). This is an interesting direction to explore, and we will cite this work in the revised paper.\"}",
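The exact form of the soft-consistency penalty is not reproduced in this thread; the sketch below is one plausible hinge-style reading of the description above (zero under full consistency, linear in the degree of violation) and should not be taken as the paper's definitive loss:

```python
# One plausible hinge-style soft-consistency penalty, consistent with the
# description above: zero when the committed actions are greedy-consistent,
# growing linearly with the degree of violation. A reader's sketch only.
import numpy as np

def consistency_penalty(q_values: np.ndarray, assigned_actions: np.ndarray) -> float:
    """q_values: (batch, num_actions) array of Q(s, .);
    assigned_actions: (batch,) action assignment the regressor is committed to."""
    q_assigned = q_values[np.arange(len(q_values)), assigned_actions]
    violation = q_values.max(axis=1) - q_assigned   # >= 0 by construction
    return float(violation.mean())

# total_loss = bellman_error + lam * consistency_penalty(q, sigma)  # lam annealed as above
```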
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for the constructive feedback. Some brief responses:\\n\\nWe will try to fit in the results of Table 4 into the main text---if this is not feasible due to space constraints, we\\u2019ll include a comprehensive summary as you suggest.\\n\\nWith respect to other algorithms (A3C, PPO, TRPO, etc.) we will add a brief comparison to such algorithms in the literature, thanks for the suggestion. While we don\\u2019t make a broad claim that removing delusion will make Q-learning competitive universally---we believe some domains may be better suited to policy-based or actor-critic-style algorithms---it does in a number of the cases examined here. However, there are many instances where Q-learning is desirable for other reasons, and our primary aim is to try to extract maximum performance from Q-learning itself. We will make this part of our motivation more explicit.\\n\\nWe will make Fig. 2 more readable---apologies for that!\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper focuses on addressing the delusional bias problem in deep Q-learning, and propose a general framework (ConQUR) for integrating policy-consistent backups with regression-based function approximation for Q-learning and for managing the search through the space of possible regressors. Specifically, it proposes a soft consistency penalty to alleviate the delusional bias problem while avoiding the expensive exact consistency testing. This penalty encourages the parameters of the model to satisfy the consistency condition when solving the MSBE problem.\", \"pros\": \"The soft penalization itself is already shown to be effective in further improving any regression-based Q-learning algorithms (e.g., DQN and DDQN). When combining the soft consistency penalty and the search procedure, it is shown to significantly outperform the DQN and DDQN baselines, which is impressive. This work presents novel idea and solid supporting experiments.\", \"cons\": \"The major experimental results of the paper are in Table 4 of Appendix D. This is not a good effort to save space by moving the most important results into appendix. At least a selected set of results should be presented in the main paper rather than in appendices. One alternative approach is the authors plot a bar figure to demonstrate the performance of different algorithms on different Atari games.\", \"other_comments\": \"\\u2022\\tFig. 2 two is a bit hard to read due to too many curves.\\n\\u2022\\tStandard baseline results other than DQN and DDQN should also be listed, in order to demonstrate that solving delusional bias could make Q-learning more competitive than alternatives (e.g., A3C, PPO, TRPO).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a solution to tackling the problem of delusional bias in Deep Q-learning, building upon Lu et.al. (NeuRIPS 2018). Delusional bias arises because independently choosing maximizing actions at a state may be inconsistent as the backed-up values may not be realizable by any policy. They encourage non-delusional Q-functions by adding a penalty term that enforces that the max_a in Q-learning chooses actions that do not give rise to actions outside the realizable policy class. Further, in order to keep track of all consistent assignments, they pose a search problem and propose heuristics to approximately perform this search. The heuristics are based on sampling using exponentiated Q-values and scoring possible children using scores like Bellman error, and returns of the greedy policy. Their final algorithm is evaluated on a DQN and DDQN, where they observe some improvement from both components (consistency penalty and approximate search).\\n\\nI would lean towards being slightly negative towards accepting this paper. However, I am not sure if the paper provides enough evidence that delusional bias is a very relevant problem with DQNs, when using high-capacity neural net approximators. Further, would the problem go away, if we perform policy iteration, in the sense of performing policy iteration instead of max Q-learning (atleast in practice)? Maybe, the paper benefits with some evidence answering this question. To summarize, I am mainly concerned about the marginal benefit at the cost of added complexity and computation for this paper. I would appreciate more evidence justifying the significance of this problem in practice. \\n\\nAnother comment about experiments is that the paper uses pre-trained DQN for the ConQur results, where only the last linear layer of the Q-network is trained with ConQur. I think this setting might hide some properties which arise through the learning process without initial pre-training, which might be more interesting. Also, how would other auxilliary losses compare in practice, for example, losses explored in the Reinforcement Learning with Auxilliary Tasks (Jaderberg et.al.) paper?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"A recent paper by Lu et al introduced delusional bias in Q-learning, an error due to the max in the Bellman backup not being consistent with the policy representation implied by the greedy operator applied to the approximated value function. That work proposed a consistent algorithm for small and finite state spaces, which essentially enumerates over realizable policies. This paper proposes an algorithm for overcoming delusional bias in large state spaces. The idea is to add to the Q-learning objective a smooth penalty term that induces approximate consistency, and search over possible Q-function approximators. Several heuristic methods are proposed for this search, and results are demonstrated in Atari domains.\\n\\nI found the topic of the paper very interesting - delusional bias is an intriguing aspect of Q learning, and the approach of Lu et al is severely limited to discrete and small state spaces. Thus, tackling the large state space problem is worthy and definitely not trivial. \\n\\nThe authors\\u2019 proposed solution of combining a smooth penalty for approximate consistency and search over regressors makes sense. The implementation of the search (Sec 3.4) is not trivial, and builds on a number of heuristics, but given the difficulty of the problem, I expect that the first proposed solution will not be straightforward.\\n\\nI am, however, concerned with the evaluation of the method and its practicality, as reflected by the following issues: \\n1. The method has many hyper parameters. The most salient one, \\\\lambda, the penalty coefficient, is changed between 0.25 to 2 on the consistency penalty experiment, and between 1 to 1000 in the full ConQUR experiments. I did not understand the order of magnitude change between the experiments, and more importantly, how can one know a reasonable \\\\lambda, and an annealing schedule for it in advance. \\n2. I do not understand the statistical significance of the results. For example, with the constant \\\\lambda=0.5, the authors report beating the baseline in 11 out of 19 games. That\\u2019s probably not statistically significant enough to claim improvement. Also, only one run is performed for each game; adding more runs might make the results clearer. \\n3. The claim that with the best \\\\lambda for each game, the method outperforms the baseline in 16 out 19 games seems more significant, but testing an optimal hyper parameter for each game is not fair. Statistically speaking, *even if the parameter \\\\lambda was set to a constant zero* for the 5 runs that the method is tested on, and the best performing run was taken for evaluation against the baseline, that would have given a strong advantage to the proposed method over the baseline\\u2026.\\n4. For the full ConQUR, there are many more hyper parameters, which I did not understand the intuition how to choose. Again, I do not understand how the results establish any statistically significant claim. 
For example, what does: \\u201cCONQUR wins by at least a 10% margin in 20 games, while 22 games see improvements of 1\\u201310% and 8 games show little effect (plus/minus 1%) and 7 games show a decline of greater than 1% (most are 1\\u20136% with the exception of Centipede at -12% and IceHockey at -86%)\\u201d mean? How can I understand from this that ConQUR is really better? Establishing a clearer evaluation metric, and using well-accepted statistical tests would greatly help the paper. At the minimum, add error bars to the figures!\\n5. While evaluating on Atari shows applicability to large state spaces, it is hard to understand from it whether the (claimed) advantage of the method is due to the delusional bias effect, or some other factor (like implicit regularization due to the penalty term in the loss). In addition, it is hard to understand the different approximations in the method. For example, how does the proposed consistency penalty approximate the true consistency? These could all be evaluated on the simple MDP example of Lu et al. I strongly advise the authors to add such an evaluation, which is easy to implement, and will show exactly how the approximations in the approach deal with delusional bias. It will also be easier to demonstrate the effects of the different hyper parameters in a toy domain.\"}"
]
} |
BkgHWkrtPB | Where is the Information in a Deep Network? | [
"Alessandro Achille",
"Stefano Soatto"
] | Whatever information a deep neural network has gleaned from past data is encoded in its weights. How this information affects the response of the network to future data is largely an open question. In fact, even how to define and measure information in a network entails some subtleties. We measure information in the weights of a deep neural network as the optimal trade-off between accuracy of the network and complexity of the weights relative to a prior. Depending on the prior, the definition reduces to known information measures such as Shannon Mutual Information and Fisher Information, but in general it affords added flexibility that enables us to relate it to generalization, via the PAC-Bayes bound, and to invariance. For the latter, we introduce a notion of effective information in the activations, which are deterministic functions of future inputs. We relate this to the Information in the Weights, and use this result to show that models of low (information) complexity not only generalize better, but are bound to learn invariant representations of future inputs. These relations hinge not only on the architecture of the model, but also on how it is trained. | [
"Information",
"Learning Dynamics",
"PAC-Bayes",
"Deep Learning"
] | Reject | https://openreview.net/pdf?id=BkgHWkrtPB | https://openreview.net/forum?id=BkgHWkrtPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6yBl0pwT7O",
"vfX6knnaa",
"Sylav-5hoH",
"BJgvM-q3jr",
"r1x4pAt2iB",
"HJgCRGf3sr",
"rker010oiH",
"BJgSc6ajsB",
"H1gk-6ajjS",
"HJlrFsTijr",
"Sye3oGRmcH",
"Hygat6E1qH",
"rJl_CN_DKr"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1579479795970,
1576798726085,
1573851492724,
1573851407001,
1573850812352,
1573819093901,
1573801932886,
1573801357248,
1573801207019,
1573800829245,
1572229796102,
1571929476599,
1571419344101
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1543/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1543/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1543/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1543/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Response to Paper Decision\", \"comment\": \"We are surprised and disappointed by the Area Chair's decision, and the process used to arrived at it, considering that all three reviews are positive, that we have responded to the Area Chair's own review \\u2013 itself an anomaly \\u2013 despite it being posted hours before the closing of the rebuttal period. Simple modifications that we could have made during the rebuttal period \\u2013 but were not allow to given the timing of the extra review \\u2013 would had addressed the objections. If the Area Chair truly still has substantive concerns, we invite him/her to reach out to us and we will be delighted to help him/her work through the logical arguments, and the assumptions they are actually based on.\\n\\nFor now, we respond to each comments in-line, which are reflected in the updated version to be posted on ArXiv.\\n\\n>> A logical argument is only as strong as its weakest link, and I believe the current paper has some weak links. For example, the attempt to tie the behavior of SGD to free energy minimization relies on unrealistic approximations.\\n\\nWe assume the Area Chair refers to the approximation of the noise in SGD as being isotropic in the proof of Proposition 3.6. Apart from the fact that the anisotropic dynamics of SGD have only been worked out recently for the one-dimensional case, as we have already mentioned in the rebuttal, the actual numbers of the escape rate will surely change, but the statement of the claim, concerning the fact that the probability of escape will be higher for high-curvature wells than for low-curvature, is still true.\\n\\n>> Second, the bounds based on limiting flat priors become trivial. \\n\\nWe are not sure of what bounds the Area Chair refers to, since we never compute any bounds that are based on limiting flat priors, and indeed argue in the rebuttal that doing so would be nonsensical. We assume the AC means the limit $\\\\lambda \\\\to \\\\infty$ in the expression of the Fisher, to which we respond next.\\n\\n>> In part, the authors argue that the logical argument they are making is not sensitive to certain issues that I raised, but this only highlights for me that the argument being made is not very precise. \\n\\nWe assume the Area Chair refers to the constant term in the Fisher Information, which becomes infinite as the prior becomes improper. For any constant value of $\\\\lambda$ this term is not present in the gradients or in the difference of the free energies, nor does it have any effect on optimization, which is what matters in the analysis. While $\\\\lambda$ surely affects the numerical value of the Information in the Weights, this is no different than what happens in defining differential entropy as a limit of the KL divergence with an improper uniform prior, which similarly leads to a diverging term which is ignored without drama in many situations.\\n\\n>> I can imagine a version of this work with sharper claims, built on clearly stated assumptions/conjectures about SGD's dynamics, RATHER THAN being framed as the consequences of clearly inaccurate approximations. \\n\\nIt appears the sticky point is again the use of isotropic noise in Proposition 3.6. Indeed, we state clearly that we assume isotropic noise, and we clearly acknowledge that SGD noise is not isotropic in deep networks. 
Knowingly using assumptions that are not satisfied in practice is not uncommon in analyzing real signals (e.g., the band-limited assumption in the classical Sampling Theorem), and is done because it makes the proof possible (as we already pointed out, anisotropic non-asymptotic analysis is in its infancy), and because it highlights underlying mechanisms concerning escape from sharp minima that are manifest regardless of isotropy. \\n\\n>> The behavior of diffusions can be presented as evidence that the assumptions/conjectures (that cannot be proven at the moment, but which are needed to complete the logical argument) are reasonable. \\n\\nThe logical arguments we present are complete and the statements are falsifiable. Some readers may see a gap in extending conclusions drawn for stochastic optimization with isotropic noise to the anisotropic case. We do not share such concerns but, regardless, the claims are for the isotropic case, and they are valid with no logical gaps.\\n\\n>> However, I am also not convinced that it is trivial to do this, and so the community must have a chance to review a major revision.\\n\\nThe changes needed to address these objections are simple, and do not require changing any of the claims or proofs. We do not wish to put weight on the technicalities \\u2013 for instance, computing the exact escape time, which would require assuming anisotropic noise \\u2013 since this paper is about properly defining a notion of information in the weights, its relation to optimization, and its relation to the information in the activations. However, others are welcome to conduct the analysis in the anisotropic case, which is well beyond the scope of our paper. Doing so would take the understanding of information in deep networks one further step forward.\"}",
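The flat-versus-sharp escape claim debated above is easy to probe numerically in the isotropic setting the authors say Proposition 3.6 assumes. The toy simulation below is a reader's sketch with arbitrary constants: it runs overdamped Langevin dynamics in quadratic wells of different curvature and counts steps until a fixed energy barrier is crossed.

```python
# Toy probe of the claim above: under isotropic noise, a noisy gradient
# process escapes a sharp quadratic well (large curvature H) faster than a
# flat one, for the same barrier height. All constants are illustrative.
import numpy as np

def mean_escape_steps(H, barrier=0.5, lr=1e-3, temp=0.25, trials=100, max_steps=100_000):
    rng = np.random.default_rng(0)
    steps = []
    for _ in range(trials):
        w = 0.0
        for t in range(max_steps):
            w += -lr * H * w + np.sqrt(2 * lr * temp) * rng.standard_normal()
            if 0.5 * H * w * w >= barrier:   # iterate reached the barrier energy
                steps.append(t)
                break
        else:
            steps.append(max_steps)          # censored: never escaped in time
    return float(np.mean(steps))

for H in (1.0, 10.0):
    print(f"curvature H={H}: mean escape steps ~ {mean_escape_steps(H):.0f}")
```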
"{\"decision\": \"Reject\", \"comment\": \"This paper is full of ideas. However, a logical argument is only as strong as its weakest link, and I believe the current paper has some weak links. For example, the attempt to tie the behavior of SGD to free energy minimization relies on unrealistic approximations. Second, the bounds based on limiting flat priors become trivial. The authors in-depth response to my own review was much appreciated, especially given its last minute appearance. Unfortunately, I was not convinced by the arguments. In part, the authors argue that the logical argument they are making is not sensitive to certain issues that I raised, but this only highlights for me that the argument being made is not very precise. I can imagine a version of this work with sharper claims, built on clearly stated assumptions/conjectures about SGD's dynamics, RATHER THAN being framed as the consequences of clearly inaccurate approximations. The behavior of diffusions can be presented as evidence that the assumptions/conjectures (that cannot be proven at the moment, but which are needed to complete the logical argument) are reasonable. However, I am also not convinced that it is trivial to do this, and so the community must have a chance to review a major revision.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to the Area Chair (part 3)\", \"comment\": \">> 4. What is Q? Having read the paper, I'm somewhat confused what the post-distribution Q is meant to be in this story. Is Q the distribution of the weights produced by SGD?\\n\\nIt is not. As we describe after Definition 3.1, Q is an arbitrary choice, corresponding to an encoding for the weights, irrespective of how the weights are obtained (in particular, throughout most of the paper, the weights are not assumed to be a random variable, but rather a fixed vector). To add color to our description in Sect. 3, informally, we are interested in how much information we need to encode the (fixed) weights of the network. Since the weights are a continuous vector, encoding them exactly would require an infinite amount of information (if using a continuous prior P, such as a Gaussian, for the encoding). However, encoding the weights exactly is pointless, as we know that it is possible to perturb the weights (which is often referred to as \\u201cadding noise\\u2019\\u2019 even if there is no stochastic process at play) without substantially increasing the risk. On the other hand, encoding \\u201cnoisy weights\\u201d can be done with a finite amount of information (for example, one could discretize the weights using the standard deviation of the noise along each parameter as part of the quantization process). For this reason, it is natural to think of the distribution Q as the \\u201camount of noise that could be added to the weights\\u201d (even if no noise is actually added to the weights), while not increasing the loss by more than a fixed amount; then, KL(Q||P) is the coding length for that particular set of weights, using the particular (arbitrary) choice of code specified by P and Q, when allowing a lossy compression with this noise. We did not belabor this point since a more formal argument to this effect has already been given by Hinton and Van Camp (1993), as we indicate in Sect. 1 and Sec. 3. However, we concur that the role of Q can be confusing, so we will add a discussion in the appendix to clarify.\\n\\n>> 5. Relationship with Xu-Raginsky.\\n\\nWe will revise the narrative to make sure we do not give the impression that we want to distance our results from the work of Xu and Raginsky, since this is not out intention. On the contrary, we are quite intrigued by the connections, which are also described in more detail in our prior answer to reviewers. We also agree that the PAC-Bayes and information stability bounds are related. What this work is trying to do is not to introduce new information bounds (we are happy to use either PAC-Bayes or Xu et al.), but rather to show how the learning dynamics affects the information bounds: Not just by decreasing the information contained in each gradient step because of the noise (an argument often exploited in the literature), but also through the geometry of the loss landscape (flat minima) and the \\u201cpath\\u201d stability of the algorithm. Moreover, we want to show that, in a DNN, low information does not solely mean better generalization (as those bounds already show), but also better properties of the learned representation. We do so by introducing the notion of effective information of the activations, which also tries to solve some formal issues with the Information Bottleneck theory for the study of the activations. 
\\n\\nWe are happy to further clarify these relationships in the body of the paper, also including the relation between PAC-Bayes and the bound of Xu and Raginsky, and related works, mentioned by the AC.\"}",
"{\"title\": \"Response to the Area Chair (part 2)\", \"comment\": \">> 2. Curvature assumption in Proof of Prop 3.4\\n\\nThe fact that the quadratic approximation is valid does not imply that the curvature is constant. It simply means that higher-order terms are negligible, which can happen while the curvature changes along the path. Proposition 3.4 gives the optimal value of the information up to a second order term. If the curvature is not constant around the minimum, as it is likely to be, the value of the information computed will be wrong by a term which will go to zero as $\\\\beta \\\\to \\\\infty$.\\n \\nWe will edit the proof to clarify where the second-order approximation is used, which is simply to estimate the term $\\\\mathbb{E}_{w\\\\sim q(w)}[L_\\\\mathcal{D}(w)]$ around the mean of $q(w)$ using the gradient and Hessian of $L_\\\\mathcal{D}$, and not in any way that requires the loss to be exactly quadratic near the minimum.\\n\\n>> 3. Informal \\\"summary\\\" of Prop 3.6\\n\\nThe point of the claim, where we refer to various stochastic optimization methods collectively as \\u2018SGD\\u2019, is that the stochasticity adds a diffusion/regularization term to the loss, so what is minimized is not the original loss, but a regularized one that has the form of free energy. Connecting the free energy to the long-term behavior and time-to-escape from minima is one of the main objectives of non-equilibrium dynamics and Kramer\\u2019s theory, and beyond the scope of a single conference paper. However, the informal take-away from Prop. 3.6 is that, when sharper and flatter minima with the same loss are connected, a noisy process will tend to escape the sharp minimum towards the flat minimum. This can be interpreted as the process moving in the direction of the minimum with lower free energy, where the free energy accounts for both the loss and the curvature. This argument could be made formal, but at the cost of readability. We prefer an informal summary, leaving to other papers (including some of those cited) to prove more formally.\\n\\n>> 3. Information in the weights under uninformative prior\\n\\nIndeed, point well taken, and we will clarify. Specifically, we do not intend to claim that,for $\\\\lambda \\\\to \\\\infty$, eq. 7 provides a valid generalization bound. Rather, the correct (finite) expression, for any finite choice $\\\\lambda$, is given by:\\n$$KL(p||q) = \\\\frac{\\\\|{w^*}\\\\|^2}{\\\\lambda^2}+ \\\\frac{1}{\\\\lambda^2} \\\\operatorname{tr}(\\\\Sigma) + k \\\\log \\\\lambda^2 - \\\\log |\\\\Sigma|-k + o(1), \\\\quad(*)$$\\nwhere $\\\\Sigma=\\\\frac{\\\\beta}{2} (F + \\\\frac{\\\\beta}{2\\\\lambda^2} I)^{-1}$. The proof is identical to that in the appendix, simply without taking the limit wrt. lambda. Note that $F$ still plays the same role in this expression, which is just slightly more cluttered. What we meant to say by ignoring the constant term is that, if some variational optimization algorithm aims to minimize the KL term through a gradient descent process, the gradient it will receive will (under mild regularity assumptions) be well defined in the limit $\\\\lambda \\\\to \\\\infty$. \\n\\nAdmittedly, our choice notation and presentation aimed at simplicity was made at the expense of clarity, so we will rewrite the proposition using the expression (*), and change the statement accordingly.\"}",
"{\"title\": \"Response to the Area Chair\", \"comment\": \"We thank the Area Chair for the questions, which give us an opportunity to clarify possible misunderstandings. We give detailed responses to the points raised below, that reaffirm that the claims made are correct, but will add further color to the discussion, clarify the nomenclature for SGD and amend the statement of Prop. 3.4 to clear any possible confusion.\\n\\n>> 1. Isotropic noise assumption\\n\\nThe noise of SGD being non-isotropic is a known fact, as we describe at the end of page 3, and not a problem for our claims. There are only two claims that reference isotropy, Proposition 3.4, that has however no connection with SGD, and Proposition 3.6. Specifically, the expectation in the Proposition 3.6 is computed under the assumption of isotropic noise, and the value is only approximate if the assumption, stated clearly in the claim, is not satisfied. The Area Chair is right in that, while the claim is correct, if its assumptions are violated, the resulting approximation may be poor. We will clarify this in the discussion following the theorem, to avert possible confusion.\\n\\nThe computation of the escape time in the anisotropic case is known to be a hard problem. We deem the result in the isotropic case useful nevertheless because, even if the value of the expectation is different in the anisotropic case, the trends are the same. We should also mention that the extension of classic results in Langevin Dynamics to the case of SGD is the subject of active investigation, for instance https://arxiv.org/pdf/1907.03215.pdf.\\n\\nMore importantly, successive claims in our work do not build directly on Proposition 3.6, but rather on the fact that the minimum to which SGD converge is flat. This fact has been verified empirically by multiple independent studies, for different tasks, different architectures, and different variants of SGD. So, even if one were to dismiss Proposition 3.6 as not pertinent to SGD, all subsequent results hold if one accepts that it converges to low-curvature regions of the loss landscape. Proposition 3.6 shows that, albeit under strong assumptions, a stochastic gradient algorithm run for a finite time is more likely to settle on a flat minimum rather than a sharp one.\\n\\nRegarding the fact that SGD converges to a minimum with zero error, we think it is more appropriate to say that SGD tends to converge to areas of the loss landscape with very low loss and curvature, which it will not escape in a short time. However, the core idea of Proposition 3.6 -- and indeed the reason why we use it instead of a simpler result on the stationary distribution, which would hold only asymptotically -- is to suggest that SGD can easily escape very sharp minima (or, more generally, any area of the loss with high curvature) in its path before settling. Hence, when we stop the optimization after a finite time, a noisy gradient descent algorithm will be more biased toward converging to flatter areas of the landscape than a corresponding non-noisy gradient flow.\\n\\nThe only other claim where isotropy is mentioned is Proposition 3.4 in the choice of the prior, which is however not related to SGD. \\n\\nWe will add a section in the appendix to elaborate on this issue, add references to asymptotic analyses that would clarify potential confusion, and to discuss in detail the relation between SGD, Langevin Dynamics, and the role of the assumptions on the claims.\"}",
"{\"title\": \"A few concerns\", \"comment\": \"There seems to be only one expert reviewer on this paper, and so I've gone ahead and read the paper carefully. I have a few questions / concerns. I understand these are coming rather late, but I will make sure that I get to hear your responses.\\n\\n1. Isotropic noise assumption\\n\\nI have a concern about the isotropic assumption made of the minibatch noise, allowing the authors to link results about diffusions to the behavior of SGD. Indeed, the minibatch noise of SGD is in fact definitely not isotropic: you would expect SGD to actually settle into any minimum where it reaches zero error, and that is indeed what we see in practice in many vision problems. This problem alone seems to be rather problematic for many of the claims that build on this connection. I would argue that your results seem to be about Langevin dynamics, not SGD.\\n\\n2. Curvature assumption in Proof of Prop 3.4\\n\\nAfter the remark following the proof of Prop 3.4, there is a statement: \\\"Note that there is no assumption that the curvature of the loss be constant near convergence.\\\" However, inspecting the proof on page 14, I see the statement \\\"Assuming that a quadratic approximation holds in a sufficiently large neighborhood, ...\\\". Isn't that precisely a constant-curvature assumption? Can you elaborate on both statements and also discuss the relationship, if any?\\n\\n3. Informal \\\"summary\\\" of Prop 3.6\\n\\nAfter Prop 3.6, the authors seem to suggest that the escape time result implies something directly about the long-run behavior of the Markov chain: \\\"We can informally summarize the above statement as saying that SGD, rather than minimizing directly the loss function, minimizes a free energy\\\". Can you provide proof of any connection between the escape time and the long run behavior? Issue #1 also bears on this connection, since SGD does not have isotropic noise: this result is not about SGD.\\n\\n3. Information in the weights under uninformative prior\\n\\nIn Prop 3.4, the limit as lambda diverges is taken. This will generally take the KL divergence to infinity as well. This divergence is caused by the k/2 log lambda^2 term in Equation 7. The authors claim that this term does not depend on Q and thus can be ignored. However, Theorem 3.2 and subsequent claims about generalization, rely on the entire \\\"Information in the Weights\\\" quantity. One cannot simply discard a term that is causing the bound to race off to infinity without some careful argument. I don't see any such argument. Can you argue why later claims about generalization are meaningful despite this divergence of the bound?\\n\\n4. What is Q?\\n\\nHaving read the paper, I'm somewhat confused what the post-distribution Q is meant to be in this story. Is Q the distribution of the weights produced by SGD? \\n\\n5. Relationship with Xu-Raginsky.\\n\\nThe authors seem to want to distance themselves from the Xu-Raginsky (and subsequent Pensia et al results). I don't think this is possible. First, the authors are pointing at PAC-Bayes bounds (which are tail bounds) but then arguing through the expectation of the bound (in order to get mutual informations), and so one eventually arrives at bounds in expectation. Bounds in expectation are precisely what Xu and Raginsky provide. There is a tight connection between Xu and Raginsky and PAC Bayes bounds as well: they are both derived from Donsker Varadhan.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the comments and suggestions.\\n\\n>> I would have liked to see a bit more attachment of the theoretical formalisms to the empirical justification that follows the references.\\n\\nWe have now updated the appendix to better introduce each experiment, to hopefully better explain the connections with both theory we develop and to the previous literature.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the positive comments.\\n\\n>> In Definition 3.1 for the \\u2018Information in the Weights\\u2019, how does the complexity of the task vary with $\\\\beta$? Is the Pareto curve provided in the paper? \\n\\nThis is partially answered empirically in Fig. 2. (left), where we keep the task constant but we change the batch size of SGD, which amounts to reducing beta by Proposition 3.7. This results in the expected increase in the Fisher Information of the weights. In (Right) we keep beta constant, but we increase the complexity of the task by adding more classes. This also results in an increase in the Fisher Information.\"}",
"{\"title\": \"Response to Reviewer 2 (part 2)\", \"comment\": \">> A missing reference regarding this point: Hu et al. \\\"\\u03b2-BNN: A Rate-Distortion Perspective on Bayesian Neural Networks.\\\" 2018\\n\\nWe thank for the reference to the workshop paper, which we were not aware of and we will reference. We should notice that while Hu et al. (2018) derive their framework as an approximation of mutual information, which assumes a distribution over both the datasets and the weights, we derive our notion of information (or indeed \\\"amount of information'') for one particular given dataset and set of weights.\\n\\n>> 2. More discussions on Xu and Raginsky (2017) is expected, since it proposes to use I(w; D) as a generalization bound.\\n\\nWe updated the paper to discuss the work more at length. Specifically, Xu and Raginsky prove a bound on generalization using the \\\"information\\\" stability of the algorithm. We show that, in realistic settings for a DNN, if the optimization algorithm is \\\"stable\\\" (in the sense that the final point of the optimization does not change much for perturbations of the dataset), then, together with the minimization of the Fisher, this implies \\\"information\\\" stability (and hence generalization, by either the PAC-Bayes bound, which we use, or the bound proposed by Zu and Raginsky (2017) and related works). \\n\\nThe relaxed loss they propose is related to our loss. However, they propose the Gibbs algorithm to minimize the loss, which is not practical for a DNN. On the other hand, we show that the more practical SGD algorithm approximately minimizes a term of that form (the free energy in Proposition 3.6), thus connecting theory with common practice and emphasizing the role of the dynamics of the training process in order to get good generalization, which is not studied by Zu and Raginsky (2017).\\n\\nMoreover, the success of deep learning hinges on the fact that once trained on a (large) dataset, the representation can be used on a new dataset. This is not captured by the bound proposed by Zu and Raginsky (2017). By connecting the information in the weights with information in the activations, we get some guarantees to the invariances learned by the representation that are going to be transfered to the new dataset.\\n\\n>> It seems, in terms of generalization, minimizing I(w; D) is a sufficient condition while minimizing Fisher is a necessary condition. \\n\\nRegarding the relationship between I(w; D) and the Fisher, the exact relationship depends on the algorithm. It could be that the algorithm always picks the same point in weight space with a high Fisher, regardless of the task (notice that the Fisher depends only on the point and the input distribution, and not on the task labels). This minimizes I(w; D) since w = A(D) is constant, but maximizes the Fisher. (This example does not, of course, satisfy the hypotheses of our Prop. 3.7 as p(D|w) is uniform, rather than concentrated on a single dataset).\\n\\n>> 3. There are in fact 4 key aspects: sufficiency, minimality, invariance and generalization. It would be great to have a theorem to summarize the relationships between them. \\n\\nThat's a great suggestion, we will add the theorem as summary in the discussion in the camera-ready.\\n\\n>> 4. Could you elaborate on the footnote 3? \\n\\nThe core idea of Proposition 3.7 is to measure how perturbations of the dataset D affect the minimum, also in relation to the amount of noise in SGD, which in turn is proportional to the Fisher. 
If, for example, changing one single label of the dataset slightly shifts the convergence point by some amount which is negligible with respect to the noise, then the weights are not carrying much information about that sample. Proposition 3.7 formalizes this notion; however, it is easier to derive it while considering continuous perturbations of the dataset, rather than discrete ones. One could, for example, consider $\\\\mathcal{D}_\\\\theta = \\\\{(x_i, f_\\\\theta(x_i))\\\\}_{i=1}^N$, where the label (which we assume to be a soft label) is parametrized by a function $f_\\\\theta$. Perturbing $\\\\theta$ now changes the label in a continuous way. If $f_\\\\theta$ is an expressive-enough family of functions (e.g., a DNN itself, in which case this would be similar to a teacher-student setting), then any dataset on a fixed domain can be expressed in this way.\\n\\nAn alternative way to explain this would be to assume that we have a fixed pool of data points, and that $\\\\mathcal{D}$ is constructed by sampling those data points with some categorical probability distribution $\\\\theta$. Changing $\\\\theta$ will now sample different datasets. Using, for example, the Gumbel-max trick, the final dataset can be considered a differentiable stochastic function of $\\\\theta$.\"}",
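The Gumbel-max trick mentioned at the end of the response is easy to demonstrate: adding Gumbel(0,1) noise to log-probabilities and taking the argmax reproduces categorical sampling, and replacing the argmax with a temperature softmax (Gumbel-softmax) is what makes the sampling differentiable in theta. A small numeric check:

```python
# Demonstration of the Gumbel-max trick referenced above: argmax over
# log-probabilities plus i.i.d. Gumbel noise reproduces categorical sampling.
# Replacing the argmax with a temperature softmax (Gumbel-softmax) yields the
# differentiable relaxation alluded to in the response.
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.2, 0.5, 0.3])           # sampling distribution over a data pool

g = rng.gumbel(size=(100_000, 3))           # Gumbel(0, 1) noise
samples = np.argmax(np.log(theta) + g, axis=1)
print(np.bincount(samples) / len(samples))  # approx [0.2, 0.5, 0.3]
```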
"{\"title\": \"Response to Reviewer 2 (part 1)\", \"comment\": \"We thank the reviewer for the many thoughtful suggestions. We reply to each point in order:\\n\\nConcerning relations with Achille and Soatto (JMLR 2018), that paper uses Shannon's framework and effectively considers the weights as stochastic, thus not addressing the computability of information for deterministic maps, where it is often degenerate. One interpretation of our work is to reconcile that paper with the work criticizing the use of the Information Bottleneck for deterministic networks. This requires formally connecting the Shannon Mutual Information of the weights to the Fisher (Proposition 3.7, which we also verify empirically in the appendix, and which replaces the much looser bound using the curvature suggested by Achille and Soatto (2018)) and to introduce the notion of effective mutual information of the activations, which we also connect to the Fisher Information (Proposition 4.2). Second, our aim is to formally connect the dynamics with SGD (in terms of both stability and flatness of the solution found) with the information in a DNN. This is not done in any of the references cited.\\n\\nConcerning relations with Achille et al. (2019), as we say in the opening of Sect. 3, Sections 3.1 and 3.2 are derived from that preprint and included for completeness. The main results of our paper are in Sect. 3.3 and 4, whereas Achille at al. (2019) focus on defining a distance between learning tasks, which we do not address here. \\n\\nIn terms of impact, indeed, our aim was to obtain clarity around the notion of information, both in the the weights and activations of a DNN, that has caused some confusion in the literature and occasionally contradictory or (accidentally) misleading claims. We also introduce stronger connections between information and the optimization dynamics of a DNN. This, we believe, helps paint a more complete picture of the current landscape of information in Deep Learning. We did not set out to improve current deep learning frameworks, but we hope this work will help us and others at least understand how different fundamental quantities are related in DNN, when a model can be expected to \\\"work,\\\" hopefully quantify how \\\"well\\\" it works, and to relate this to the complexity of the learning task, which is not often formalized in deep learning.\", \"regarding_the_detailed_comments\": \">> we should talk about \\\"rate\\\" or \\\"amount of information\\\" or \\\"mutual information\\\" rather than \\\"information\\\" itself\\n\\nThis is a good point, and we have updated the definition to reflect this. We originally tried to avoid naming it \\\"mutual information\\\" or \\\"rate\\\" to make it clear that the notion is valid even if the dataset is not considered a random variable (like it would in rate-distortion theory), but we are glad to change it to the less ambiguous \\\"amount of information\\\".\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary of the paper:\\nThis is a theoretical paper that builds on top of Achille and Soatto (2018), Achille et al. (2019), McAllester (2013), and Berglund (2011),. The paper attempts to answer the relationship between the inductive bias of SGD, generalization of DNNs, and the invariance of learned representation from an information theoretical point of view. \\nThe paper mentioned many interesting links. In my opinion, the contributions are the following:\\n1. Invoking the theoretical result of Berglund (2011) to justify why Fisher information is relevant -- SGD tends to avoid local minima with high Fisher information. \\n2. Relating the Fisher information and the stability of SGD to I(w; D).\\n3. Introducing the definition of effective information in the activation, and show that which is closely related to the Fisher information.\", \"about_the_rating\": \"\", \"this_is_basically_a_good_paper_but_i_have_a_few_concerns\": \"1. A large fraction of this paper are taken from Achille and Soatto (2018), Achille et al. (2019). \\n2. In terms of impact, the paper is somehow incomplete -- it only demonstrates that the Fisher information is important, but the insights didn't lead to any substantial improvement over the current deep learning framework.\", \"detailed_comments\": \"1. In my opinion, defining \\\"information in the weights for the task D\\\" by KL(Q||P) is inaccurate. \\nThe weights themselves are information, which form a representation or a lossy compression of the data (which is also an information). \\nAccording to the rate-distortion theory, what we care about is the amount of information the representation attains rather than \\\"where\\\" the information are. Therefore, we should talk about \\\"rate\\\" or \\\"amount of information\\\" or \\\"mutual information\\\" rather than \\\"information\\\" itself.\", \"a_missing_reference_regarding_this_point\": \"Hu et al. \\\"\\u03b2-BNN: A Rate-Distortion Perspective on Bayesian Neural Networks.\\\" 2018,\\nwhich derives the information Lagrangian directly from rate-distortion theory. \\n2. More discussions on Xu and Raginsky (2017) is expected, since it proposes to use I(w; D) as a generalization bound. It seems, in terms of generalization, minimizing I(w; D) is a sufficient condition while minimizing Fisher is a necessary condition. \\n3. There are in fact 4 key aspects: sufficiency, minimality, invariance and generalization. It would be great to have a theorem to summarize the relationships between them. \\n4. Could you elaborate on the footnote 3?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper deals with where the information is in a deep network and how information is propagated when new data points are observed. The authors measure information in the weights of a DNN as the trade-off between network accuracy and weight complexity. They bring out the relationships between Shannon MI and Fisher Information and the connections to PAC-Bayes bound and invariance. The main result is that models of low information generalize better and are invariance-tolerant.\\n\\nThe paper is very well written and concepts are theoretically-well documented.\\n\\nIn Definition 3.1 for the \\u2018Information in the Weights\\u2019, how does the complexity of the task vary with \\\\beta? Is the Pareto curve provided in the paper?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a theoretical account of information encoded within deep neural networks subject to information theoretic measures. In contrast to other efforts that examine information encoded in weights, this work emphasizes the effective information in the activations. This characterization is further related to information in the weights, and a theoretical justification is made for what this means with respect to properties of generalization and invariance in the network.\\nThe notion of attaching the weights that represent the training set to the activations that accord with the test set in a theoretical framework is interesting. In practice, I would have liked to see a bit more attachment of the theoretical formalisms to the empirical justification that follows the references. This, however, is a matter of personal bias as I don't typically produce papers that are principally theoretical contributions in my own work. Overall, the content of the paper seems sound and the theoretical and empirical justifications seem well founded but I also can't claim to be an expert in this area.\"}"
]
} |
H1gHb1rFwr | Extreme Values are Accurate and Robust in Deep Networks | [
"Jianguo Li",
"Mingjie Sun",
"Changshui Zhang"
] | Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, so that CNNs are non-robust to adversarial perturbations over textures, while traditional robust visual features like SIFT (scale-invariant feature transforms) are designed to be robust across a substantial range of affine distortion, addition of noise, etc., mimicking the nature of human perception. This paper aims to leverage the good properties of SIFT to renovate CNN architectures towards better accuracy and robustness. We borrow the scale-space extreme value idea from SIFT, and propose EVPNet (extreme value preserving network), which contains three novel components to model the extreme values: (1) parametric differences of Gaussian (DoG) to extract extrema, (2) truncated ReLU to suppress non-stable extrema, and (3) a projected normalization layer (PNL) to mimic PCA-SIFT-like feature normalization. Experiments demonstrate that EVPNets can achieve similar or better accuracy than conventional CNNs, while achieving much better robustness to a set of adversarial attacks (FGSM, PGD, etc.) even without adversarial training. | [
"Biological inspired CNN architecture design",
"Adversarial Robustness Architecture"
] | Reject | https://openreview.net/pdf?id=H1gHb1rFwr | https://openreview.net/forum?id=H1gHb1rFwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-gdco-XJyV",
"SyeEkVTKjH",
"ryevdCnYoB",
"BJlcCanKjH",
"H1lWGT3YiH",
"rklYN27c9H",
"Sylz5pxycr",
"r1lcfqfsYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726057,
1573667803972,
1573666414795,
1573666258158,
1573666056642,
1572645937047,
1571913098314,
1571658257892
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1542/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1542/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1542/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1542/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1542/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1542/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1542/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This manuscript proposed biologically-inspired modifications to convolutional neural networks including differences of Gaussians convolutional filter, a truncated ReLU, and a modified projected normalization layer. The authors' results indicate that the modifications improve performance as well as improved robustness to adversarial attacks.\\n\\nThe reviewers and AC agree that the problem studied is timely and interesting, and closely related to a variety of recent work on robust model architectures. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and importance of the results. In reviews and discussion, the reviewers noted issues with clarity of the presentation and sufficient justification of the approach and results. In the opinion of the AC, the manuscript in its current state is borderline and could be improved with more convincing empirical justification.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision uploaded\", \"comment\": \"Thanks all the reviewers for helpful comments and suggestions.\\nThe updated version includes following revisions to accommodate reviewers' concern.\\n(1) Revise $\\\\epsilon$-ball definition in page-3 to make it consistent to the symbol used in experimental parts.\\n(2) Revise Eq-7 with expansion so that it looks more consistent with DoG definition. Add a note below to describe an important property. \\n(3) Add definition for the $\\\\ell_p$ norm in Figure-2 and corresponding body part. \\n(4) Add more results on ImageNet at Appendix-A, especially results by the plane structure MobileNet-v1 and our extensions. \\n(5) Add results for CIFAR-10 by Wide-ResNet. The accuracy is now on-par with state-of-the-art results on CIFAR-10. \\n(6) Add a comparison to \\\"Extreme value theory\\\" and the paper suggested by reviewer-#3. \\n(7) Some grammar error and typos fix.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank the reviewer for the helpful feedback and suggestions.\\n\\n1. ## Details on How pDOG replace DoG ##\\nIn DoG, the operation is shown in Eq-2,\\n $D(x,y,\\\\sigma) = G(x, y, \\\\sigma) \\\\otimes I_1 \\u2013 G(x,y, \\\\sigma) \\\\otimes I_0$.\\nWhere $\\\\sigma$ is pre-designed Gaussian kernel size, $\\\\otimes$ means convolution the kernel with input image, and $I_1 = G(x,y, \\\\sigma) \\\\otimes I_0$.\\n \\nIn the pDoG, we just mimic the filtering process in Eq-7 with\\n \\t$d_1 = DW(f_1, w) \\u2013 DW(f_0, w)$,\\nwhere DW is depth-wise convolution, $w$ is the parameters of DW, and $f_1 = DW(f_0, w)$.\\nIt is obviously that the pDoG replace the Gaussian blur in DoG with a learnt convolutional filter, and each channel is processed separately, so that the operation could be modeled with a depth-wise convolution.\\nWe want to stress that pDoG is inspired from DoG and mimic the processing, but it does not approximate Gaussian distribution at all. The successive filtering with same learnt kernel will produce a scale-space, and we want to find the stable pixels/keypoints in the scale-space. We hope that pDoG are able to learn better transformations than traditional DoG. \\n\\n2. ## 1x1 convolution and PCA ##\\nIt is commonly known that 1x1 convolution is in actually MLP, as stated in the earliest work for 1x1 convolution (network-in-network, NIN) [1], while MLP is known that could be used to learn PCA projection matrix from data [2]. Here by \\u201ca PCA with learnable projection matrix\\u201d, we are stressing that 1x1 convolution is similar to PCA as it can be viewed as a dimensionality projection process with learnable projection matrix. Eq-14 (revised version) further explores how PCA data covariance matrix \\u201cA\\u201d is formulated in PNL.\\n\\n 3. ## Notation ##\\n - \\u201c\\\\| \\\\|_p\\u201d: Here we mean \\\\ell_p norm.\\n - \\u201cd= w x h\\u201d: Here we mean \\u2018d\\u2019 is the product of \\u2018w\\u2019 and \\u2018h\\u2019, where \\u2018w\\u2019 and \\u2018h\\u2019 refers to the size/resolution of a feature map before the GAP layer. This means we reshape one 2D feature-map channel into a vector, and thus reshape the whole feature map from a 3D tensor into a 2D matrix with size $d \\\\times c$, where c is the number of channels.\\n - \\u201cmax(s_0, s_1)\\u201d: Here we mean it is maxout operation, which does per-element maximum. \\nWe have revised the description of those notations in the updated version.\\n\\n4. ## why $\\\\ell_2$ norm for row vectors, why not on column vectors ##\\nComputing $\\\\ell_2$ norm on row vectors of feature map, will output a vector of dimension $c$ , where $c$ is the number of feature map channels. This is consistent with global average pooling. If normalized on column vector, it will output a vector of dimension $d = w\\\\times h$, i.e., the resolution of feature map. This will make the network not able to produce fixed length feature vector for different resolution input images, and requires additional overhead to handle on this issue.\\n\\n5. ## manifold space of PNL ##\\nThe normalization after 1x1 convolution in PNL will push different data samples with feature representation $v$ distributed on a hyper-ball, while the GAP (global average pooling) is actually L_1 normalization, which will push data samples with GAP feature presentation distributed on a hyper-cubic. Here we just mean that these two have different geometric structure as manifolds. 
We will revise the description accordingly.\\n\\n6. ## Comparison with [3] ##\\nThanks for pointing out that paper. First, we would like to emphasize that scale-space extreme values and extreme value theory are two different concepts/theory and developed indecently. Extreme value theory tries to model the extreme, rare events of the \\u201cdata distributions\\u201d in certain kinds of functional space. While scale-space extreme values are biological inspired for the object boundary, keypoints, etc for \\u201cone input image\\u201d. Second, more specifically, [2] leverages more on data distribution extremes to propose an attack-independent metric CLEVER to measure robustness of neural network classifier. While, we focus on extract extremes in feature maps. Hence, there are no explicitly connections between these two concepts, and between our work and [3].\\n\\n[1] Network In Network. Lin et al. ArXiv 1312.4400.\\n[2] Neural networks for pattern recognition, C. Bishop, Oxford Press, 1995.\\n[3] Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach. Weng et al. ICLR 2018.\"}",
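To make the pDoG filtering described in the rebuttal above concrete, here is a minimal PyTorch sketch: one shared learnt depth-wise filter applied successively to build a scale-space, with differences taken between consecutive scales. The class name, number of scales, and kernel size are our illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class PDoG(nn.Module):
    """Sketch of parametric DoG: replace the fixed Gaussian blur of
    classical DoG with one learnt depth-wise filter, applied repeatedly
    to build a scale-space, then return differences between scales."""
    def __init__(self, channels, num_scales=3, kernel_size=3):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2,
                            groups=channels, bias=False)
        self.num_scales = num_scales

    def forward(self, f0):
        scales, diffs = [f0], []
        for _ in range(self.num_scales):
            scales.append(self.dw(scales[-1]))     # f_{i+1} = DW(f_i, w)
            diffs.append(scales[-1] - scales[-2])  # d_i = f_{i+1} - f_i
        return diffs
```

Note that the subtraction here is the "minus component" the rebuttal emphasizes: because the same kernel generates both operands, the minus sign cannot be absorbed into the learnt weights.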
"{\"title\": \"Response to Reviewer #`1\", \"comment\": \"Thank you for the detailed feedback and suggestions and we are happy to address your concerns.\\n\\n1. ## Clarity ##\\nThanks for pointing out about the writing issue. We will fix the grammatical errors in our revision and improve the presentation of Figure 2.\\n\\n2. ## Novelty ##\\nWe would like to point out that our work is significant different to most existing CNN architecture design works in three aspects.\\nFirst, to the best of our knowledge, we are the first to bring biologic inspired scale-space theory for traditional robust visual features like SIFT into CNN architecture design. Especially, the successive minus operations make the minus sign (-) not able to be absorbed into $\\\\mathbf{w}$ in Eq-7 for replacing minus into addition operation. To the best of our knowledge, this is the first time, minus component has been introduced into deep neural networks, which brings totally new element for architecture design/search. \\nSecond, we are the first to consider full network architecture design from adversarial robustness perspective, rather than from clean accuracy or efficiency perspective. It is also quite different to majority of works on adversarial robustness, such as adversarial training [1, 2], input/feature map de-noising [3, 4]. Our designed network shows not only better clean accuracy but also much better adversarial robustness on a bunch of dataset (CIFAR-10, SVHN, ImageNet). \\nFurthermore, the feature map by EVPConv illustrated in Figure-1 is also more meaningful and explainable.\\nWe believe our work is different to those works the reviewer referred to as \\u201cultimately make little or no impact on the field.\\u201d We also believe that biologic-inspired scale-space theory is quite fundamental and promising direction for CNN architecture design.\\n\\n3. ## Impact by state-of-the-art networks ##\\nWe use current ResNet architecture, as it was widely used in existing studies [2,4,5,6]. Per the suggestion, we also conduct experiments to test the performance on more deeper or wider state-of-the-art networks like Wide-ResNet [7]. The results are shown in Appendix-B. We can clearly see that even clean accuracy is much higher, EVPNet still brings significant robustness improvement over the baseline, while still keep similar clean accuracy.\\n\\n4. ## Results on more datasets like ImageNet ##\\nIn fact, in the original submission, we already include results with ResNet on ImageNet in Appendix-A. We further include more results for the Mobilenet-v1 (without residual connection) structure. All demonstrates effectiveness of the proposed method. \\n\\n[1] Explaining and Harnessing Adversarial Examples. Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. ICLR 2015.\\n[2] Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. ICLR 2018.\\n[3] Defense against adversarial attacks using high-level representation guided denoiser. Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, Jun Zhu. CVPR 2018.\\n[4] Feature Denoising for Improving Adversarial Robustness. Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He. CVPR 2019.\\n[5] PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples. Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman. ICLR 2018.\\n[6] Theoretically Principled Trade-off between Robustness and Accuracy. 
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan. ICML 2019.\\n[7] Wide Residual Networks. Sergey Zagoruyko, Nikos Komodakis. BMVC, 2016.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank the reviewer for the helpful feedback and suggestions.\\n\\n1. ## pDoG: no evidence that feature maps are Gaussian ##\\nDoG is not targeting for modeling the distribution of whole feature maps, but for local pixels (like 3x3) in the scale-space, and ensure to find stable pixels/keypoints across a series Gaussian blur of input image. So it does not require feature map to be Gaussian distribution. Our pDoG further extends DoG to make local filter without Gaussian assumption (see Eq-2 vs Eq-7 in the revised version). For each feature map channel, we view it as one input image, and blur that image successively with the same learnt filter, so that the series of blurred images consist of a scale-space, and we could find stable pixels/keypoints across all scales. Hence, we only mimic DoG for the filtering process rather than the filter itself. \\n\\n2. ## truncated ReLU applied to other datasets ##\\nWe have tested our framework on CIFAR-10, SVHN, and large-scale ImageNet dataset. The results demonstrate that the tReLU module is generally effective and applicable in our EVPNet framework.\\n\\n3. ## PNL vs BN ##\\nPNL is a 1x1 projection layer followed by L2 normalization, while BN is \\u201cdepth-wise 1x1 layer\\u201d (aka the scaling layer) followed by a covariance shifting for each pixel. The major difference is that PNL will project each channel to a scalar after L2 normalization and the whole tensor to a vector, while BN will not change the shape of input tensor.\\n\\n4. ## EVPNet for other network ##\\nFor time limitation, we are not able to train VGGNet on ImageNet. However, we extend the novel blocks into the MobileNet (v1 without residual connection, very similar to VGG). Experimental results are added to Appendix-A. It shows that EVP-MobileNet still demonstrate much better accuracy and robustness comparing to original MobileNet and SE-MobileNet. We also provide experiments on Wide-ResNet in Appendix-B, which does not have bottleneck structures as ResNet in our experiments. \\n\\n5. ## why epsilon always being 8 ##\\nOn CIFAR-10, \\\\epsilon = 8 is a common choice in many adversarial attack studies [1,2,3,4,5]. \\nWe also clarify the definition of epsilon-ball just after Eq-3.\\n\\n6. ## parametric or non-parametric ##\\nHere we talk parametric which means the filter parameters are learnt from data in pDoG, while in traditional DoG, the filter parameter is designed and fixed to be Gaussian function. Please also see Eq-2 and Eq-7 for the difference. The notion is somewhat different from the concept parametric model and non-parametric model in statistics.\\n\\n[1] Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. ICLR 2018.\\n[2] Defense against adversarial attacks using high-level representation guided denoiser. Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, Jun Zhu. CVPR 2018.\\n[3] Feature Denoising for Improving Adversarial Robustness. Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He. CVPR 2019.\\n[4] PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples. Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman. ICLR 2018.\\n[5] Theoretically Principled Trade-off between Robustness and Accuracy. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan. ICML 2019.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a network model named EVPNet, inspired by the idea scale-space extreme value from SIFT, to improve network robustness to adversarial pertubations over textures. To achieve better robustness, EVPNet separates outliers (non-robust) from robust examples by extenting DoG to parametric DoG, utilising truncated ReLU, and then applying a projected normalisation layer to mimic PCA-SIFT like feature normalisation, which are the three novelties that the authors claim in this paper. In the experiments, FGSM and PGD are used to provide adversarial attacks, and experiments conducted on CIFAR-10 and SVHN reveal that EVPNet enhances network robustness.\\nOverall, this paper contributes to network robustness from an architecture perspective; in the contrast, most prior works focus more on robust feature extraction and loss function design. The ablation study of EVPNet demonstrates the effectiveness of its each novel component. A further investigation presents that EVPNet reduces the error ampplification effects. The example that the authors show in this paper demonstrates the improvement of EVPNet to image textures.\", \"the_reviewer_has_some_main_concerns_regarding_the_claimed_novelty\": \"1. pDoG computes the difference between outputs of two depth-convolution layers, but there is no evidence that the distribution of feature maps is gaussian or gaussian-like. There is no clarification for this point.\\n\\n2. Truncated ReLU is a modified ReLU. Does the learnable truncated parameter \\\\theta limit its applicability to different datasets?\\n\\n3. The Projected Normalisation Layer (PNL) seems a reasonable implementation, but essentially it is not very different from batch normalisation. The authors state only its difference to global average pooling but not to batch normalisation which should be a better comparison.\\n\\nFor the experiments, the following should be addressed:\\n\\n4. Experiments were conducted only for the SE-ResNet architecture via replacing its CNN kernel by the proposed EVPNet. Although SE-ResNet shows good performance on some common data, but the squeeze-excitation block might bring in non-robustness. Hence, the reviewer thinks that it is risky to claim: the replacement of EVPNet in CNN layers is robust to adverserial attacks based only on this implementation. Try EVPNet for a more basic network architecture (VGG) would be suggested.\\n\\n5. The \\\\epsilon, which represents the adversarial attack tolerance, is always 8. There is no explaination in this paper, why not other values.\", \"minor_comments\": \"6. The authors did not clarify why the first novel component is called \\\"parametric\\\" DoG. There is a more \\\"non-parametric\\\" block.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a SIFT-feature inspired modification to the standard convolutional neural network (CNN). Specifically the authors propose three innovations: (1) a differences of Gaussians (DoG) convolutional filter; (2) a symmetric ReLU activation function (referred to as a truncated ReLU; and (3) a projected normalization layer. The paper makes the claim that the proposed CNN variant (referred to as the EVPNet) demonstrates superior performance as well as improved robustness to adversarial attacks.\", \"clarity\": \"Overall, the paper is not particularly well written. There are multiple missing articles and other grammatical errors that make it a bit arduous to read, though I do not believe they have obstructed my ability to understand the contributions. The section describing the projected normalization layer (second half of page 5) is a bit confusing. Figure 2(c) is not helpful in shedding light on the details, though I think a more detailed figure could be quite helpful. Beyond these issues, the paper is relatively clear in the presentation of the material.\", \"novelty\": \"Over the last few years there have been many, many proposals for how to vary the basic CNN architecture to improve performance. Some of these have lead to genuine performance gains and have become part of the standard CNN specification. ReLUs, ResNets and Batch Normalization are particularly prominent examples of contributions that have been shown to lead to improvements in performance. Yet the vast majority of these sorts of proposals ultimately make little or no impact on the field. In light of this, I would rate the novelty of the basic goal of this paper as relatively low, though the specific proposal is novel to me and seems reasonable.\", \"impact\": \"The impact potential for this paper lies with the performance offered by the proposed innovations. With respect to overall performance improvement the proposed method has not been shown to perform quite at a state-of-the-art level, as given by these resource:\", \"svhn\": \"https://paperswithcode.com/sota/image-classification-on-svhn\", \"cifar_10\": \"https://paperswithcode.com/sota/image-classification-on-cifar-10\\n\\nThe authors compare the performance of their proposed EVPNet against a fair baseline - a squeeze-and-excite ResNet model. These sorts of controlled experiments are useful, but the actual reported performance for both models are somewhat off of the state-of-the-art and it's not clear that the relatively small benefit the authors show over their baselines are maintained for higher performing architectural configurations. Can this architecture be competitive with the state-of-the-art? The current paper in it's current \\n\\nMost of the results relate to the claim that the proposed model is robust to adversarial examples. Unfortunately, this is not a particular area of expertise for me, so it's difficult for me to provide a confident assessment of the contribution here, though I will say two things: (i) the method seems to provide a significant increase in adversarial robustness across the\\nbaseline architectures investigated. 
(ii) the authors demonstrate that the benefit provided by the proposed architecture seems to persist even when training for adversarial defence is introduced. \\n\\nI would have liked to see more datasets explored in Experiments section. I especially would have liked to see results on ImageNet. \\n\\nMy current rating is weak reject based on the weakness of the writing and the lack of strong empirical evidence in support of the effectiveness of the proposed contributions.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"In this paper, a new network architecture called EVPNet was proposed to improve robustness of CNNs to adversarial perturbations. To this end, EVPNet employs three methods to leverage scale invariant properties of SIFT features in CNNs.\", \"The proposed network and the methods are interesting, and provide promising results in the experiments. However, there are several issues with the paper:\", \"The authors claim that Gaussian kernels are replaced by convolution kernels to mimic DoGs. However, it is not clear (1) how this replacement, or employment of convolution kernels can mimic DoGs, or (2) more precisely, how the corresponding learned convolution kernels approximate Gaussian kernels. In order to verify and justify this claim, please provide detailed theoretical and experimental analyses.\", \"It is also claimed that \\u201ca 1 \\u00d7 1 conv-layer, can be viewed as a PCA with learnable projection matrix\\u201d. However, this statements is not clear. How do you assure that a 1x1 conv layer employs a PCA operation or the corresponding projection?\", \"What does \\\\| \\\\|_p denote? Does it denote \\\\ell_p norm?\", \"What does x denote in d = w x h? Previously, it was used to denote matrix size.\", \"Why do you compute \\\\ell_2 norm for row vectors instead of column vectors? How do the results change when they are calculated for column vectors?\", \"According to the notation, s_0 and s_1 are vectors. Then, what does max denote in (14)? That is, how do you compute max(s_0, s_1), more precisely?\", \"In the statement \\u201cPNL produces a hyper-ball in the manifold space\\u201d, what do you mean by the \\u201cmanifold space\\u201d? What are the structures (e.g. geometry, metrics etc.) and members of this space?\", \"Please conceptually and theoretically compare the proposed method with state-of-the-art methods following similar motivation, such as the following:\", \"Weng et al., Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach, ICLR 2018.\"]}"
]
} |
BJlrZyrKDB | Statistically Consistent Saliency Estimation | [
"Emre Barut",
"Shunyan Luo"
] | The use of deep learning for a wide range of data problems has increased the need for understanding and diagnosing these models, and deep learning interpretation techniques have become an essential tool for data analysts. Although numerous model interpretation methods have been proposed in recent years, most of these procedures are based on heuristics with little or no theoretical guarantees. In this work, we propose a statistical framework for saliency estimation for black box computer vision models. We build a model-agnostic estimation procedure that is statistically consistent and passes the saliency checks of Adebayo et al. (2018). Our method requires solving a linear program, whose solution can be efficiently computed in polynomial time. Through our theoretical analysis, we establish an upper bound on the number of model evaluations needed to recover the region of importance with high probability, and build a new perturbation scheme for estimation of local gradients that is shown to be more efficient than the commonly used random perturbation schemes. Validity of the new method is demonstrated through sensitivity analysis. | [
"Deep Learning Interpretation",
"Saliency Estimation",
"High Dimensional Statistics"
] | Reject | https://openreview.net/pdf?id=BJlrZyrKDB | https://openreview.net/forum?id=BJlrZyrKDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"H1j5_z_qfo",
"B1xxQlE2or",
"B1e-3t6VjH",
"r1gYNWXVsr",
"ByxPoy74iH",
"Bylrty7Njr",
"HyeyFAMViS",
"r1g_1TGNjB",
"rkgorCQj5B",
"Bkxs5ykUcB",
"S1xcj6rAKH",
"SyekXJ1AYH",
"B1eTWxW2FB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798726028,
1573826583594,
1573341608710,
1573298481417,
1573298078588,
1573298044557,
1573297783500,
1573297376148,
1572712003429,
1572364178822,
1571868066134,
1571839766972,
1571717125136
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1541/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1541/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1541/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1541/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1541/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This submission proposes a statistically consistent saliency estimation method for visual model explainability.\", \"strengths\": \"-The method is novel, interesting, and passes some recently proposed sanity checks for these methods.\", \"weaknesses\": \"-The evaluation was flawed in several aspects.\\n-The readability needed improvement.\", \"after_the_author_feedback_period_remaining_issues_were\": \"-A discussion of two points is missing: (i) why are these models so sensitive to the resolution of the saliency map? How does the performance of LEG change with the resolution (e.g. does it degrade for higher resolution?)? (ii) Figure 6 suggests that SHAP performs best at identifying \\\"pixels that are crucial for the predictions\\\". However, the authors use Figure 7 to argue that LEG is better at identifying salient \\\"pixels that are more likely to be relevant for the prediction\\\". These two observations are contradictory and should be resolved.\\n-The evaluation is still missing some key details for interpreting the results. For example, how representative are the 3 images chosen in Figure 7? Also, in section 5.1 the authors don't describe how many images are included in their sanity check analysis or how those images were chosen.\\n-The new discussion section is not actually a discussion section but a conclusion/summary section.\\n\\nBecause of these issues, AC believes that the work is theoretically interesting but has not been sufficiently validated experimentally and does not give the reader sufficient insight into how it works and how it compares to other methods. Note also that the submission is also now more than 9 pages long, which requires that it be held to a higher standard of acceptance.\\n\\nReviewers largely agreed with the stated shortcomings but were divided on their significance.\\nAC shares the recommendation to reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Overview of Revisions\", \"comment\": \"We would like to again thank the five anonymous reviewers for their valuable input. We made numerous additions and changes to the main body and the numerical results of the paper. We believe that the revisions suggested by the reviewers have greatly improved the presentation of the paper, and the new numerical results better demonstrate the validity to our approach.\", \"the_main_changes_are_as_follows\": [\"The sensitivity analysis in Section 5.2 has been completely changed. We now present average of the results from 500 images; previous results used 3 samples.\", \"Additional discussions have been added to improve the readability of the paper. Mainly:\", \"A new figure (Figure 1) has been added to Section 2 to motivate the formulation for LEG.\", \"We now state empirical and theoretical run time complexities of the procedure.\", \"We describe why the proposed linear program is an ideal choice for estimating LEG.\", \"A sketch of the proof technique for Theorem 1 has been added to the main text.\", \"The paper now concludes with a discussion section that summarizes our contributions.\", \"Due to the additions, the paper is now a little over 9 pages. We hope that this will not inconvenience the reviewers.\"]}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the comment!\\n\\nFollowing your suggestion, we downsized the saliency maps from the other procedures and repeated the sensitivity analysis. The new results can be found in the updated paper. We found that this step significantly improves the performance of our competitors. We made the following addition to the text:\\n\\n\\\"In order to make the comparison between the methods more fair, we downsize the saliency maps resulting from GradCAM, LIME and SHAP to a 28 by 28 grid. Interestingly, we find that this step improves the performance of these estimators.\\\"\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for the encouraging comments and the suggestions! Please see our answers below.\\n\\n1a. Following the suggestion, we changed the paragraphs around Definition 2. We removed the reference to 2D Fused Lasso; our formulation is not the same, and the comment does not help with the presentation. The paragraph following Definition 2 now lists the intuition behind the setup. It now reads:\\n\\n\\\"Our approach is based on the \\\"high confidence set\\\" approach which has been successful in numerous applications in high dimensional statistics (Candes et al, 2007; Cai et al, 2011; Fan, 2013). The set of $g$ that satisfy the constraint in the formulation is our high confidence set; if $L$ is chosen properly, this set contains the true LEG coefficient, $\\\\gamma(f,x_0,F)$, with high probability. This setup ensures that the distance between $\\\\gamma$ and $\\\\tilde{\\\\gamma}$ is small. When combined with the TV penalty in the objective function, the procedure seeks to find a solution that both belongs to the confidence set and has sparse differences on the grid. Thus, the estimator is extremely effective at recovering $\\\\gamma$ that have small total variation.\\\"\\n\\n1b. We added a paragraph following the theorem to cite the relevant work and to summarize the proof technique. We now state:\\n\\n\\\"The proof is built on top of the \\\"high confidence set\\\" approach of Fan (2013). In the proof, we first establish that, for an appropriately chosen value of $L$, $\\\\gamma^*=\\\\gamma(f,x_0,F)$ satisfies the constraint in our formulation with high probability. Then, we make use of TV sparsity of $\\\\tilde{\\\\gamma}$ and $\\\\gamma^*$ to argue that the two quantities cannot be too far away from each other, since both are in the constraint set. The full proof is provided in the Appendix.\\\"\\n\\n2. We added the time complexity for primal-dual interior point method solvers in our discussion following Definition 2. We note that this complexity is meant to be an upper bound rather than the expected run time. Furthermore, it is possible to obtain much faster rates by utilizing the sparseness of the constraint matrix (Yen et al, 2015) or by using recently proposed stochastic central path methods (Cohen et al, 2018). We leave those as future directions of research. The sample complexity depends on the sparsity of the LEG coefficient, and can be derived from Theorem 1. Ignoring the log terms, the sample complexity is given by $n=O((p_1p_2)^{1/2} s)$. We now state this quantity in the remarks after Theorem 1.\\n\\n3a. We would like to ensure that the reviewer is interested in seeing more results like the sensitivity analysis of Section 5.2; with a larger data pool than three randomly selected images, similar to the study in Chen et al (2019). We have just started a new run where we include C-Shapley as a competitor, and will include these results before the end of the rebuttal period.\\n\\n[Edit on Nov 15: We revised our sensitivity analysis section and now present a new study with 500 images. LEG appears to be at least as good as GradCAM. Please see the new Section 5.2 for the details.]\\n\\n3b. According to our understanding, the connectedness of C-Shapley is mainly due to the fact that C-Shapley considers connected pixels while computing the Shapley values, and the connectedness is not explicitly imposed by the procedure.\\n\\n3c. 
Regarding the sampling procedures; C-Shapley goes through all pixels, and computes an approximate Shapley score. For a specific pixel, the procedure returns its marginal contribution, where the contribution is defined as the change of the score of the prediction when that pixel and (a certain combination of) its neighboring pixels are removed (i.e. painted black). The method provides a local estimate of the effect of the pixel. Our approach to the problem is strictly different, as we seek to find a local linear approximation to the function first, and then evaluate the contribution of the pixels based on their contribution to this local linear approximation. Instead of completely removing the pixels, we instead vary their intensity, and try to estimate a smoothed version of the gradient by using such perturbations. In that sense, our method is more closely related to saliency approaches that are based on computing the gradient of the predictor with respect to the input.\", \"references\": [\"Yen, Ian En-Hsu, et al. \\\"Sparse linear programming via primal and dual augmented coordinate descent.\\\" Advances in Neural Information Processing Systems. 2015.\", \"Cohen, Michael B., Yin Tat Lee, and Zhao Song. \\\"Solving linear programs in the current matrix multiplication time.\\\" Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing. ACM, 2019.\", \"Chen, Jianbo, Le Song, Martin J. Wainwright and Michael I. Jordan. \\\"L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data\\\". International Conference on Learning Representations. 2019.\"]}",
"{\"title\": \"Response to Reviewer 5 - Questions\", \"comment\": \"We thank the reviewer for the detailed comments and the constructive criticism! Following the suggestions, we made numerous changes to improve the readability and the flow of the paper. In our first reply, we respond to the questions. We provide our comments to the suggestions in our second reply.\\n\\n> \\\"What does it mean to take expectation wrt F+x_0. I was particularly confused by the last point, because F is a continuous distribution, while presumably x_0 is the point of interest. The paper notes in several places that it can sample from F+x_0, is this equivalent to sampling from F and adding point x_0?\\\"\\n\\n- Exactly! We note this in our notation section. We are happy to reiterate the description (or introduce the notation at a better point in text) if the reviewer finds that would improve the flow.\\n\\n> \\\"Definition 1: I had a difficult time understanding this definition. What is g here? I assume\\nit is the gradient based on the reference to the first order Taylor expansion.\\\"\\n\\n- Our formulation is based on a linear approximation of the function $f(\\\\cdot)$ around the point of interest $x_0$. For this approximation, the most sensible approach would be to use the gradient of $f(\\\\cdot)$ at $x_0$; however this results in solutions that are too noisy and not meaningful for interpretation. To avoid such issues, we instead seek to find a new coefficient that can provide a reliable linear approximation around the neighborhood of $x_0$. As the reviewer noted, this definition is motivated by a first order Taylor expansion. In order to improve our presentation, we added a new figure (Figure 1) that demonstrates LEG visually on a toy example. If the reviewer does not find this to be appropriate, we can provide more motivation by citing other relevant work in the saliency literature.\\n\\n> \\\"In addition, why is the estimand squared?\\\"\\n\\n- We use squared loss as it results in an analytical solution. We also find it to be a natural choice. A similar square based loss is also used in the LIME paper.\\n\\n> \\\"What is LEG0 in figure 5?\\\"\\n\\n- LEG0 is the estimate with a smaller choice of $L$, which results in less sparse solutions. Please see the 7th line of the third paragraph of Section 5.2, where we state \\\"For LEG, we provide two solutions, a sparse solution which corresponds to a larger choice of the penalty parameter $L$ and a noisy solution which is obtained with a smaller choice of $L$, denoted by LEG and LEG0, respectively.\\\"\\n\\n> \\\"Is $\\\\kappa$ in your theorem 1, the condition number of the covariance matrix of the perturbation?\\\"\\n\\n- $\\\\kappa$ is the constant defined in Assumption 1. It is more related to the minimum eigenvalue of $\\\\Sigma$ than its condition number.\"}",
"{\"title\": \"Response to Reviewer 5 - Comments\", \"comment\": \"> \\\"The LEG method is not sufficiently motivated. Here, I am specifically referring to the functional form of the estimated itself in definition 1. See the question section for some of the issues I raised there.\\\"\\n\\n- We hope that our added figure (Figure 1) and the captions provide a better picture for our motivation. We are happy to include a more detailed explanation if it is needed.\\n\\n> \\\"From figure 4, we see that the method passes the proposed sanity checks which seem like a\\nkey motivation for this work, however, the authors don't give an explanation for why this is the\\ncase.\\\"\\n\\n- We added the following discussion to the subsection. We now state:\\n\\n\\\"Our procedure treats the classifier as a black-box and the explanations offered by LEG-TV are based solely on the predictions made by the neural network. During the sanity check, when the weights of the neural network are randomly perturbed, the predictions change significantly and no longer depend on the input. Thus, we expect the local linear approximations of the underlying function to be flat, which would result in saliency scores of zero for all of the pixels. Finally, small artifacts that might arise in this process, such as positive or negative saliency scores with no spatial structure, should be smoothed over due to the TV penalty, further robustifying our procedure.\\\"\\n\\n> \\\"The paper notes that LEG can be estimated using an LP; it would have been great for the authors\\nto completely spell this out in the appendix or somewhere in the text. What is the exact form of the\\nLP? What are the constraints?\\\"\\n\\n- We now provide the exact form of the LP in the appendix, under the alternative formulation.\\n\\n> \\\"As the authors know, the two evaluations presented in the paper: sanity checks, and the zeroing out procedure (in figure 5) don't actually tell us which method is a good method, just rule out a method. I would encourage the authors to design a toy task where the ground truth attributions are know, then train a model to be 100 percent or so accurate on this task. You can then obtain LEG-TV estimates from this model and compare to the ground-truth.\\\"\\n\\n- We also would like to have such a setup for showing our method's efficacy. However, there are major limitations. Most importantly, all of the methods have different estimands. We do not know how to come up with an example where all saliency methods would seek to estimate the same ground-truth. Furthermore, the LEG estimand changes with respect to $\\\\Sigma$, and using different distributions result in different ground-truths; that is, it is hard to compare LEG even to itself! We would be very happy to try out new simulations if the reviewer could suggest possible directions to avoid the multiple estimand issue. Additionally, we argue that the sensitivity analysis is a good indicator of which method performs better. If a method is successful at identifying regions of high saliency, then the probabilities should significantly drop when those regions are masked. Thus, rate of the probability change would be a reliable indicator for understanding which methods overperform.\\n\\n> \\\"I found the proof of lemma 1 confusing, the authors say it follows trivially, but I don't see it. For example, there should be a factor of 2 somewhere after taking the derivative wrt to , but I don't see it. 
It is fine for the authors to spell out the derivation here if possible.\\\"\\n\\n- The proof now includes the derivation. Thanks to the comment, we also realized that an implicit assumption (that F has to be centered) was not listed, and we corrected the text.\\n\\n> \\\"The paper ends quite abruptly with no conclusion or discussion. It would be great to include a wrap up\\nsection that puts the contributions into context.\\\"\\n\\n- We now include a short discussion that summarizes our contributions.\\n\\n> \\\"I get the sense that this method should be computationally intensive, though the paper says otherwise. It is fine for a method to be computationally intensive, but can the authors speak to this issue?\\\"\\n\\n- The method is computationally intensive, but its computational complexity is much lower compared to its alternatives which often rely on non-convex formulations. We now include the time complexity for solving a linear program with an interior point method in our discussion following Definition 2, and also provide the runtime using commercial software (which is roughly 3 seconds on a standard PC).\"}",
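The model-parameter sanity check that this response discusses can be scripted in a few lines; this version randomizes all weights at once (Adebayo et al. (2018) also use cascading, layer-by-layer randomization), and the scale of the random draws is an arbitrary choice of ours.

```python
import torch

@torch.no_grad()
def randomize_weights(model, std=0.05):
    """Replace trained parameters with random draws, then re-run the
    saliency method: a sound method's map should collapse toward
    flat/noise once predictions no longer depend on the input."""
    for p in model.parameters():
        p.copy_(torch.randn_like(p) * std)
    return model
```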
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the constructive comments and the questions! Please see our answers below.\\n\\n> Can the proposed approach be integrated with different types of Saliency map methods?\\n\\n- Certainly! One such integration is to use the proposed perturbation scheme in other perturbation based saliency methods, such as SmoothGrad, VarGrad or SmoothCASO (from the paper that you cited). An another possible avenue is to combine the constrained penalization approach with other saliency methods that utilize some sort of loss function. For instance, one can build a saliency estimation procedure which seeks to find a solution where the infinity-norm of the gradient of the loss is bounded. Our proof technique can be used to obtain theoretical consistency for such procedures.\\n\\n> Can you explain the differences of the proposed approach and the group feature formulation of https://arxiv.org/abs/1902.00407 ?\\n\\n- The group feature formulation approach of Singla et al (2019) seeks find pixels (or more precisely, perturbations) whose alteration will cause misclassification, and hence are more likely to be salient. There are two major differences between our work and theirs. Firstly, their method requires knowledge of the underlying neural network, as it uses the gradient and the hessian of the loss function at the input. Our procedure treats the network as a black-box, and can be utilized without any information about the network architecture or weights. Secondly, our approach for finding the salient points are different: our method seeks to approximate the predictor at a neighborhood around a specific input, and uses that information to identify the salient pixels; where as Singla et al provide local interpretations by finding ideal perturbations for a specific input. They also offer smoothed versions of their procedures, which help them obtain more interpretable solutions, but they don't have theoretical consistency results that would help them establish bounds on sample complexity. Due to its relevance, we also added this work, along with the work of Fong and Vedaldi (2017), to the literature review in our introduction.\\n\\n> Is there a quantitative way to assess the performance of the proposed approach?\\n\\n- Unfortunately, it is hard to find a single metric that can properly quantify how good an interpretation is. Most papers in the literature provide visual comparisons, and those are impossible to judge objectively. Furthermore, as different saliency methods estimate completely different quantities, there are no baseline problems with a specific ground-truth on which methods can be evaluated and compared against each other. We have seen two techniques in the literature that provide fully quantitative comparisons: (i) Sensitivity analyses based on perturbations; and (ii) Sanity checks. We include both of these in our numerical experiments.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the constructive comments and the suggestions! Please see our comments below.\\n\\n> \\\"However, I would like to remark that solving (4) might be empirically difficult (depending on the size of the problem) even though it is convex and can be solved in polynomial time theoretically. I wonder if the authors could clarify the setup of their experiments (instead of writing \\\"The problem in equation 4 can be solved by any linear programming software, for which many open-source implementations exist\\\"), and if the author could remark on the empirical running time.\\\"\\n\\n- In our numerical experiments we used the MOSEK solver. We edited the paper to clarify this point and included the empirical running time, which is less than 3 seconds on an 8 core computer. We also included the time complexity for interior point methods: one can obtain an $\\\\epsilon$-accurate solution in $O((p_1p_2)^{3.5}\\\\log(1/\\\\epsilon))$ time. We note that it is possible to obtain much faster rates by utilizing the sparseness of the constraint matrix (Yen et al, 2015) or by using recently proposed stochastic central path methods (Cohen et al, 2018). We leave those as future directions of research.\\n\\n> \\\"Also, I am not sure if \\\"Note that if L = 0, then the TV-penalization has no effect and the solution of the above procedure reduces to the empirical estimate,\\\" as the objective function is in L1 norm.\\\"\\n\\n- This is correct. However, when $L=0$, the constraint reduces to an equality. As this equality is under determined, TV-penalization still plays an effect. The correct statement should be \\\"the coefficient is equal to the empirical estimate up to a location shift\\\". We found that this addition would make the paper harder to read, and we removed that comment. The proof for the above statement can be found in the appendix as an additional lemma (Lemma 3).\", \"references\": [\"Yen, Ian En-Hsu, et al. \\\"Sparse linear programming via primal and dual augmented coordinate descent.\\\" Advances in Neural Information Processing Systems. 2015.\", \"Cohen, Michael B., Yin Tat Lee, and Zhao Song. \\\"Solving linear programs in the current matrix multiplication time.\\\" Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing. ACM, 2019.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"Summary\\nThis paper proposes an attribution method, linearly estimated gradient (LEG) for deep networks in the image setting.\\nThe paper also introduces a variant of the estimator called LEG-TV, which includes a TV penalty, and provides a \\ntheorem on the convergence rate of the estimator. The paper finds that the LEG attributions pass sanity\\nchecks. \\n\\nMy recommendation\\nOverall, I am recommending this paper as a weak accept. There are several points to address with\\nregards to the exposition and flow of the paper, which is my biggest issue with this paper. I believe\\nthe authors can address this point and I am willing to raise my point on this basis. The paper also\\nprovide some theoretical analysis of the proposed method, which is typically lacking for most\\nof the interpretation methods in this domain. \\n\\n\\nPossible Improvements\\n- The LEG method is not sufficiently motivated. Here, I am specifically referring to the functional\\nform of the estimated itself in definition 1. See the question section for some of the issues \\nI raised there. \\n- From figure 4, we see that the method passes the proposed sanity checks which seem like a \\nkey motivation for this work, however, the authors don't give an explanation for why this is the\\ncase.\\n- The paper notes that LEG can be estimated using an LP; it would have been great for the authors\\nto completely spell this out in the appendix or somewhere in the text. What is the exact form of the\\nLP? What are the constraints? \\n- As the authors know, the two evaluations presented in the paper: sanity checks, and the zeroing\\nout procedure (in figure 5) don't actually tell us which method is a good method, just rule out a method.\\nI would encourage the authors to design a toy task where the ground truth attributions are know, then\\ntrain a model to be 100 percent or so accurate on this task. You can then obtain LEG-TV estimates\\nfrom this model and compare to the ground-truth. \\n- I found the proof of lemma 1 confusing, the authors say it follows trivially, but I don't see it. For example, \\nthere should be a factor of 2 somewhere after taking the derivative wrt to $vec(g)$, but I don't see it. It is\\nfine for the authors to spell out the derivation here if possible. \\n-The paper ends quite abruptly with no conclusion or discussion. It would be great to include a wrap up\\nsection that puts the contributions into context. \\n- I get the sense that this method should be computationally intensive, though the paper says otherwise.\\nIt is fine for a method to be computationally intensive, but can the authors speak to this issue?\\n\\n\\nSome Questions\", \"definition_1\": \"I had a difficult time understanding this definition. What is $g$ here? I assume\\nit is the gradient based on the reference to the first order Taylor expansion. In addition, why\\nis the estimand squared? Further, What does it mean to take expectation wrt $F + x_0$. I was\\nparticularly confused by the last point, because F is a continuous distribution, while presumably\\n$x_0$ is the point of interest. 
The paper notes in several places that it can sample from $F + x_0$,\\nis this equivalent to sampling from $F$ and adding point $x_0$?\\n\\nWhat is LEG0 in figure 5?\\n\\nIs $\\\\kappa$ in your theorem 1, the condition number of the covariance matrix of the perturbation?\\n\\n\\nConclusion\\nOverall, the paper provides a nice method along with analysis on convergence rates and other statistical\\nproperties. Several of the key issues/questions I have about the paper are raised above. None of these\\nshould be dealbreakers but would require the authors to flesh out more details and possibly justify\\ncertain choices. In general, I think more effort should be put into the flow and writing of the paper.\\nOverall, this is an interesting contribution.\\n\\n\\n## After reading author responses\\nI believe the authors have clarified and improved the readability of the paper and clarified several of the questions\\nthat I had. I am raising my score to an accept. While I believe this is a valuable contribution to the sea of attribution methods that have now been published, like the authors noted, it is still not clearly if attribution methods as a whole\\nare useful of decision making or understanding of a model by either a generic end user or the model developer. \\nThis is a huge problem in this area that deserves significant attention. This said, the goal of this current paper is to\\ntake a step towards developing a principled method, so this is a step in that direction perhaps.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors propose a nice framework for interpreting differentiable models without access to model details. The framework also comes with two empirical estimates, with solid theoretical back-up.\\n\\nDefinition 1 in Section 2 provides a definition for the method to study (LEG). LEG is a nice generalization of multiple existing approaches. There also seems to be a lot more to investigate for future work based on this framework. \\n\\nAfter the authors proposed Definition 2 (seems to be a bit too straightforward, to be discussed later), Theorem 1 is proposed to characterize its convergence. It also provides guidelines for selecting the covariance matrix $\\\\Sigma$, as discussed in Section 4.\\n\\nOverall, the paper proposes a nice framework for model interpretation. It is well backed up by theory. It should clearly be accepted, although the paper is relatively weak in experiments. Below I will discuss some weaknesses and potential improvements.\", \"weakness\": \"1. Theorem 1 provides a nice characterization for the property of the proposed LEG-TV estimate. But there are two weaknesses. \\n\\nFirst, the authors should note that the proposed definition 2 is still a bit too straightforward without intuitive explanations. For example, what is the general form of 2D Fused Lasso? How is it applied to approximate Definition 1 / Equation (2) to get the expression in Definition 2? More details may be explained to help readers understand.\\n\\nSecond, it should be carefully discussed that the proof of Theorem 1 depends on existing work if the connection is close. For example, the authors may add in the main content \\\"the proof of Theorem 1 is built on top of ...\\\" or statements like that, probably with a short sketch / intuition on the entire proof if space permits.\\n\\n2. Can the authors be more specific on the time complexity and sample complexity of the proposed algorithms?\\n\\n3. It seems the experimental section of the paper is not satisfactory. Almost all results are single-image analysis, instead of systematic empirical analysis on an image data set. Without such analysis, it is hard to see the advantage of the proposed method over other comparing saliency maps and model-agnostic methods. For example, is the proposed method more sample-efficient than LIME or SHAP, or other more efficient procedures such as L(C)-Shapley? Is the proposed method really selecting meaningful segments for the model? (It may be tested by evaluating the log-odds-ratio after the top selected features are masked.) It is observed that LEG is able to select connected regions (Figure 6). The same phenomenon has been observed for C-Shapley. 
It may be helpful if the connection is discussed (such as the connection between sampling procedure of C-Shapley and the procedure imposed by LEG-TV).\\n\\nThe reviewer is not conditioning the \\\"accept\\\" decision on adding any of the suggested improvements on experiments given the limited rebuttal period and limited space, although it may benefit the paper of some of them can be addressed.\\n-----------------------------------------------------------------------------------------------------\\n--------------------------------------Post Author Response--------------------------------\\n-----------------------------------------------------------------------------------------------------\\n\\nThe authors have addressed most of my concerns. Theorem 1 and the sketch of the proof have been discussed. Also, complexity has been discussed (the definition of $s$ is embedded in the theorem and may be made clearer). Last but not least, authors have carried out experiments in a larger scale to get more stable results. \\n \\n The performance, in terms of log-odds-ratio, may not be as good as some of the comparing methods. However, the paper provides a creative framework for incorporating structure into feature attribution scores in model interpretation. So I will keep my score.\", \"a_typo\": \"The first paragraph of Section 3: \\\" less model evaluations.\\\" should be \\\" fewer model evaluations.\\\"\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work proposes a statistical framework for saliency estimation for black box\\ncomputer vision models, based on solving a convex program in (4). It also gives theoretical analysis on its consistency in Theorem 1, and run a few simulations to show the empirical performance of the proposed method.\\n\\nThe method proposed seems to be novel and reasonable. As a result, I tend to accept this paper. However, I would like to remark that solving (4) might be empirically difficult (depending on the size of the problem) even though it is convex and can be solved in polynomial time theoretically. I wonder if the authors could clarify the setup of their experiments (instead of writing \\\"The problem in equation 4 can be\\nsolved by any linear programming software, for which many open-source implementations exist\\\"), and if the author could remark on the empirical running time.\\n\\nAlso, I am not sure if \\\"Note that if L = 0, then the TV-penalization has no effect and the solution of the above procedure reduces to the empirical estimate,\\\" as the objective function is in L1 norm.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Author propose a statistical framework and a theoretically consistent procedure for saliency estimation that is close to the empirical solution and has sparse differences on the grid. I think the idea of the paper is quite interesting and the results are significant.\\n\\nIn particular, having an upper bound on the number of model evaluations to recover the region of\\nimportance with high probability is significant. Moreover, they have proposed a new perturbation scheme for estimation of gradients that works better than random perturbation schemes.\", \"i_have_some_questions_listed_below\": [\"Can the proposed approach be integrated with different types of Saliency map methods?\", \"Can you explain the differences of the proposed approach and the group feature formulation of https://arxiv.org/abs/1902.00407 ?\", \"Is there a quantitative way to assess the performance of the proposed approach?\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Authors proposes an interesting statistical method to detect saliency in images. Authors provides a specific estimator that is fast to compute and characterize its performance w.r.t. parameters.\\n\\nMy main concern is the experiment section. \\\"For computational efficiency, we compute saliency maps on a 28 by 28 grid (i.e. \\u03b3\\u02dc \\u2208 R 28\\u00d728) although the standard input for VGG-19 is 224 by 224. T\\\". Shouldn't we do the same thing for all baselines? The seemingly good sailency results might stem from this artifact.\"}"
]
} |
HylNWkHtvB | Domain-Independent Dominance of Adaptive Methods | [
"Pedro Savarese",
"David McAllester",
"Sudarshan Babu",
"Michael Maire"
] | From a simplified analysis of adaptive methods, we derive AvaGrad, a new optimizer which outperforms SGD on vision tasks when its adaptability is properly tuned. We observe that the power of our method is partially explained by a decoupling of learning rate and adaptability, greatly simplifying hyperparameter search. In light of this observation, we demonstrate that, against conventional wisdom, Adam can also outperform SGD on vision tasks, as long as the coupling between its learning rate and adaptability is taken into account. In practice, AvaGrad matches the best results, as measured by generalization accuracy, delivered by any existing optimizer (SGD or adaptive) across image classification (CIFAR, ImageNet) and character-level language modelling (Penn Treebank) tasks. This latter observation, alongside AvaGrad's decoupling of hyperparameters, could make it the preferred optimizer for deep learning, replacing both SGD and Adam. | [
"sgd",
"adaptive methods",
"adaptability",
"dominance",
"vision tasks",
"decoupling",
"rate",
"adam",
"simplified analysis",
"new optimizer"
] | Reject | https://openreview.net/pdf?id=HylNWkHtvB | https://openreview.net/forum?id=HylNWkHtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"rcRPTcLTD_",
"BJggfpXoiH",
"SJe0hjnFoH",
"HJeDIo3Kir",
"SJlv1snKjS",
"H1xb4q2toB",
"H1gL5F3Yir",
"SyltgthtiH",
"H1g0TuhKir",
"rklCY7VJ9r",
"HkgBFoJCKS",
"r1lrEWi3YB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725992,
1573760264303,
1573665718110,
1573665615061,
1573665502878,
1573665320839,
1573665165702,
1573665008885,
1573664966104,
1571926917550,
1571842940700,
1571758381062
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1540/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1540/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1540/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1540/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes an adaptive gradient method for optimization in deep learning called AvaGrad. The authors argue that AvaGrad greatly simplifies hyperparameter search (over e.g. ADAM) and demonstrate competitive performance on benchmark image and text problems. In thorough reviews, thorough author response and discussion by the reviewers (which are are all appreciated) a few concerns about the work came to light and were debated. One reviewer was compelled by the author response to raise their recommendation to weak accept. However, none of the reviewers felt strongly enough to champion the paper for acceptance and even the reviewer assigning the highest score had reservations. A major issue of debate was the treatment of hyperparameters, i.e. that the authors tuned hyperparameters on a smaller problem and then assumed these would extrapolate to larger problems. In a largely empirical paper this does seem to be a significant concern. The space of adaptive optimizers for deep learning is a crowded one and thus the empirical (or theoretical) burden of proof of superiority is high. The authors state regarding a concurrent submission: \\\"when hyperparameters are properly tuned, echoing our results on this matter\\\", however, it seems that the reviewers disagree that the hyperparameters are indeed properly tuned in this paper. It's due to these remaining reservations that the recommendation is to reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to response to reviewer 3 [*/2]\", \"comment\": \"Thank you very much for the detailed responses! These are very thorough in addressing the concerns raised, I appreciate (and sympathize with the work involved in) the quick turn around in adding additional experiments. I will definitely be raising my score.\"}",
"{\"title\": \"Comments on revision and concurrent submission\", \"comment\": \"We thank the reviewers.\\n\\nWe would like to emphasize that our main contribution is not Delayed Adam and its convergence analysis, but our newly-proposed optimizer, AvaGrad. Additionally, we establish the empirical finding that adaptive methods can outperform SGD across different tasks/datasets, when training distinct architectures -- even in settings where SGD has been universally adopted, such as image classification on ImageNet. These findings require proper tuning of \\\\epsilon, whose optimal values are as large as \\\\epsilon=10, a value 9 orders of magnitude larger than the recommended. Without the decoupling between the learning rate and \\\\epsilon offered by AvaGrad, proper tuning can be extremely costly: in total, we performed over 450 runs to assess the validation performance with different settings for \\\\alpha and \\\\epsilon for each adaptive method. \\n\\nWe have also revised the paper, incorporating changes suggested in the reviews:\\n\\n - Our experimental setup now includes AdaShift, following the same protocol we used for other adaptive methods. CIFAR and PTB results are shown in Table 1; ImageNet results will be added to the camera-ready version.\\n\\n - We re-tuned SGD on the large-scale experiments (Wide ResNet 28-10 on CIFAR, 4x1000 LSTM on Penn Treebank) to make the comparison against adaptive methods stronger. In all cases, the learning rate that performed the best was the same one chosen from the previous protocol (the one that performed best in the small-scale experiment), hence the numbers are the same.\", \"reviewers_may_want_to_take_note_of_the_following_concurrent_iclr_2020_submission_and_the_discussion_surrounding_it\": \"On Empirical Comparisons of Optimizers for Deep Learning\", \"https\": \"//openreview.net/forum?id=HygrAR4tPS\\n\\nIt makes similar observations about the superiority of existing adaptive optimizers when hyperparameters are properly tuned, echoing our results on this matter. Our contributions, however, go beyond this mere observation, as we also propose, theoretically motivate, and experimentally evaluate a better adaptive optimizer: AvaGrad.\\n\\nFinally, we are preparing a code repository that we will make publicly available in the near future.\"}",
"{\"title\": \"Response to reviewer 4 [1/2]\", \"comment\": \"Thank you for the review and your comments. We address your points individually below \\u2014 please let us know if we can clarify or address any further concerns.\\n\\n\\n\\n\\u201cI am not sure if analyzing RMSProp/Adam in this setting should be considered a significant contributions of the paper\\u201d\\n\\nThe convergence rate of Delayed Adam is not the main point of the paper. Its form \\u2014 which depends explicitly on the norm of \\\\eta \\u2014 is what motivates the design of AvaGrad, hence the role of the analysis was to, first of all, inspire the design of a better adaptive method. AvaGrad, a new adaptive algorithm with different characteristics and increased performance, is the major contribution of the paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cWas momentum used with SGD?\\u201d\\n\\nWe used SGD with nesterov momentum of 0.9, closely following Zagoruyko & Komodakis for the CIFAR Wide ResNet experiments, Merity et al. for the Penn Treebank LSTM experiments, and He et al. for the ResNet ImageNet experiments.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cHow is the optimal hyperparameters (learning rate and damping, i..e, epsilon, parameters) selected?\\u201d\\n\\nWe first performed extensive grid search on smaller-scale experiments, training a Wide ResNet 28-4 on CIFAR, and a 4-layer, 300 units per layer LSTM on Penn Treebank, evaluating the performance on the validation set \\u2014 the results for Adam and AvaGrad are given in Figure 2 and Figure 3. Next, we used the values that yielded the best validation performance to train a Wide ResNet 28-10 on CIFAR, a 4-layer, 1000 units LSTM on Penn Treebank, and a ResNet 50 on ImageNet, achieving the results presented in Table 1. 
The protocol is described in Section 6.2 (paragraph 3 and the two following bullet points) and Section 6.3 (paragraph 2).\\n\\nIn particular, for CIFAR we searched over:\\n\\n - For AvaGrad and AvaGradW:\\nLearning rate in {0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 100.0, 500.0, 1000.0, 5000.0}\\nEpsilon in {1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0}\\n\\n - For Adam, AMSGrad, AdamW, AdaBound:\\nLearning rate in {0.00005, 0.00001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 100.0, 500.0}\\nEpsilon in {1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0}\\n\\nThe best learning rate and epsilon, for each method, were:\\n\\nAdam (0.1, 0.1)\\nAMSGrad (0.1, 0.1)\\nAvaGrad (1.0, 10.0)\\nAvaGradW (1.0, 0.1)\\nAdamW (0.5, 10.0)\\nAdaBound (0.005, 0.01)\\n\\nFor Penn Treebank, we searched over:\\n\\n - For AvaGrad and AvaGradW:\\nLearning rate in {0.2, 2.0, 20.0, 200.0, 2000.0}\\nEpsilon in {1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0}\\n\\n - For Adam, AMSGrad, AdamW, AdaBound:\\nLearning rate in {0.0002, 0.002, 0.02, 0.2, 2.0, 20.0}\\nEpsilon in {1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0}\\n\\nThe best learning rate and epsilon (\\\\alpha, \\\\epsilon), for each method, were:\\n\\nAdam (0.002, 1e-8)\\nAMSGrad (0.002, 1e-8)\\nAvaGrad (200, 1e-8)\\nAvaGradW (200, 1e-6)\\nAdamW (0.002, 1e-5)\\nAdaBound (0.002, 1e-8)\\n\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cDo any of these conclusions change when trying out a very small or very large batch size?\\u201d\\n\\nAlthough an interesting question, it was out of scope with respect to our initial goal of evaluating whether adaptive methods can outperform SGD using the recommended hyperparameters, i.e. the batch sizes used in Zagoruyko & Komodakis, Merity et al., and He et al. Unfortunately we cannot re-run our experiments by the rebuttal deadline, but we do believe batch size would be interesting to investigate. For the camera-ready version, we will generate heatmaps, like the ones in Figure 2, that explore batch size as a hyperparameter.\"}",
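For concreteness, the tuning protocol described in the comment above amounts to a two-dimensional grid search over the learning rate and epsilon, selecting by validation performance. Below is a minimal sketch of that loop; `train_and_eval` is a hypothetical placeholder for a full training run (e.g., a Wide ResNet 28-4 on CIFAR), not the authors' actual code.

```python
import itertools
import random

def train_and_eval(lr: float, eps: float) -> float:
    """Placeholder for 'train with (lr, eps), return validation accuracy'.
    In the actual protocol this would be a full small-scale training run."""
    random.seed(hash((lr, eps)) % 2**32)
    return random.random()

lrs = [5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 0.1, 0.5, 1.0, 5.0, 10.0]
epss = [10.0 ** k for k in range(-8, 3)]  # 1e-8 ... 1e2, as in the grids above

# Exhaustive search: one run per (lr, eps) pair, keep the best validation score.
best = max(((lr, eps, train_and_eval(lr, eps))
            for lr, eps in itertools.product(lrs, epss)),
           key=lambda t: t[2])
print("best (lr, eps, val_acc):", best)
```

This quadratic number of runs is exactly what motivates AvaGrad's claimed decoupling: if the two hyperparameters do not interact, the search can be split into two independent line searches.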
"{\"title\": \"Response to reviewer 4 [2/2]\", \"comment\": \"\\u201cusing the same optimal hyperparams as the WRN-28-4 task (...) makes the other claim about AvaGrad generalizing just as well as SGD weaker\\u201d\\n\\nThis is an important observation. We re-tuned the learning rate of SGD for CIFAR 10/100 with the Wide ResNet 28-10, and for PTB with the 4x1000 LSTM, yielding the following performance for each learning rate:\\n\\nC10 (val error):\\n1.0: \\t8.84%\\n0.5: \\t4.98%\\n0.1: \\t3.86%\\n0.05: \\t4.20%\\n0.01: \\t5.14%\\n\\nC100 (val error):\\n1.0: \\t37.13%\\n0.5: \\t23.25%\\n0.1: \\t19.05%\\n0.05: \\t19.92%\\n0.01: \\t22.51%\\n\\nPTB (bits per character, lower is better):\\n100.0: \\t1.473 bpc\\n20.0: \\t1.238 bpc\\n10.0: \\t1.253 bpc\\n2.0:\\t\\t1.298 bpc\\n1.0: \\t\\t1.348 bpc\\n\\nThe best learning rates, even when re-tuning SGD on the large experiments, were the same as the ones we used to get the results in Table 1. Performing the same level of tuning we did for the Wide ResNet 28-4 and 4x300 LSTM models for adaptive methods is unfeasible, as it consisted of over 150 runs for each algorithm. Since adaptive methods, even when not re-tuned, still outperform a re-tuned SGD, our findings remain consistent.\\n\\nIn practice, it is common to tune hyperparameters in smaller models and translate these to large-scale experiments \\u2014 moreover, the hyperparameters used for SGD coincide with the ones extensively used in the literature, e.g. the ResNet (He et al.\\u201915), DenseNet (Huang et al.\\u201916), Wide ResNet (Zagoruyko & Komodakis\\u201916) and ResNeXt (Xie et al.\\u201916) papers.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cOne of the key claims that adaptive gradient methods generalize better when using a large damping (epsilon) parameter has appeared in previous papers as well [2, 3].\\u201d\\n\\nThe values for epsilon explored in [2] and [3] are at most 1e-3, a value 100 times smaller than the one we found to be optimal for Adam/AMSGrad and 10,000 times smaller than the optimal one for AvaGrad/AdamW.\", \"to_clarify_our_contribution\": \"we observe that the optimal hyperparameters for popular adaptive methods can be many orders of magnitude larger than the ones typically explored in the literature. Tuning them to such extreme ranges had not been done before due to the computational costs of grid search, especially since the optimal values for epsilon are strongly coupled with the learning rate (Figure 2 and 3, left plots) in a non-linear manner (more specifically, there are two visible \\u2018regimes\\u2019 that determine how epsilon and the learning rate interact).\\n\\nOur proposed method, AvaGrad, effectively decouples the two (Figure 2 and 3, right plots), hence hyperparameter tuning can be broken into two line searches. The precise interaction between the learning rate and epsilon, as shown in our figures, has not appeared in previous works, nor has an effective method to tune both hyperparameters without yielding quadratic time complexity. 
Lastly, showing that adaptive methods \\u2014 even Adam \\u2014 can outperform SGD on ImageNet, without extra caveats such as involved warmup schedules, is an important and novel experimental result.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cit is not clear whether the worse rate is due to the analysis\\u201d / \\u201ccan be confusing to the reader, and it is best to explicitly mention the setting under which the different results were derived\\u201d\\n\\nWe agree that these can be sources of confusion to the readers, and we have clarified this in the revised version of the paper.\"}",
"{\"title\": \"Response to reviewer 3 [1/2]\", \"comment\": \"Thank you for the review and your comments. We address your points individually below \\u2014 please let us know if we can clarify or address any further concerns.\\n\\n\\n\\n\\u201cAdaShift was published at ICLR last year and seems to include a closely related analysis and update rule\\u201d\\n\\nThe differences in the analysis and update rule are significant. AvaGrad, our main contribution, includes the convergence fix of Delayed Adam \\u2014 a delay in the computation of v_t \\u2014 but, more significantly, also applies a new adaptive scaling factor to gradients. This new adaptive scaling rule is responsible for AvaGrad\\u2019s superior performance on real datasets and its better hyperparameter separation.\\n\\nContrasting Delayed Adam with AdaShift, the update rule in AdaShift applies a delay in the computation of v_t and, at the same time, a limited horizon on the computation of m_t. If we set n=1 and \\\\phi as the identity function, then we recover an update rule that is similar to Delayed Adam, but with \\\\beta_1 = 0 (that is, we lose first-order momentum). In the general setting where n>1, AdaShift requires storing a history of the past n gradients, and there is little relation to the update rule of Delayed Adam.\\n\\nAvaGrad, and not Delayed Adam, is our actual contribution in terms of a new adaptive method. AvaGrad\\u2019s key significance is that the parameter-wise learning rates \\\\eta_t are normalized, which is precisely what decouples the learning rate and epsilon. In our presentation, Delayed Adam serves as a motivation for the design of AvaGrad: the normalization of \\\\eta is inspired by the convergence rate of Delayed Adam.\\n\\nIn terms of analysis, Zhou et al. analyze AdaShift in the online convex optimization framework, while we provide convergence rates in the smooth stochastic non-convex setting: both the implications and the proof technique differ significantly between the two.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cCould you further clarify the differences between the two\\u201d\\n\\nFor any update rule where the gradient and \\\\eta are uncorrelated, we can use Theorem 2 (considering standard assumptions) to assure convergence regardless of how \\\\eta is computed. On the other hand, if \\\\eta has a different form, then having the gradient and v_t to be uncorrelated might not be enough to guarantee convergence. In other words, to guarantee convergence, correcting for the gradient and \\\\eta is sufficient for any functional form of \\\\eta, while correcting for the gradient and v_t is might not be sufficient (it is sufficient, however, when \\\\eta_t is only a function of v_t).\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201ctheir Theorem 1 could be compared to yours\\u201d\\n\\nAlthough this might be a minor concern, the differences are significant and important: their Theorem 1 is a statement about regret in the online convex optimization framework, while our Theorem 1 is about stationarity in the stochastic smooth non-convex setting.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cincluding AdaShift in your experiments would be very useful for demonstrating their differences\\u201d\\n\\nThanks for the suggestion -- we have now added AdaShift to our experiments. 
Following the same protocol we used for all adaptive methods, we first performed grid search over the learning rate and epsilon on CIFAR with a Wide ResNet 28-4, where AdaShift performed best with lr = epsilon = 1.0. Next, we trained a Wide ResNet 28-10, where AdaShift achieved 4.08% and 18.88% error on CIFAR10 and CIFAR100 (outperforming every adaptive method except for AvaGrad on the latter). Following the same protocol on Penn Treebank, it yielded a bpc of 1.274 with a 4-layer LSTM with 1000 units per layer. We won\\u2019t have ImageNet results for AdaShift by the rebuttal deadline, but we will add them to the camera-ready version of the paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cretuning across experiments\\u201d\\n\\nThis is an important observation. We re-tuned the learning rate of SGD for CIFAR 10/100 with the Wide ResNet 28-10, and for PTB with the 4x1000 LSTM, yielding the following performance for each learning rate:\\n\\nC10 (val error):\\n1.0: \\t8.84%\\n0.5: \\t4.98%\\n0.1: \\t3.86%\\n0.05: \\t4.20%\\n0.01: \\t5.14%\\n\\nC100 (val error):\\n1.0: \\t37.13%\\n0.5: \\t23.25%\\n0.1: \\t19.05%\\n0.05: \\t19.92%\\n0.01: \\t22.51%\\n\\nPTB (bits per character, lower is better):\\n100.0: \\t1.473 bpc\\n20.0: \\t1.238 bpc\\n10.0: \\t1.253 bpc\\n2.0:\\t\\t1.298 bpc\\n1.0: \\t\\t1.348 bpc\\n\\nThe best learning rates, even when re-tuning SGD on the large experiments, were the same as the ones we used to get the results in Table 1. Performing the same level of tuning we did for the Wide ResNet 28-4 and 4x300 LSTM models for adaptive methods is infeasible, as it consisted of over 150 runs for each algorithm. Since adaptive methods, even when not re-tuned, still outperform a re-tuned SGD, our findings remain consistent.\"}",
"{\"title\": \"Response to reviewer 3 [2/2]\", \"comment\": \"\\u201cCan you comment on why this diminished adaptivity would be desirable in the worst case scenario analyzed?\\u201d\\n\\nExactly characterizing why adaptivity is undesirable in theory is beyond the scope of this paper, but the same observation is present in previous papers in the literature, for example:\\n\\nStaib et al. - Escaping Saddle Points with Adaptive Gradient Methods (check Section 5.1)\\nDe et al. - Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cWas SGD with momentum used?\\u201d\\n\\nWe used SGD with a Nesterov momentum of 0.9, following seminal works such as the ResNet (He et al.\\u201915), DenseNet (Huang et al.\\u201916), Wide ResNet (Zagoruyko & Komodakis\\u201916) and ResNeXt (Xie et al.\\u201916) papers.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cWas a validation set used for CIFAR?\\u201d\\n\\nWe used a validation set of 5k examples sampled from the training set for the hyperparameter search and the results in Figure 2 and 3. For the final results in Table 1, we used the actual test set composed of 10k images. We have updated the paper to clarify this.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cI would be curious if the trend continues for more extreme values of alpha and epsilon\\u201d:\\n\\nWe have performed preliminary experiments and observed that the trend indeed continues for extreme values of alpha and epsilon when training with Adam or AMSGrad. In particular, the performance with alpha = c * epsilon for some fixed c is nearly identical regardless of alpha and epsilon, as long as they are large enough. For a motivation why this happens, note that if epsilon is large, then \\\\eta_t = 1 / (\\\\sqrt(v_t) + \\\\epsilon) is approximately 1 / \\\\epsilon. If we set \\\\alpha = c * \\\\epsilon, then the update rule becomes w_{t+1} = w_t - c * \\\\epsilon * g_t / \\\\epsilon = w_t - c * g_t, regardless of \\\\alpha and \\\\epsilon.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cit is much more common to use SGD with momentum\\u201d\\n\\nBy \\u2018vanilla SGD\\u2019 we meant SGD with momentum. We use SGD with momentum for all experiments in the paper. We have changed \\u2018vanilla SGD\\u2019 to \\u2018SGD\\u2019 in the paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cIt would be nice to highlight in color the diff from vanilla Adam in the Algorithm sections.\\u201d\\n\\nWe have put the differences in red in the revised version of the paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cIt is not super clear from the text how in eq 26 you get \\\\sum{E[f(w_t)|Z]} = f(w_1)\\u201d\\n\\nHere, f(w_1) - E[f(w_{T+1})|Z] is the result of a telescoping sum (more specifically, the sum in the right-hand side of the first line in eq 26). First, we have that E_S[E_{s_t}[f(w_{t+1})]|Z] = E_S[f(w_{t+1})] due to the assumption. 
With this, the sum has the form \\\\sum_{t=1}^T a_t - a_{t+1} (where a_t is the expectation of f(w_t)) and the sum of all terms equals to a_1 - a_{T+1} by telescoping sum. Since w_1 does not depend on how points are sampled, we have E[f(w_1)] = f(w_1), finally yielding f(w_1) - E[f(w_{T+1})|Z]. We have updated the paper to make this clearer, adding an additional step in the derivation.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cI believe the H in the leftmost term in the last line of eq 33 should be an L.\\u201d\\n\\nThis was indeed a typo, which we have fixed in the revised version \\u2014 thanks for pointing it out.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cauthors commonly use polynomial, linear, exponential, cosine, or other learning rate decay/warmups\\u201d\\n\\nWe have changed \\u2018typically\\u2019 to \\u2018not uncommonly\\u2019 to be more precise. In this statement we were referring specifically to the papers that presented our baselines (Zagoruyko & Komodakis, Merity et al.), which all use a step-wise learning rate schedule.\"}",
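The large-epsilon observation in the comment above (that setting alpha = c * epsilon makes the Adam-style update approach a plain SGD step of size c) is easy to check numerically. The following is a small sanity check with synthetic gradient and second-moment values, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)     # stand-in gradient
v = rng.uniform(0.0, 1.0, 1000)   # stand-in second-moment estimate
c = 0.1                           # fixed ratio alpha = c * epsilon

for eps in [1.0, 10.0, 100.0, 1000.0]:
    alpha = c * eps
    step = alpha * g / (np.sqrt(v) + eps)     # eta_t = 1 / (sqrt(v_t) + eps)
    # Deviation from the plain SGD step c * g shrinks as eps grows.
    print(eps, np.max(np.abs(step - c * g)))
```

Since the deviation equals c * |g| * sqrt(v) / (sqrt(v) + eps), it vanishes as eps grows, matching the claim that performance along the line alpha = c * epsilon becomes independent of the pair once both are large.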
"{\"title\": \"Response to reviewer 2 [1/2]\", \"comment\": \"Thank you for the review. We address your points individually below \\u2014 please let us know if we can clarify or address any further concerns.\\n\\n\\n\\n\\\"If the Adam-type algorithms are the delayed version in Table 1?\\u201d\\n\\nNo, the adaptive methods in Table 1 are not the delayed versions. In Table 1, the only method that applies a delay in the computation of v_t is AvaGrad (and, consequently, AvaGradW). Delayed Adam was only used in the synthetic experiment of Figure 1, for the purpose of theoretical motivation and evaluation.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cIt is not compatible with AdamW\\u201d\", \"there_is_no_issue_of_compatibility\": \"we are proposing a new optimizer, not modifying existing ones. We do not modify Adam or AdamW in any form for the results in Table 1: other than AvaGrad and AvaGradW, we use PyTorch built-in or official implementations for each method, and the results were achieved when running with hyperparameters found after extensive grid search (Figures 2 and 3).\\n\\nThe optimizer referred as \\u2018AvaGradW\\u2019 in Table 1, which yielded the best results on Penn Treebank, is the result of applying weight decay as in Loshchilov & Hutter to AvaGrad, our newly proposed optimizer. \\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cAre you still using bias correction in the proposed method?\\u201d\\n\\nWe use bias correction similarly to Adam, dividing m_t by 1 - \\\\beta_1^t, and v_{t-1} by 1 - \\\\beta_2^{t-1}.The norm of v_{t-1}, used to scale the learning rate, is computed from the bias-corrected v_{t-1}.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cDo you update the model for the first step?\\u201d\\nWe do not update the model in the first step.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201cThe results on the image datasets seem too good to be true.\\u201d\\n\\nOur results are correct as reported and highlight the value of our proposed optimizer, which improves results across different datasets and architectures.\\n\\nTo achieve improved results with existing optimizers, we had to search over a hyperparameter space larger than typically employed, which may be why those best case results appear surprisingly good. See an example run with hyperparameter settings where Adam outperforms SGD at the end of our reply.\"}",
"{\"title\": \"Response to reviewer 2 [2/2]\", \"comment\": \"\\u201cImplementation Issue\\u201d\\n\\nThe learning rates of Adam and Delayed Adam are not equivalent. As an informal motivation to see this, consider a sequence of many samples with small gradients, followed by a sample with a large gradient. For Adam, the large gradient will be immediately used to update v_t, which will increase, hence decreasing \\\\eta_t and consequently also the step size. On the other hand, for Delayed Adam, the large gradient will not affect \\\\eta_t, but only \\\\eta_{t+1}, hence the step size will be larger than the one which Adam would have computed. In practice, to achieve similar behavior with Delayed Adam, one should use a smaller learning rate than the one used with Adam.\\n\\nWe have performed runs using the same codebase (https://github.com/LiyuanLucasLiu/RAdam/tree/master/cifar_imagenet) so that results can be more easily replicated. For Delayed Adam, we performed the steps 2 to 4 in your review so that we have a matching implementation (which is equivalent to the one used in the experimental results in our paper), referred to as \\u2018dadam\\u2019 when chosen by command-line arguments below. To control the epsilon parameter, we added a command-line argument args.eps:\\n\\nparser.add_argument('--eps', default=1e-8, type=float, help='epsilon parameter for adaptive methods')\\n\\nWhich is used to instantiate the adaptive methods, e.g.:\\n\\noptimizer = optim.Adam(model.parameters(), lr=args.lr, betas=(args.beta1, args.beta2), weight_decay=args.weight_decay, eps=args.eps)\\n\\nTo facilitate reproducibility, we remove cudnn.benchmark=True in cifar.py (right after model = torch.nn.DataParallel(model).cuda()), replacing it with:\\n\\ncudnn.deterministic = True\\ncudnn.benchmark = False\\n\\nWe also use a manual seed 0 for all experiments. The commands, followed by the given results, were:\\n\\nAdam, lr=0.001, eps=1e-8\\npython cifar.py -a resnet --depth 20 --epochs 164 --schedule 81 122 --gamma 0.1 --wd 1e-4 --optimizer adam --beta1 0.9 --beta2 0.999 --checkpoint ./logdir --gpu-id 0 --model_name adam_001 --lr 0.001 --manualSeed 0 --eps 1e-8\", \"best_acc\": \"91.91%\\n\\nTo further facilitate reproducibility, we are preparing a codebase to be publicly available with our implementation of AvaGrad and the code to run our experiments.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors present a new adaptive gradient method AvaGrad. The authors claim the proposed method is less sensitive to its hyperparameters, compared to previous algorithms, and this is due to decoupling the learning rate and the damping parameter.\\n\\nOverall, the paper is well written, and is on an important topic. However, I have a few concerns about the paper, which I will list below.\\n\\n1. The fact that adaptive gradient methods converge with a fast rate when the sum in the denominator is taken till the t-1th iterate has appeared in previous papers before [1]. The convergence rate analysis for this case is fairly simple, and I am not sure if analyzing RMSProp/Adam in this setting should be considered a significant contributions of the paper.\\n\\n2. The proposed algorithm AvaGrad is a simple but interesting idea. I have a number questions about the experimental evaluation though, which makes it hard for me to evaluate the significance of the results presented:\\n\\na) Was momentum used with SGD?\\n\\nb) How is the optimal hyperparameters (learning rate and damping, i..e, epsilon, parameters) selected?\\n\\nc) Do any of these conclusions change when trying out a very small or very large batch size?\\n\\nd) I am not convinced that using the same optimal hyperparams as the WRN-28-4 task on the WRN-28-10 and ResNet50 models is a reasonable experiment. Why is this a good idea? While this does support the claim that adaptive gradient methods are less sensitive to hyperparameter settings, but makes the other claim about AvaGrad generalizing just as well as SGD weaker?\\n\\ne) One of the key claims that adaptive gradient methods generalize better when using a large damping (epsilon) parameter has appeared in previous papers as well [2, 3].\\n\\n\\nOverall, in my view, this is a borderline paper mostly because I think a number of the results presented have been shown in other recent papers. My score reflects this. However, I think decoupling the learning rate and the damping parameter by normalizing the preconditioner is a simple but interesting idea, and I am willing to increase my score based on the discussion with the authors and other reviewers.\\n\\n\\n[1] X. Li and F. Orabona. On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes. In AISTATS 2019\\n[2] M. Zaheer, S. Reddi, D. Sachan, S. Kale, and S. Kumar. Adaptive methods for nonconvex optimization. in NeurIPS 2018.\\n[3] S. De, A. Mukherjee, and E. Ullah. Convergence guarantees for rmsprop and adam in non-convex optimization and an empirical comparison to nesterov acceleration. arXiv:1807.06766, 2018.\", \"a_few_more_minor_comments\": \"1. The authors say that methods like AMSGrad fail to match the convergence rate of SGD. But this statement seems misleading since it is not clear whether the worse rate is due to the analysis (which gives an upper bound) or the algorithm?\\n\\n2. In the related work section, the authors discuss convergence rates of algorithms with constant and decreasing step sizes together. 
This can be confusing to the reader, and it is best to explicitly mention the setting under which the different results were derived.\\n\\n=======================================\", \"edit_after_rebuttal\": \"I thank the authors for the detailed response and for the updated version of the paper. After discussion with other reviewers, we are still not convinced that the hyperparameter tuning in the experiments (especially the baselines) is rigorous enough. This is especially important given that this paper proposes a new optimizer. We are also concerned about the novelty of the results, and believe most of these results have appeared in previous work. So I am not increasing my score. I would encourage the authors to do a more rigorous experimental evaluation of the proposed algorithm.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper the authors develop variants of Adam which corrects for the relationship of the gradient and adaptive terms that causes convergence issues, naming them Delayed Adam and AvaGrad. They also provide proofs demonstrating they solve the convergence issues of Adam in O(1/sqrt(T)) time. They also introduce a convex problem where Adam fails to converge to a stationary point.\\n\\nThis paper is clearly written and has reasonable experimental support of its claims. However in terms of novelty, AdaShift was published at ICLR last year (https://openreview.net/forum?id=HkgTkhRcKQ) and seems to include a closely related analysis and update rule of your proposed optimizers. In AdaShift instead of correcting for the correlation between the gradient and eta_t, they correct for the relationship between the gradient and the second moment term v_t. Could you further clarify the differences between the two, both in your approach to deriving the new update rule and the algorithms themselves? Additionally, their Theorem 1 could be compared to yours, but noting the differences for these seems less important. If the optimizers are unique enough, including AdaShift in your experiments would be very useful for demonstrating their differences.\\n\\nRegarding experiments, while it is true that adaptive methods are supposed to be less \\u201csensitive\\u201d to hyperparameter choices, the limits of the feasible ranges for each hyperparameter could vary drastically across problems (especially, as previously demonstrated, across different batch sizes.) Thus, not retuning across experiments seems like it could negatively affect performance for any of the transferred hyperparameter settings. Instead of demonstrating hyperparameter insensitivity by carrying over hyperparameter settings, one could instead retune for each problem and show that a higher percent of hyperparameter combinations result in the same/similar best performance (similar to what is done in Fig. 2, but also showing a (1-dimensional) SGD version which would presumably contain fewer high performing settings.)\", \"some_additional_comments\": \"-The contribution of Theorem 1 is a nice addition to the literature.\\n-Your tuning of epsilon is great! I believe more papers should include epsilon in their hyperparameter sweeps.\\n-Scaling epsilon with step size makes sense when considering that Adam is similar to a trust region method, where epsilon is inversely proportional to the trust region radius. However, in section 5 implying that epsilon should be as large as possible in the worst case seems like an odd result given that this would always diminish your second moment term as much as possible, defeating the point of the additional Adam-like adaptivity. Can you comment on why this diminished adaptivity would be desirable in the worst case scenario analyzed?\\n-The synthetic toy problem is much appreciated, more papers should start with a small, interpretable experiment.\\n-Was SGD with momentum used? If not, this may not be a fair comparison, as I believe it is much more common to use momentum with SGD. If momentum was used, was the momentum hyperparameter tuned? 
If not, this may be advantageous to the Adam-based methods, as they have more versatile adaptability and thus may not need as much care with their selection of momentum values.\\n-Was a validation set used for CIFAR? You note in Appendix C that there are 50k train and 10k test. You mention validation performance in the main text, so this is just double-checking.\\n-The demonstration in Figures 2 and 3 of decoupling the step size and epsilon is interesting! Given that the best-performing values seem to be on the edges of the ranges tested, I would be curious if the trend continues for more extreme values of alpha and epsilon (one could sparsely search along the predicted trendlines.)\", \"nits\": \"-\\u201cVanilla SGD is still prevalent, in spite of the development of seemingly more sophisticated adaptive alternatives...\\u201d This could use some citations to back up the claim, because as far as I know it is much more common to use SGD with momentum and is actually rare to use vanilla SGD (the DenseNets and ResNets citations use momentum=0.9.)\\n-It would be nice to highlight in color the diff from vanilla Adam in the Algorithm sections.\\n-It is not super clear from the text how in eq 26 you get \\\\sum{E[f(w_t)|Z]} = f(w_1)\\n-I may be misreading something, but I believe the H in the leftmost term in the last line of eq 33 should be an L.\\n-In Section 5, \\u201cfor a fixed learning rate (as is typically done in practice, except for discrete decays during training)\\u201d seems like an overly broad claim, given that authors commonly use polynomial, linear, exponential, cosine, or other learning rate decay/warmups. Granted, for some CIFAR and ImageNet benchmarks there are more common discrete learning rate schedules, but that does not seem to be the overwhelmingly prevalent technique.\\n\\nOverall, while this area of analyzing Adam and proposing modifications is a popular and crowded subject, I believe this paper may contribute to it if my concerns are addressed. While I currently do not recommend acceptance, I am open to changing my score after considering the author comments!\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper proposes a new adaptive method, which is called AvaGrad. The authors first show that Adam may not converge to a stationary point for a stochastic convex optimization in Theorem1, which is closely related to [1]. They then show that by simply making $eta_t$ to be independent of the sample $s_t$, Adam is able to converge just like SGD in Theorem2. Theorem2 follows the standard SGD techniques. Next, they propose AVAGRAD, which is based on the idea of getting rid of the effect of $\\\\epsilon$.\", \"strength\": \"The experiment results are impressive. They show that Adam can outperform SGD on vision tasks.\\nRecently, people have found out that $\\\\epsilon$ is a very sensitive hyper-parameter. It is good to see some research directly addresses this problem.\", \"weakness\": \"The word \\\"domain\\\" is confusing. \\nIf the Adam-type algorithms are the delayed version in Table 1?\\nIt is not compatible with AdamW. \\nThe results on the image datasets seem too good to be true.\", \"implementation_issue\": \"***Many implementation details in the below discussion are different from the paper (e.g. hyperparameters and network architecture). So the following experiment results may not be used for assessment of the quality of the proposed method.***\\nI tried the proposed Delayed Adam on CIFAR-10 using the codebase in (https://github.com/LiyuanLucasLiu/RAdam/tree/master/cifar_imagenet). The performance seems the same as Adam. Delayed Adam even leads to a *divergence* problem, especially with a large learning rate (0.03). The divergence problem never happens when using Adam and AdamW with the same hyperparameters.\", \"implementation_details\": \"1. Replace the optimizer with the original PyTorch Adam implementation. (https://github.com/pytorch/pytorch/blob/master/torch/optim/adam.py) \\n2. Swap line 96 and 108 as suggested in the paper. \\n3. Modified line 89 (bias_correction2=1 - beta2 ** (state['step']-1)) \\n4. Do not run line 97-107 when state['step']==1. \\n5. Run the following code: python cifar.py -a resnet --depth 20 --epochs 164 --schedule 81 122 --gamma 0.1 --wd 1e-4 --optimizer adam --beta1 0.9 --beta2 0.999 --checkpoint ./logdir --gpu-id 0 --model_name adam_003 --lr 0.03\\n\\nIf the authors can provide more implementation details, I would promote my rating. \\ne.g., \\n1. Are you still using bias correction in the proposed method? If so how do you use them?\\n2. Do you update the model for the first step?\", \"reference\": \"[1] On the convergence of adam and beyond\"}"
]
} |
ByeVWkBYPH | Neural Networks for Principal Component Analysis: A New Loss Function Provably Yields Ordered Exact Eigenvectors | [
"Reza Oftadeh",
"Jiayi Shen",
"Zhangyang Wang",
"Dylan Shell"
] | In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors. This downside originates from an invariance that cancels out in the global map. Here, we prove that our loss function eliminates this issue, i.e. the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. For this new loss, we establish that all local minima are global optima and also show that computing the new loss (and also its gradients) has the same order of complexity as the classical loss. We report numerical results on both synthetic simulations and a real-data PCA experiment on MNIST (i.e., a 60,000 x 784 matrix), demonstrating our approach to be practically applicable and to rectify previous LAEs' downsides. | [
"Principal Component Analysis",
"Autoencoder",
"Neural Network"
] | Reject | https://openreview.net/pdf?id=ByeVWkBYPH | https://openreview.net/forum?id=ByeVWkBYPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"HnAMBCWpHq",
"S1gCqzfYsH",
"HkeAmzftor",
"r1g7nlftiS",
"HJebJxzKiH",
"Byl_DpEy9S",
"SylQ9IDCtB",
"HkxnO6hitS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725961,
1573622421951,
1573622310416,
1573621931160,
1573621720921,
1571929440430,
1571874443343,
1571700083995
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1539/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1539/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1539/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Quoting from R3: \\\"This paper proposes and analyzes a new loss function for linear autoencoders (LAEs) whose minima directly recover the principal components of the data. The core idea is to simultaneously solve a set of MSE LAE problems with tied weights and increasingly stringent masks on the encoder/decoder matrices.\\\" With two weak acceptance recommendations and a recommendation for rejection, this paper is borderline in terms of its scores.\\n\\nThe approach and idea are interesting. The main shortcoming of the paper, as highlighted by the reviewers, is that the approach and theoretical analysis are not properly motivated to solve an actual problem faced in real-world data. The approach does not provide a better algorithm for recovering the eigenvectors of the data, nor is it proposed as part of a learning framework to solve a real-world problem. Experiments are shown on synthetic data and MNIST. As a stand-alone theoretical result, it leaves open questions as to the proposed utility.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3- Part 1/2\", \"comment\": \"This review has been extremely useful---responding to it has broadened our understanding of our submission and enabled us to identify several connections that were heretofore less clear to us.\\n\\nWe would love to hear your feedback on the following discussions inspired by your review, and we will be more than happy to incorporate them into the revised paper if you are in favor of so.\\n\\nBefore addressing the reviewer's objections, we consider two key points worthy of emphasizing, that we will use throughout this response:\\n\\n(i) Corollary 1 and Remark (5) which follows it: Based on the corollary any critical point of our loss $L$ is a critical point of the original MSE loss but not vice versa. In light of Theorem 2 this means that $L$ eliminates those undesirable global minima of the original loss (i.e., exactly those which suffer from the invariance).\\n\\nAbove describes advantage owing to the difference from the original loss, but there is also further profit gained from their similarities:\\n\\n(ii) Consider the side by side comparison of our loss and MSE loss, along with their respective gradients, provided on pages 4 and 5 of the paper. Any gradient written for the original loss can be turned simply into the gradient for $L$ by just two component-wise matrix products with constant matrices. Moreover, by Lemma 1 the complexity of evaluating $L$ itself is of the same order as MSE loss too. Given the many repeated terms in the formulas, a careful implementation will eliminate much of the added complexity.\\n\\nBuilt on the above two points, the clarification for the two claims mentioned in the review are then as follows.\\n\\n(1) Theoretical contribution: \\nWe believe point (i) alone is a substantial and important theoretical contribution: Analyzing the loss surface for various architectures of linear/non-linear neural networks is a highly active and prominent area of research. Many of these works (e.g. [2, 3, 4, 5]) start by citing the seminal results of [1] for shallow LAEs before extending it to more complex networks. However, most work retains the original MSE loss, and they prove the same critical point characterization of [1] for their specific architecture of interest. Most notably [2] extends the results of [1] to deep linear networks and shallow RELU networks. First, the submission is unique in going after a loss with better loss surface properties. In addition, secondly, given that the set of critical points of $L$ is a subset of critical points of MSE loss, many of the mentioned results likely extend. In light of the removal of undesirable global minima through $L$, examining more complex networks is certainly a very promising direction.\"}",
"{\"title\": \"Response to Reviewer #3- Part 2/2\", \"comment\": \"(2) Practical implications:\\nThe question of whether randomized SVD outperforms SGB-based methods, or visa versa, remains an open one, and is a likely data-dependent question, as factors such as size and sparsity are important. There have been several developments by others who themselves have outlined the benefits of SGD-based PCA/SVD (for instance, [6] and also cf. their quote from [3]). Chief among the compellingly reasons is that, in recent years, we have seen unprecedented gains in the performance of very large SGD optimizations, with autoencoders in particular successfully handling larger numbers of high-dimensional training data (e.g., images). The loss function we offer is attractive in terms of parallelizability and distributability, and does not prescribe any single specific algorithm or implementation, so stands to continue to benefit from the arms race between SGD and its competitors.\\n\\nFinally, the submission's focus has been on rigorously establishing theoretical properties. It is not our current focus of interest, for instance, to conduct a size analysis, as this is better deferred to some a treatment with a clear and specific characterization of problem instances of particular interest. In contrast, our own research directions involve us seeking to generalize the theory to tensors and tensor rank decomposition.\\n\\nNext, we offer a response to the suggestion that one can always recover the eigenvectors/eigenvalues by a final decomposition step and the SVD would be dominated by MSE LAE costs anyway. We clarify our position in two parts:\\n\\n(1) The exact cost of a post hoc processing step to perform the SVD will depend on the density and size of the data. It isn't hard to imagine circumstances (e.g., with large, dense inputs) in which even on the reduced output, this cubic step dominates and is prohibitive.\\n\\n(2) More importantly, this single loss function (without an additional post hoc processing step) fits seamlessly into optimization pipelines (where SGD is but one instance). The result is that the loss allows for PCA/SVD computation\\nas a single optimization layer, akin to an instance of a fully differentiable building block in a NN pipeline [7], potentially as part of a much larger network. \\n\\nIn light of the importance of (2), we intend to make this benefit much more explicit in the paper's introduction.\\n\\n[1] Baldi, Pierre, and Kurt Hornik. \\\"Neural networks and principal component analysis: Learning from examples without local minima.\\\" Neural networks 2.1 (1989): 53-58.\\n\\n[2] Zhou, Y., and Y. Liang. \\\"Critical points of linear neural networks: Analytical forms and landscape properties.\\\" Proc. Sixth International Conference on Learning Representations (ICLR). 2018.\\n\\n[3] Kunin, Daniel, et al. \\\"Loss Landscapes of Regularized Linear Autoencoders.\\\" International Conference on Machine Learning. 2019.\\n\\n[4] Pretorius, Arnu, Steve Kroon, and Herman Kamper. \\\"Learning Dynamics of Linear Denoising Autoencoders.\\\" International Conference on Machine Learning. 2018.\\n\\n[5] Frye, Charles G., et al. \\\"Numerically Recovering the Critical Points of a Deep Linear Autoencoder.\\\" arXiv preprint arXiv:1901.10603 (2019).\\n\\n[6] Plaut, Elad. \\\"From principal subspaces to principal components with linear autoencoders.\\\" arXiv preprint arXiv:1804.10253 (2018)\\n\\n[7] Amos, Brandon, and J. Zico Kolter. 
\\\"Optnet: Differentiable optimization as a layer in neural networks.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"This reviewer's comments were especially valuable in helping to establish where some assumptions were in fact not needed.\\n \\n1. This is a very good point. After careful examination, in the case that input and output have different dimensions, say ${Y}\\\\in \\\\mathbb{R}^{n\\\\times m}$ and ${X}\\\\in \\\\mathbb{R}^{n'\\\\times m}$, all the claims actually still hold and the given loss can be used as a linear least square regressor. In the writing we assumed the same dimension since the focus was on low rank decomposition where ${Y}={X}$. We have added a remark (6) in the paper to be explicit about this fact. The reason for the validity of the claims for the case $n'\\\\neq n$ as explained in the remark is as follows: \\n\\nThe given loss function structurally is very similar to MSE loss and can be represented as a sum of Frobenius norms on the space of $n\\\\times m$ matrices. In this case the covariance matrix $ {\\\\Sigma}={\\\\Sigma}_{yx} {\\\\Sigma}_{xx}^{-1} {\\\\Sigma}_{xy}$ is still $n\\\\times n$. Clearly, for under-constrained systems with $n<n'$ the full rank assumption of $ {\\\\Sigma}$ holds. For the overdetermined case, where $n'>n$ the second and third assumptions in Assumption 1 can be relaxed: we only require ${\\\\Sigma}_{xx}$ to be full rank since this is the only matrix that is inverted in the theorems. Note that if $p>\\\\min(n',n)$ then ${\\\\Lambda}_{\\\\mathbb{I}_p}$: the $p\\\\times p$ diagonal matrix of eigenvalues of ${\\\\Sigma}$ for a $p$-index-set $\\\\mathbb{I}_p$ bounds to have some zeros and will be say rank $r<p$, which in turn, results in an encoder with rank $r$. However, the Theorem 1 is proved for encoder of any rank $r\\\\leq p$. Finally, following Theorem 2 then the first $r$ columns of the encoder ${A}$ converges to ordered eigenvectors of ${\\\\Sigma}$ while the $p-r$ remaining columns span the kernel (sub)space of ${\\\\Sigma}$.\\n\\n2 and 3- we have updated the introduction.\\n\\n4-Fixed\\n\\n5- Dealing with large datasets is a leading edge of our algorithm when the whole data is too large to fit in memory. We don't expect the performance to be different if we switch to a larger dataset since our algorithm allows processing the data in batches, in which case the algorithm will yield the result that converges to the desired real ordered eigenvectors as well. \\n \\n6- We conducted extra experiments on MNIST dataset with compressed dimension p being 1,5,10,20,50 and 100. The settings of the other parameters is the same as the ones shown in our paper. The results are as follows: reconstruction error is 2.857e6, 2.113e6, 1.619e6, 1.127e6, 5.546e5, 2.700e5, respectively, and the total running time on average (with one GeForce GTX 1080 Ti Graphics Card) is 0.253 seconds, 7.062 seconds, 26.855 seconds, 4 minutes 18.408 seconds, 17 minutes 10.213 seconds, 35 minutes 31.986 seconds, respectively.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thanks. We have added one paragraph at the start of section 4 that provides an overview of the arguments of the proofs. For the other point, please refer to remarks 2, 3 (new), 4, and 5 which hopefully clarifies the significance of the theorems.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes and analyzes a new loss function for linear autoencoders (LAEs) whose minima directly recover the principal components of the data. The core idea is to simultaneously solve a set of MSE LAE problems with tied weights and increasingly stringent masks on the encoder/decoder matrices. My intuition is that the weights that touch every subproblem are the most motivated to find the largest principal component, the weights that touch all but one find the next largest, and so forth; I found this idea clever and elegant.\\n\\nThat said, I lean towards rejection, because the paper does not do a very good job of demonstrating the practical or theoretical utility of this approach. As I see it, there are two main claims that one could make to motivate this work:\\n1. This is a practical algorithm for doing PCA.\\n2. This is a step towards better understanding (and perhaps improving) nonlinear autoencoders, which do things that PCA can't.\\nClaim (2) might be compelling, but the authors do not make it, and it isn't self evident.\\n\\nI do not find claim (1) convincing on the basis of the evidence presented. PCA is an extremely well studied problem, with lots of good solutions such as randomized SVD (Halko et al., 2009). A possible advantage of using LAEs to address the PCA problem is that they play nicely with SGD, but again, the claim that the SGD-LAE approach is superior to, say, randomized SVD on a data subsample requires evidence. Also, even if one buys the claim that LAEs are a good way to solve PCA, one can always recover the eigenvectors/eigenvalues by a final decomposition step; the authors claim that an advantage of their approach is that it does not require such \\\"bells and whistles\\\", but this seems like a pretty minor consideration; implementing the proposed loss function seems at least as complicated as making a call to an SVD solver, and it's hard for me to imagine a situation where the cost of that final SVD isn't dominated by the cost of solving the MSE LAE problem.\\n\\nIn summary, I think this paper proposes a clever and elegant solution to a problem that doesn't seem to be very important. I can't recommend acceptance unless the authors can come up with a stronger argument for why it's not just interesting but also useful.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). With this new loss function, the decoder weights of LAEs can eventually converge to the exact ordered unnormalized eigenvectors of the sample covariance matrix. The main contribution is to add the identifiability of principal components in PCA using LAEs and. Two empirical experiments were done to show the effectiveness of proposed loss function on one synthetic dataset and the MNIST dataset.\\nOverall, this paper provides a nontrivial contribution for performing principal component analysis (PCA) using linear autoencoders (LAEs), with this new novel loss function. This paper is well presented.\", \"there_are_some_issues_to_be_addressed\": \"1. The output matrix is constrained to be the same size of the input, which is scarcely seen in practical applications.\\n2. Literature on (denoising) auto-encoder can be reviewed more thoroughly.\\n3. Comparison with state-of-the-art auto-encoder can be provided to demonstrate the effectiveness of the proposed algorithm.\\n4. It is better to explain the meaning of each variable when it first appears, e.g., , the projection matrices A and B, and Variable A* in theorem 2.\\n5. In the experiment part, in both the Synthetic Data or MNIST, the size of each data set is relatively small. It's better to add experimental results on big data sets with larger dimension.\\n6. In order to better show the effectiveness of the new loss function, you can add some comparative test for different choice of compressed dimension p.\\n7. There are some typos, such as \\u2018faila\\u2019 in the second line of the second paragraph in the INTRODUCTION.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new loss function to compute the exact ordered eigenvectors of a dataset. The loss is motivated from the idea of computing the eigenvectors sequentially. However doing so would be computationally expensive, and the authors show that the loss function they propose (sum of sequential losses) has the same order (constant less than 7) of computational complexity as using the squared loss. A proof of the correctness of the algorithm is given, along with experiments to verify its performance.\\n\\nThe loss function proposed in the paper is useful, and the decomposition in Lemma 1 shows that it is not computationally expensive. While the writing of the proofs of the theorems is clear, I find it hard to understand the flow of the paper. It would help if the authors could summarize the argument of the proof at the start of Sec 4. Along a similar vein, it would also help if the authors could describe in words the significance / claim of every theorem.\\n\\nThe repetition in stating the theorems can be avoided. The main result (Theorem 2) is stated twice.\"}"
]
} |
ryxmb1rKDS | Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control | [
"Yaofeng Desmond Zhong",
"Biswadip Dey",
"Amit Chakraborty"
] | In this paper, we introduce Symplectic ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system, given by an ordinary differential equation (ODE), from observed state trajectories. To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner. In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way, which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy. In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum. This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies. | [
"Deep Model Learning",
"Physics-based Priors",
"Control of Mechanical Systems"
] | Accept (Poster) | https://openreview.net/pdf?id=ryxmb1rKDS | https://openreview.net/forum?id=ryxmb1rKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"RgxhNQxYQn",
"BkeTkhEisS",
"rkgZ6sVosr",
"HJlHKVMYsS",
"r1gT57zKoS",
"S1gQ6lGKoH",
"ryxW_CZFjB",
"r1gsylCe9B",
"HJxvhJ_0YS",
"H1l1Ek92KB",
"ryxuD-JoDB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798725933,
1573764068557,
1573764025331,
1573622909433,
1573622677497,
1573621946710,
1573621352701,
1572032483323,
1571876782553,
1571753766915,
1569546592227
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1538/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1538/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1538/AnonReviewer2"
],
[
"~Yiping_Lu1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a novel method for learning Hamiltonian dynamics from data. The data is obtained from systems subjected to an external control signal. The authors show the utility of their method for subsequent improved control in a reinforcement learning setting. The paper is well written, the method is derived from first principles, and the experimental validation is solid. The authors were also able to take into account the reviewers\\u2019 feedback and further improve their paper during the discussion period. Overall all of the reviewers agree that this is a great contribution to the field and hence I am happy to recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Uploaded a Revision with Changes Highlighted\", \"comment\": \"We have now uploaded a revised version of our manuscript with all the changes highlighted. Moreover, this revision includes additional pointers (shown within red boxes in the right margin) to connect key changes to specific comments from your review. We are using the format \\u201cR1-C$x$\\u201d to show the connection between your Comment-$x$ and the corresponding change in the manuscript.\\n\\n*** Please note that these pointers are only visible in offline PDF-viewers.\"}",
"{\"title\": \"Uploaded a Revision with Changes Highlighted\", \"comment\": \"We have now uploaded a revised version of our manuscript with all the changes highlighted. Moreover, this revision includes additional pointers (shown within red boxes in the right margin) to connect key changes to specific comments from your review. We are using the word \\u201cR3\\u201d to show the connection between your Comments and the corresponding changes in the manuscript.\\n\\n*** Please note that these pointers are only visible in offline PDF-viewers.\"}",
"{\"title\": \"Part 2 of the Official Response\", \"comment\": \"*************** Part 2 of the Official Response ***************\", \"equation_11\": \"$\\\\tilde{f}$ is parametrized to have 0 output on $\\\\dot{u}$. Eqn.11 is only used for training when we have trajectory data with \\u201cconstant $u$\\u201d. We can safely parametrize $\\\\dot{u}=0$ since this is consistent with the data. Once the model has been trained, we can apply a time-varying $u$ as we have done in the control tasks.\\n\\n********** See Below for Part 1 of the Official Response **********\"}",
"{\"title\": \"Thanks so much for your constructive feedback!\", \"comment\": \"Thank you for reviewing our paper! We greatly appreciate your insightful comments and the super constructive feedback. We have updated our paper to address your comments/concerns.\\n\\nTo obey length restrictions, we have split our official response into two parts. This is Part 1.\\n\\nIn the following, we address your concerns individually.\", \"ablation_study_of_differentiable_ode_solver\": \"We have carried out the ablation study to distinguish the effect of using a differentiable ODE solver. We compare the result of Unstructured SymODEN and HNN since both models approximate the Hamiltonian using a neural network and the main difference is the use of differentiable ODE Solver. As HNN does not employ any angle-aware design, we perform the ablation study only for Task 1. We have discussed the details of the experimental setup and the results in a new section in the Appendix (Appendix C: Ablation Study of Differentiable ODE Solver). We can see that Unstructured SymODEN performs significantly better than HNN in terms of training and prediction errors associated with our experiment (Task 1: Pendulum with Generalized Coordinate and Momentum Data). From the evolution of MSE for a particular trajectory (Figure 7), we can notice that Unstructured SymODEN makes better prediction. In fact, the HNN loss term compares the estimated and true symplectic gradient while SymODEN loss term compares the estimated trajectories. Therefore, even if the training error from the HNN is comparable to that from the Unstructured SymODEN, the error in HNN is accumulated during trajectory prediction (which is obtained by integrating the symplectic gradient). This, as expected, leads to a larger error in predictions from HNN. \\n\\nEffects of $\\\\tau$: \\nWe have added a new section in the Appendix (Appendix D: Effects of the time horizon $\\\\tau$) to discuss the effects of time horizon $\\\\tau$. From these results, we can see that larger values of $\\\\tau$ lead to smaller error. This is expected since a large $\\\\tau$ penalizes inaccurate predictions in the long run. We have also observed in our experiments that a larger $\\\\tau$ requires more time to train the model since it involves more function evaluations.\", \"pd_controller\": \"In Section 2.2 and \\u201cAppendix B: Special Case of Energy-based Controller - PD Controller with Energy Compensation\\u201d (in the updated version), we have shown that if the desired potential energy is given by a quadratic form, the \\u201cpotential energy shaping + damping injection\\u201d becomes equivalent to a \\u201cPD controller + energy compensation\\u201d. A pure PD controller would work well only around the equilibrium point since in that region the linearized dynamics matches the true dynamics.\", \"control_of_cartpole_system\": \"Thanks for highlighting that the PD controller design for the CartPole system does not depend on the dynamics. The CartPole system is an underactuated system; incorporating deep learning into controller design for underactuated systems would be the focus of our future work. However, we have also trained SymODEN on a fully-actuated version of CartPole. Due to restrictions on the manuscript length, we have included the corresponding results inside Appendix (Appendix E: Fully-actuated Cartpole and Acrobot). Here we have shown that the SymODEN framework can learn the dynamics and control the fully-actuated CartPole. 
We have also removed the PD controller results for CartPole system since the learned model has not been used to design this controller (as you have pointed out).\", \"mpc\": \"We considered performing MPC based on the learned dynamics during the course of our research on this topic. In particular we used mpc.pytorch [1]. However, it didn\\u2019t perform as expected in these control tasks. The main reason might be the fact that the dynamics learned by a neural network is indeed an approximation (albeit a very accurate one); as mentioned in the mpc.pytorch documentation [2] - \\u201cSometimes the controller does not run for long enough to reach a fixed point, or a fixed point doesn\\u2019t exist, which often happens when using neural networks to approximate the dynamics. When this happens, our solver cannot be used to differentiate through the controller, because it assumes a fixed point happens.\\u201d Therefore, we switch towards energy-based controllers since we are learning the energy and it\\u2019s natural to leverage the learned energy for controller design. In Appendix E (Fully-actuated Cartpole and Acrobot), we trained our model on the fully-actuated version of Cartpole and Acrobot and have shown that our framework can successfully control those fully-actuated systems. We hope our work creates new research directions combining MPC or energy shaping control with interpretable end-to-end learning framework. \\n\\n[1] Amos, Brandon, et al. \\\"Differentiable MPC for End-to-end Planning and Control.\\\" Advances in Neural Information Processing Systems. 2018.\\n[2] https://locuslab.github.io/mpc.pytorch/\\n\\n*************** Part 1 of the Official Response ***************\"}",
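To make the "potential energy shaping + damping injection" discussion above concrete, here is a minimal sketch for a fully-actuated 1-DoF system. This is our own illustration, not the paper's implementation: the quadratic desired potential, the gains, and all names are assumptions.

```python
import torch

def energy_shaping_control(q, p, V, g, q_star, k_p=5.0, k_d=1.0):
    """Potential energy shaping with damping injection, assuming a quadratic
    desired potential V_d(q) = k_p/2 * (q - q_star)^2. V and g stand in for
    the learned potential and input networks; the exact form and gains here
    are illustrative choices."""
    q = q.detach().requires_grad_(True)
    dVdq = torch.autograd.grad(V(q).sum(), q)[0]   # gradient of the learned potential
    u_shape = (dVdq - k_p * (q - q_star)) / g(q)   # swap learned V for desired V_d
    u_damp = -k_d * p                              # damping injection on the momentum
    return (u_shape + u_damp).detach()

# Hypothetical usage with stand-in functions for the learned networks:
V = lambda q: 9.8 * (1.0 - torch.cos(q))           # a pendulum-like potential
g = lambda q: torch.ones_like(q)
u = energy_shaping_control(torch.tensor([0.3]), torch.tensor([0.0]),
                           V, g, q_star=torch.tensor([3.14]))
```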
"{\"title\": \"Thanks so much for your constructive feedback!\", \"comment\": \"Thank you for reviewing our paper! We greatly appreciate your insightful comments and constructive feedback.\\n\\nIn the following, we address your concerns individually.\\n\\n1) In fact, our method is applicable to the class of systems described by Eqn.4 instead of Eqn.11. We have used Eqn.11 only for training purposes. In our response to the next point, we have provided a detailed discussion to address this ambiguity. We agree that Eqn.4 cannot describe the dynamics of every physical system. However, Eqn.4 is inspired by the port-Hamiltonian systems which are applicable to a large class of systems since it takes dissipation and external forcing into account. In this current work, we only consider external forcing with a focus on the control of physical systems. Adding dissipation to accommodate a broader variety of systems will be the topic of future work.\\n\\n2) We use the \\u201cconstant $u$\\u201d dynamics (Eqn.11) only for training purposes. As the Neural ODE framework requires the dimension of the domain of the input function to be the same as the dimension of the corresponding co-domain/range, we need to use an augmented dynamics. Eqn.11 provides the simplest form of augmented dynamics, since it uses a \\u201cconstant $u$\\u201d. Then, if we create a dataset of trajectories each of which correspond to different, but constant, values of $u$, we can use it to train the model by leveraging Eqn.11. Once we have a trained model, we can actually apply any time-varying input $u$ to the dynamics (Eqn.4). \\nThis is indeed an expected outcome. In the SymODEN framework, we are actually learning the functions $H(q, p)$ and $g(q)$. We can learn $H(q, p)$ by using a constant $u=0$. And $g(q)$ can be learned by using training data with a non-zero $u$. With different values of $u$, such as [-2.0, -1.0, 1.0, 2.0] we are able to learn the input matrix $g(q)$. If $u$ is multi-dimensional, we can create a training dataset by considering inputs in such a way that any given input in the dataset has only one non-zero component. For example, the trajectories which were created using inputs with non-zero entry at the $i$-th component, will help us learn the $i$-th column of the input matrix $g(q)$. Learning an accurate enough $H(q, p)$ and $g(q)$ ensures that we have also learned the dynamics with high accuracy. Afterwards, applying any \\u201ctime-varying $u$\\u201d as an input to the system would not create any issue. \\n\\n3) We\\u2019ve fixed the sentence above Eqn.11. As explained in our response to the previous point, this assumption on the training dataset aids the learning process. Also, in traditional system identification, constant external forcings are often used to get the system responses.\\n\\n4) Short summary of each task has now been added to the beginning of each subsection. Hopefully it provides more guidance throughout the task section. We have also moved the summary of results (previously Section 4.2) to the end of Section 4. \\n\\n5) We hope that our responses to point (2) and (3) have clarified the ambiguity. We use \\u201cconstant $u$\\u201d only for training purposes. Once the dynamics have been learned, any \\u201ctime-varying $u$\\u201d can be applied for prediction and control tasks. Also, in Figure 4 (previously Figure 6), we have included horizontal lines to highlight the expected results. 
\\n\\n6) The Acrobot is also an underactuated system, which means that we can learn the dynamics with high accuracy but we cannot design a good controller by using only potential energy shaping. In particular, as $g(q)g(q)^T$ is not invertible in this case, we also need kinetic energy shaping in order to design a controller. Our future work will focus on how to incorporate kinetic energy shaping into a deep learning framework. However, we have also trained SymODEN on a fully actuated version of Acrobot. Due to restrictions on the manuscript length, we have included the corresponding results inside the Appendix (Appendix E: Fully-actuated Cartpole and Acrobot) and added a subsection in the main body to describe the Acrobot task (4.5 Task 4: Acrobot). \\n\\n7) We have added a footnote to the abstract to give rationale for using the word \\u201cSymplectic\\u201d.\\n\\n8) Thank you for directing our attention to this delineation. We have rephrased this sentence. Now it reads as follows: \\u201cOur results show that incorporation of such physics-based inductive bias offers insight about relevant physical properties of the system, such as inertia, potential energy, total conserved energy.\\u201d.\\n\\n9) In the updated version of our paper, we have introduced the acronym ODE within the Abstract itself.\\n\\n10) Figure 6 (in the updated version) shows MSE and Total energy of a trajectory with previously unseen initial conditions. In all our experiments, we have a separate test set to make sure our models generalize well. We have now included a new section in the Appendix (Appendix F: Test Errors of the Tasks) to show the test errors for all the tasks.\\n\\n11) Thank you for pointing out to these typos, we have corrected them in the updated version.\"}",
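A minimal sketch may help make the "constant-u" augmentation in points 2 and 3 concrete. The snippet below is our reading of the setup around Eqn.4 and Eqn.11, with placeholder network names (not the released code): it augments the state with u and hard-codes du/dt = 0, consistent with training trajectories generated under constant control. With torchdiffeq, one would integrate it as odeint(AugmentedDynamics(H_net, g_net), x0, t).

```python
import torch

class AugmentedDynamics(torch.nn.Module):
    """Augmented state (q, p, u) for a 1-DoF system. H_net maps (q, p) to a
    scalar Hamiltonian and g_net maps q to the input term g(q); du/dt is
    fixed to zero to match constant-u training data."""
    def __init__(self, H_net, g_net):
        super().__init__()
        self.H_net, self.g_net = H_net, g_net

    def forward(self, t, x):
        q, p, u = x[..., 0:1], x[..., 1:2], x[..., 2:3]
        with torch.enable_grad():
            qp = torch.cat([q, p], dim=-1)
            if not qp.requires_grad:               # enough for a forward sketch
                qp = qp.detach().requires_grad_(True)
            dH = torch.autograd.grad(self.H_net(qp).sum(), qp, create_graph=True)[0]
        dq = dH[..., 1:2]                          # dq/dt =  dH/dp
        dp = -dH[..., 0:1] + self.g_net(q) * u     # dp/dt = -dH/dq + g(q) u
        return torch.cat([dq, dp, torch.zeros_like(u)], dim=-1)
```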
"{\"title\": \"Thanks so much for your encouraging response!\", \"comment\": \"Thank you for reviewing our paper! We greatly appreciate your insightful comments and constructive feedback. We have updated our paper to address your comments/concerns.\\n\\nIn the following, we address your concerns individually:\\n\\n--- We have updated our paper to include a definition of $g(q)$ after Equation 4 in Section 2.1.\\n\\n--- We have now included a new section in the Appendix (Appendix B: Special Case of Energy-based Controller - PD Controller with Energy Compensation) to show derivations of these equations describing external control .\\n\\n--- We have fixed this typo.\\n\\n--- To incorporate physics-based prior knowledge while learning dynamics from observed time-series data, our framework (SymODEN), in essence, learns three functions $-$ $M^{-1}(q)$, $V(q)$ and $g(q)$. We have shown that with a small dataset, containing as low as 16 training trajectories with 20 time steps in each, we can learn accurate models for simple systems and design controllers based on the learned model. For larger dynamical systems, we believe that as long as the underlying dynamics is governed by Equation 4 and large enough neural networks are used to learn those three functions, our framework should work quite well. For example, Equation 4 can describe the dynamics of any large mechanical system whose kinematic structure can be represented by kinematic trees. However, as $M^{-1}(q)$, $V(q)$ and $g(q)$ typically involve more parameters for larger dynamical systems, this might require more data to train the model. Still, compared with the baseline models and model-free reinforcement learning approaches, the amount of data required in our framework should be smaller.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"============ Update ===========\\nThe authors have done a good job at addressing my concerns, and the revised version of the submission is substantially improved. I have adjusted my score and recommend accepting the paper.\\n==============================\\n\\nRecent work has explored encoding analytic mechanics formulae into neural networks as inductive biases to learn physics models that generalize better. Neural networks are implemented to learn quantities like kinetic and potential energy rather coordinate derivatives. In this paper, the work of [1] is extended to incorporate a more generalizable approach to modeling functions on angles, integral approach where errors are backpropagated through an ODE solver rather than fitting errors in the derivatives, and modeling response to controls. \\n\\nFor Lagrangian and Hamiltonian systems it\\u2019s often easier to work with non-euclidean generalized coordinates rather than a constrained Euclidean system, however it can be difficult to design well parametrized neural network functions on a manifold like a circle. The authors address this by still expressing the having the Hamiltonian expressed in terms of circular generalized coordinates, but parametrizing the functions on the Euclidean embeddings. The paper shows that this approach does not have a problem with generalizing to large angles that the na\\u00efve approach does. \\n\\nThe integral approach to computing errors seems sensible, and appears to work well but no comparison is made to the previous method working with derivatives. This would be useful in demonstrating that the predictive performance is at least no worse than the approach taken in [1] and doesn\\u2019t require knowing or estimating derivatives. Also it would be good to have an ablation study investigating predictive performance as a function of tau, the number of integration timesteps for computing the error.\\n\\nThe paper shows evaluation of the learned dynamics system for control on two examples, an inverted pendulum and CartPole. In the inverted pendulum example, the learned potential energy and the control response is used to design a control that shapes the potential energy and with additional damping. Since the control output is closely related to a PD controller, it would be good to compare against a standard P(I)D controller that doesn\\u2019t depend on the model. For the CartPole system, the control is exactly a PD controller and it\\u2019s not clear how the Hamiltonian/dynamics model are used at all in this example. Generally the paper does a good job at demonstrating benefits in modeling ability and generalization, but the experiments applying this model to control are not very convincing. Having an example where the model is applied for a standard approach like MPC for one of these problems would useful for gauging efficacy in possible control applications.\\n\\nThere are some promising leads explored in this paper for learning physical system dynamics effectively with neural nets. 
I think there is a lot of promise in the approach, however some of the improvements over past work have not been adequately tested (integral approach and sensitivity to # of integration steps) and the control experiments are not very convincing although it seems like they could be. I lean towards a weak reject for the paper as is, but if the authors flesh out the control experiments and do some ablation studies I will improve my score.\", \"minor_comments_and_questions\": \"Is f tilde in equation 11 parametrized to have 0 output on u dot or is it expected that this relationship would simply be approximated by the model?\\n\\n[1] Sam Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian Neural Networks. arXiv:1906.01563, 2019.\"}",
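The derivative-matching versus integral-of-trajectory distinction raised in this review, and the role of tau, can be summarized in a short sketch. This is our own illustration with placeholder names, assuming the torchdiffeq package for the differentiable solver and a dynamics function f with signature f(t, x); it is not the authors' code:

```python
import torch
from torchdiffeq import odeint  # differentiable ODE solver (Chen et al., 2018)

def derivative_loss(f, x, dxdt_true):
    # HNN-style: match the symplectic gradient pointwise; requires access to
    # true (or estimated) state derivatives.
    return ((f(None, x) - dxdt_true) ** 2).mean()

def integral_loss(f, x0, x_true, tau, dt=0.05):
    # SymODEN-style: roll the model forward tau steps and match the trajectory.
    # Larger tau penalizes long-horizon drift but costs more function evaluations.
    t = torch.linspace(0.0, tau * dt, tau + 1)
    return ((odeint(f, x0, t) - x_true) ** 2).mean()
```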
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Update: I have read the author's response. Thank you!\\n***\\n\\nThis is an excellent paper that integrates inductive biases from physics into learning dynamical models that can then be integrated into deep RL-based control tasks.\\n\\nThe model approximates the dynamical function f(q, p, u), where q are the generalised coordinates of the system (mixture of positions in R^1 and angles in S^1), p are the generalised momenta and u is the external control input. Function f can then be integrated by a numerical solver, and used as the dynamical function in a Neural ODE (ordinary differential equation) for modelling the continuous time evolution of the nonlinear dynamical system. The dynamics function is explicitly written as the equations of the Hamiltonian dynamical system, involving the 1) inverse of the mass function, 2) potential energy and 3) control function, in a complex graph (Figure 1) that transforms positional and angular coordinates and momenta x and the external control into f(x, u).\\n\\nThe derivation of the method is long but very well written and didactic. The experiments on the control suite of OpenAI Gym are simple (pendulum and cart-pole only) but thorough, and compare the proposed method with a non-inductive bias method (still relying on Neural ODEs), a simpler naive and geometric baselines. Overall, the paper is very easy to follow (even for someone who does not work on control experiments in deep RL) while addressing complex physics.\", \"minor_remarks\": \"Can you define g in 2.1?\\nCan you add the derivation of equations (8) through (10) in the appendix?\\nThe second to last sentence on page 4 seems unfinished.\\nCan the authors comment on how this model would scale to larger (e.g. multiple joints) dynamical systems?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose a framework for learning the dynamics of a system with underlying Hamiltonian structure subjected to external control. Based on the extended equations of motion, the authors suggest how to apply NeuralODE in a way that makes use of the prior information that the unconstrained system is Hamiltonian and subjected to a control term. For a range of tasks, the authors then demonstrate that the proposed SymODEN framework can learn the dynamics and recover the known analytical solution, and that they can derive a controller that allows them to drive the system to a target configuration.\\n\\nI find this paper very interesting and the formulation elegant. In particular, I appreciate that the paper is pretty much self-contained and that the authors derive the theory from first principles. However, there are a couple of points (listed below) that should be addressed prior to publication to improve the clarity of the paper, and to help the reader to fully appreciate the depth of the experimental section. If these points were addressed in sufficient detail, I would be willing to increase the score:\\n\\n1) Reading the abstract and the introduction, I got the impression that SymODEN can be applied to any physical system while the method is in fact only applicable to systems governed by Eq. (11). I think it\\u2019s worth mentioning in the text that not every physical system follows (constrained) Hamiltonian Dynamics. \\n\\n2) You mention that a controller is designed based on the learned dynamics. For example, in the first sentence in Sec. 2.2: \\u2018Once the dynamics of a system have been learned, it can be used to synthesize a controller to maneuver the system to a reference configuration q*\\u2019. I think it is important to specify here which dynamics you are referring to, i.e. the constrained or unconstrained dynamics. Later on it becomes clear that you use constant-u training data and that u is part of the input to the model, so it has to be constrained dynamics, but at this stage it is still unclear to the reader.\\n\\n3) In Eq. (11), you set du/dt = 0 without motivating this restriction. Can you provide at least one sentence on why this is an interesting choice and back it up with a reference? Furthermore, the sentence above Eq. (11) is broken and needs fixing. \\n\\n4) Tasks (general): You consider a range of tasks and I appreciate that you start with a simple and intuitive system. However, I think a bit more guidance throughout the tasks section would be very helpful. In particular, I would suggest that you provide a short (one or two sentences) summary at the beginning of each task to say what exactly it is that you are trying to test or demonstrate. Furthermore, I would suggest showing the summary of results, i.e. Sec. 4.2, after you introduce the individual tasks rather than before.\\n\\n5) Task 2: This task addresses multiple things in one go: Initially, you demonstrate that you can recover the results of Task 1 without access to the generalised momenta, and explain why this can only be done up to a constant scaling factor. This is very interesting and clear. 
However, then you jump straight into the controller and things become a little unclear because, at this stage, it still seems that the dynamics are the same as in Task 1, i.e. unconstrained. Please add a sentence or two for clarification. Another important aspect that is not commented on at all is the behaviour of u(t) in Eq. (27). In particular, isn\\u2019t u(t) expected to satisfy du/dt = 0 based on Eq. (11)? The results in Fig. 6 suggest that this is not the case (see time interval [2, 6]). This seems like an interesting and surprising behaviour, especially because SymODEN was only trained with constant-u training data. I would appreciate if the authors could comment on this. I would also suggest to add horizontal lines to Fig. 6 to indicate the expected results.\\n\\n6) Task 4: Why did you not explain this task in a dedicated section like you did for all other tasks?\\n\\n7) Symplectic: Since both the method and the title of the paper contain the word \\u2018symplectic\\u2019, it would be good if you explained what the term actually means.\\n\\n8) \\u2018Our results show that incorporation of such physics-based inductive bias can provide knowledge about relevant physical properties (mass, potential energy) and laws (conservation of energy)...\\u2019. To me, this statement is slightly misleading. You did not demonstrate that SymODEN \\u2018provides knowledge\\u2019 of laws of the system; energy conservation (for u = 0) as a law is hard-coded into your network. The specific value of the energy can be inferred but that I would consider a physical property. I would suggest to change the wording to reflect this clearly.\\n\\n9) Introduce the acronym ODE much earlier than in Sec. 3.1.\\n\\n10) Model training: What happens if you use unseen initial conditions rather than the ones in the training data? Perhaps you could add a comment to clarify.\\n\\n11) There are many typos and grammar mistakes in the paper. Please revise it carefully. To give you a few examples:\\n\\u2018are both reformulation\\u2019 -> \\u2018are both reformulations\\u2019\\nSec 2.1: Decide on whether you use plural or singular for \\u2018dynamics\\u2019 and be consistent.\\n\\u2018on a equal footing\\u2019 -> \\u2018on an equal footing\\u2019 \\n\\u2018beyond classical mechanics, the Hamiltonian\\u2019 -> \\u2018Beyond classical mechanics, Hamiltonian \\u2026\\u2019\\n\\u2018Hamiltonian is same as\\u2019 -> \\u2018Hamiltonian is the same as\\u2019\\n\\u2018represents potential energy\\u2019 -> \\u2018represents the potential energy\\u2019\\n\\u2018trajectory actually converge to\\u2019 -> \\u2018trajectory actually converges to\\u2019\\n\\u2018a ODE solver -> \\u2018an ODE solver\\u2019.\\n\\u2018Lagrangian and Hamiltonian formulation\\u2019 -> \\u2018Lagrangian and Hamiltonian formulations\\u2019\\n\\u2018assume that q and p evolves\\u2019 -> \\u2018assume that q and p evolve\\u2019\\n\\u2018translational coordinate\\u2019 -> \\u2018translational coordinates\\u2019\\n\\u2018naive baseline model approximate\\u2019 -> \\u2018naive baseline model approximates\\u2019\\netc.\\n\\n\\n*************************************\\nThe authors addressed my comments and answered my questions clearly. I therefore increased the score. *************************************\"}",
"{\"comment\": \"Congrats on your work and I really enjoy reading it.\\nI'm writting the comment to introduce some of our related works on discretization Neural ODEs and Control\\nLu Y, Zhong A, Li Q, et al. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations[J]. arXiv preprint arXiv:1710.10121, 2017.\\nLong Z, Lu Y, Ma X, et al. PDE-net: Learning PDEs from data[J]. arXiv preprint arXiv:1710.09668, 2017.\\n\\nAlso these early papers also aim to connect ODEs and deep learning\\nWeinan, E. \\\"A proposal on machine learning via dynamical systems.\\\" Communications in Mathematics and Statistics 5.1 (2017): 1-11.\\nLi, Qianxiao, et al. \\\"Maximum principle based algorithms for deep learning.\\\" The Journal of Machine Learning Research 18.1 (2017): 5998-6026.\\nWeinan, E., Jiequn Han, and Qianxiao Li. \\\"A mean-field optimal control formulation of deep learning.\\\" Research in the Mathematical Sciences 6.1 (2019): 10.\\nHaber, Eldad, and Lars Ruthotto. \\\"Stable architectures for deep neural networks.\\\" Inverse Problems 34.1 (2017): 014004.\\nChang, Bo, et al. \\\"Multi-level residual networks from dynamical systems view.\\\" arXiv preprint arXiv:1710.10348 (2017).\\nRuthotto, Lars, and Eldad Haber. \\\"Deep neural networks motivated by partial differential equations.\\\" arXiv preprint arXiv:1804.04272 (2018).\", \"title\": \"Related works\"}"
]
} |
Syx7WyBtwB | Interpretations are useful: penalizing explanations to align neural networks with prior knowledge | [
"Laura Rieger",
"Chandan Singh",
"W. James Murdoch",
"Bin Yu"
] | For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stop at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets. | [
"explainability",
"deep learning",
"interpretability",
"computer vision"
] | Reject | https://openreview.net/pdf?id=Syx7WyBtwB | https://openreview.net/forum?id=Syx7WyBtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"feJeNRVyd",
"rkeCgD1Gir",
"HyeYjLkMor",
"rJgWqUyGsB",
"S1l8xL1GjB",
"SyxRnHJMiH",
"BJxDW5Jl5B",
"rklgSCRptr",
"Hkgqk1x6Fr",
"Bkg-Tf8j_B",
"SkgINhwquH",
"B1gBfcu2wS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment"
],
"note_created": [
1576798725904,
1573152502142,
1573152417280,
1573152393435,
1573152238355,
1573152181537,
1571973631449,
1571839543783,
1571778274168,
1570624184953,
1570565166182,
1569651213150
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1536/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1536/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1536/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1536/Authors"
],
[
"~Joseph_David_Janizek1"
],
[
"ICLR.cc/2020/Conference/Paper1536/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper contains interesting ideas for giving simple explanations to a NN; however, the reviewers do not feel the contribution is sufficiently novel to merit acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you to reviewers - updates to manuscript\", \"comment\": [\"We would like to thank all reviewers for their time and effort. We have responded to their concerns below, and made the following changes to the manuscript as a result:\", \"We have added references to Zaidan 2007 and Strout 2019\", \"Per the comment from Joseph Janizek (author of the expected gradients paper), we updated the computational and accuracy results for expected gradients. While improved, it still fails to beat a random baseline\", \"Improved ColorMNIST results: Previous results on ColorMNIST were non-deterministic (despite a set random seed) due to a strange cuDNN setting. While rerunning those experiments we discovered that increasing the regularization parameter improves the mean accuracy using CDEP to 31% (previously 25.5%). We have updated the manuscript accordingly.\"]}",
"{\"title\": \"Author's response continued\", \"comment\": \"\\u201cFigure 3 is nice but not terribly surprising\\u201d\\n\\nWe agree - we included Figure 3 not as a shocking finding, but to visually explain what our method does (in addition to the text/equation descriptions elsewhere), as well as provide a sanity check. It should also be noted that we obtain the explanations with a different method (GradCAM) than the one used for the optimization (CD). This gave us some indication that we were not overfitting to a particular explanation algorithm.\"}",
"{\"title\": \"Author's response\", \"comment\": \"We would like to thank the reviewer for their thoughtful review. We address your concerns below.\\n\\n\\u201cThe main advantage of this effort compared to work that directly penalizes the gradients (as in Ross et al.) is that the method does not rely on second gradients (gradients of gradients), which is computationally problematic\\u201d\\n\\nWhile our approach has computational benefits, we would also note that empirically CDEP produces significantly better results. On color MNIST Ross et al. provides no benefit (accuracy the same as a random baseline - 10%), while CDEP achieves 31%. Similarly, for the skin cancer dataset, Ross et al. actually hurt accuracy. We attribute this to CDEP allowing the penalization of features, including interactions of features, rather than just feature-level gradients.\\n\\n\\u201cI am not sure I agree with the premise as stated here. Namely, the authors write \\\"For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective\\\" -- I would argue that an explanation may be useful in and of itself by highlighting how a model came to a prediction. I am not convinced that it need necessarily lead to, e.g., improving model performance. I think the authors are perhaps arguing that explanations might be used to interactively improve the underlying model, which is an interesting and sensible direction.\\u201d\\n\\nWe agree that our abstract is strongly worded - this is by design. We feel that explainable deep learning research is currently overwhelmed with different explanation algorithms, yet has very few (arguably no) success stories of researchers actually using these algorithms to accomplish something of interest to the broader community.\\n\\nExplainable DL techniques can certainly be used to \\u201chighlight how a model came to a prediction\\u201d, but we feel that this is only an intermediate objective, not an end in itself. Ultimately, users want to do things like improve model performance, build trust in a model, identify flaws, or verify that model is being fair with respect to attributes like race, gender, etc. \\n\\nAs a community, we do not currently know how to use our explanation algorithms to accomplish these things, or whether our explanation algorithms are well suited to do so. In fact, we suspect that many published explanation algorithms would fail when evaluated on real end tasks - as we saw with gradients and integrated gradients in this paper. \\n\\nFiguring out how to use explainable DL techniques for anything real is essentially being neglected by current researchers, so we framed our paper to try to shed light on this, and nudge things in the right direction.\\n\\nIf you still disagree with our premise, we\\u2019d be happy to tamp things down, and adjust our abstract to motivate things through the vein of \\u201cexplanations could be useful to improve predictions\\u201d.\\n\\n\\u201cThis work, which aims to harness user supervision on explanations to improve model performance, seems closely related to work on \\\"annotator rationales\\\" (Zaidan 2007 being the first work on this), but no mention is made of this. \\\"Do Human Rationales Improve Machine Explanations?\\\" by Strout et al. 
(2019) also seems relevant as a more recent instance in this line of work\\u201c\\n\\nThanks for bringing Zaidan 2007 and Strout 2019 to our attention, they are indeed useful prior work in this field. We will include both references in an updated version.\\n\\n\\u201cThe authors compare their approach to Ross and colleagues in Table 1 but see quite poor results for the latter approach. Is this a result of the smaller batch size / learning rate adjustment? It seems that some tuning of this approach is warranted.\\u201d\\n\\nAs you noted, the approach by Ross penalizes gradients of gradients, preventing learning of those weights. This works quite nicely for tasks where the feature to be ignored is always in the same location as we see in the results on DecoyMNIST. In contrast, for the ISIC dataset, the patches are distributed roughly uniformly over the image. By penalizing gradients for the patches, the gradient updates are \\u2018dampened\\u2019 over the entire input for a large part of the training data (patches are present in 45% of samples) and learning is prevented. This issue may be further amplified by the low learning rate and batch size necessary for this approach and dataset. \\n\\nWe can assure you we tried our best to tune Ross\\u2019 approach in order to achieve a fair baseline (despite the fact that their approach is roughly 80 times slower than CDEP, making extensive tuning difficult).\"}",
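For reference, the Ross et al. baseline discussed in this exchange penalizes input gradients within annotated regions, which is where the second-derivative cost comes from. Below is a minimal sketch of that style of penalty (our own illustration, not either paper's code; `annotation_mask` is a hypothetical binary mask marking features a practitioner wants ignored):

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, annotation_mask, lam=1.0):
    """'Right for the right reasons'-style objective: cross-entropy plus the
    squared input gradient of the summed log-probabilities, masked to the
    annotated regions. Optimizing it backpropagates through the gradient
    itself (a second-order cost), unlike the CDEP penalty."""
    x = x.detach().requires_grad_(True)
    logits = model(x)
    grads = torch.autograd.grad(F.log_softmax(logits, dim=1).sum(), x,
                                create_graph=True)[0]
    return F.cross_entropy(logits, y) + lam * ((annotation_mask * grads) ** 2).sum()
```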
"{\"title\": \"Author's response\", \"comment\": \"We would like to thank the reviewer for their time and thoughtful comments. We address their concerns below.\\n\\n\\u201cI like the high-level idea of this work and agree that there is not much work on using prediction explanations to help improve model performance. However, there are two major concerns of the model and experiment design. \\n\\nFirst, it seems like the proposed method requires whoever use it already know what the problem is.\\u201d\\n\\nWe agree - after a practitioner has found a flaw in their model or limitations in their training data (using any existing interpretation technique), our technique is designed to help rectify that flaw by altering the model.\\n\\n\\u201cMy question is that if we already know the bias or the mismatch, why not directly use this information in the regularization to penalize some features? Is it necessary to resort to some explanation generation methods?\\u201d\\n\\nIt is not clear to us how, exactly, we could \\u201cdirectly use this information in the regularization to penalize features\\u201d without resorting to explanations. Consider the skin cancer (ISIC) example shown in Figure 2. The image patches that we want the model to ignore occur at different places in different images, and the model used is a CNN. To us, it is unclear how to compute (let alone regularize) the contribution of a feature to a model/prediction without the use of explanations. Beyond the methods we compare against, we are not aware of any other way to do so.\\n\\nIf there is relevant prior work that we are missing that describes such a technique, we would love to take a look.\\n\\n\\u201cMy second concern is more like a personal opinion. In the experiment of section 4.2, if the colors are good indicators of these digits in the training set, I don't it is wrong for a model to capture these important features. However, the way of altering examples in the same class with different colors in training and test sets seems questionable, because now, the distributions of training and test images are different. On the other hand, if we already know color is the issue, why not simply convert the images into black-and-white? A similar argument can also be applied to the experiment in section 4.3\\u201d\\n\\nThis is a great thought, which merits some discussion. At first blush, we agree that these are simple problems, which could be solved in simpler ways.\\n\\nHowever, we feel these simulations studies are actually very useful in developing and evaluating algorithms (including, but not limited to explanation algorithms). The reason for this is that they present a \\u201cbare minimum\\u201d for prospective methods to clear, and provide a clear metric of success. Put simply, if a prospective method cannot solve something so simple, it is unlikely to be of use on any \\u201creal\\u201d, i.e. not simulated, datasets.\\n\\nIn the color MNIST example of section 4.2, for instance, we have a clearly defined, spurious correlation (color), which is easy to check if a method has successfully removed from a model. In this idealized setting, our method was able to partially remove the confounding, while other techniques fail completely (underperforming a random benchmark).\\n\\nOf course, passing the \\u201cbare minimum\\u201d is not sufficient to fully validate a method, which is why we included a very real and consequential example in section 4.1 on skin cancer detection. 
However, we do think that CDEP\\u2019s performance on simulations (both absolute and relative to baselines) provides additional, meaningful, evidence of its effectiveness. \\n\\nAs an aside, we are far from the first ones to use simulation studies like this to validate our methods. Color MNIST was in CVPR last year [1], and was also discussed in a keynote at ICLR 2019 (this is what led us to use it). The Decoy-MNIST dataset was introduced in [2], a fairly successful (90+ citation in 2 years) paper. \\n\\n[1] https://arxiv.org/pdf/1904.07911.pdf\\n[2] https://arxiv.org/pdf/1703.03717.pdf\"}",
"{\"title\": \"Author's response\", \"comment\": \"We would like to thank the reviewer for their thoughtful points. We have addressed their concerns below.\\n\\n\\u201cHowever, I am a bit worried that the proposed approach is somewhat ad hoc. I can imagine there are various explanations that can be generated for the same model. There can also be different prior knowledge available for a particular problems. Which prior knowledge and explanations to use seem to affect a lot about the learned model. But there is no principled approaches for making the selection.\\u201d\\n\\nThis is an excellent point. In short, this is, in some sense of the word, an ad hoc method - but we don\\u2019t think that is a bad thing. CDEP\\u2019s ability to incorporate different forms of prior knowledge is a necessary feature to enable practitioners to use it in a wide variety of settings. While CDEP lacks a formal, mathematical derivation, it produces strong empirical results.\\n\\n\\u201cI can imagine there are various explanations that can be generated for the same model.\\u201d\\n\\nWhen it comes to the mechanical details of our algorithm, CDEP is certainly ad hoc. In particular, we have no proof that CDEP is mathematically optimal/unique, and it is possible that there could be some other version of CDEP, which could produce better results. Such a version may use a different explanation algorithm, or a different approach for penalizing the explanations. \\n\\nHowever, for the methodological choices we made, we are able to show meaningful empirical improvements across a number of different datasets. While a uniqueness proof would be nice, we feel that our empirical results are sufficient to demonstrate the effectiveness of our method.\\n\\n\\u201cThere can also be different prior knowledge available for a particular problem\\u201d\\n\\nWe should be clear - CDEP is not a plug and play tool that can be blindly applied without any knowledge of the underlying data. Rather, CDEP requires a practitioner to carefully examine their model, and dataset. Subsequently, CDEP enables them to use their best judgement in determining what patterns are likely to generalize, and should be used by the model. \\n\\nThis type of \\u201cad hoc\\u201d analysis is critical for real-world uses of machine learning, and CDEP provides a useful tool for doing so. In our skin cancer example, without properly analyzing the model and data, and using CDEP, a practitioner would construct a model that learns to predict whether a patient has a band-aid. Using that band-aid predictor to help diagnose skin cancer would be problematic, to say the least.\\n\\n\\u201cBut there is no principled approaches for making the selection.\\u201d\\n\\nPractitioners can optimize their selections for predictive accuracy on an appropriate dataset.\\n\\n\\n\\u201cFor instance, consider the example in Figure 2 about the presence of patches. Isn't that a too specific knowledge about the dataset, which in turn makes the proposed approach not general? I have doubts on how useful a method is if it relies on such specific prior knowledge about the data.\\u201d\\n\\nAs we discussed above, Figure 2 is one example of the type of prior knowledge that CDEP can use. However, the general theme of models learning spurious correlations is a fairly common problem that should not require much motivation.\\n\\nFor other examples, within our paper, we\\u2019d point to our other results in 4.2 and 4.3, as well as prior work on penalizing explanations [1]. 
As noted by another reviewer, there is also a line of work on non-deep learning models in NLP surrounding annotator rationales [2] [3]. CDEP could also certainly be used in improving the fairness of a model (ensuring that a model does not discriminate based on sensitive attributes like gender, race, etc.). There have also been other failures in medical machine learning that could benefit from CDEP [4]. In the month since we completed this work, we\\u2019ve also come in touch with some biologists who will be using CDEP in their research. \\n\\n[1] https://arxiv.org/pdf/1703.03717.pdf\\n[2] https://www.aclweb.org/anthology/N07-1033.pdf\\n[3] https://arxiv.org/abs/1905.13714\\n[4] Slide 30: http://theory.stanford.edu/~ataly/Talks/berkeley_ig_talk_feb_2019.pdf (talk from co-creator of integrated gradients)\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose to add a regularizer to the loss function when training a prediction model. In particular, the regularizer considers explanations during the model training; if the explanations are not consistent with some prior knowledge, then explanation errors will be introduced.\\n\\nThe motivation for the proposed research is interesting and has some merit. However, I am a bit worried that the proposed approach is somewhat ad hoc. I can imagine there are various explanations that can be generated for the same model. There can also be different prior knowledge available for a particular problems. Which prior knowledge and explanations to use seem to affect a lot about the learned model. But there is no principled approaches for making the selection.\\n\\nIn some sense, standard regularizers such as L1 or L2 are are intrinsic regularizers, while the proposed regularizer is extrinsic regularizer. I think the extrinsic regularizer certainly has some merit, but it is also hard to regulate.\\n\\nFor instance, consider the example in Figure 2 about the presence of patches. Isn't that a too specific knowledge about the dataset, which in turn makes the proposed approach not general? I have doubts on how useful a method is if it relies on such specific prior knowledge about the data.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a way of using generated explanations of model predictions to help prevent a model from learning \\\"unwanted\\\" relationships between features and class labels. This idea was implemented with a particular explanation generation method from prior work, called contextual decomposition (CD). For a given feature, the corresponding CD can be used to measure its importance. The proposed learning objective in this work optimizes not only the cross entropy loss, but also the difference between the CD score of a given feature and its explanation target value. Experiments show that this new learning algorithm can largely improve the classification performance.\\n\\nI like the high-level idea of this work and agree that there is not much work on using prediction explanations to help improve model performance. However, there are two major concerns of the model and experiment design. \\n\\nFirst, it seems like the proposed method requires whoever use it already know what the problem is. For example, \\n\\n- in section 3.3, the model inputs include a collection of features and the corresponding explanation target values.\\n- in section 4.1, it is already known that some colorful patches only appear in some non-cancerous images but not in cancerous images. \\n- it is even more obvious in section 4.2 and 4.3, because in both experiments, the training and test examples were altered on purpose to create some mismatch. \\n\\nMy question is that if we already know the bias or the mismatch, why not directly use this information in the regularization to penalize some features? Is it necessary to resort to some explanation generation methods?\\n\\nMy second concern is more like a personal opinion. In the experiment of section 4.2, if the colors are good indicators of these digits in the training set, I don't it is wrong for a model to capture these important features. However, the way of altering examples in the same class with different colors in training and test sets seems questionable, because now, the distributions of training and test images are different. On the other hand, if we already know color is the issue, why not simply convert the images into black-and-white? A similar argument can also be applied to the experiment in section 4.3\\n\\nOverall, I like the idea of using explanations to help build a better classifier. However, I am concerned about the value of this work.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper presents a method intended to allow practitioners to *use* explanations provided by various methods. Concretely, the authors propose contextual decomposition explanation penalization (CDEP), which aims to use explanation methods to allow users to dissuade the model from learning unwanted correlations.\", \"The proposed method is somewhat similar to prior work by Ross et al., in that the idea is to include an explicit term in the objective that encourages the model to align with prior knowledge. In particular, the authors assume supervision --- effectively labeled features, from what I gather --- provided by users and define an objective that penalizes divergence from this. The object that is penalized is $\\\\Beta(x_i, s)$, which is the importance score for feature s in instance $i$; for this they use a decontextualized representation of the feature (this is the contextual decomposition aspect). Although the authors highlight that any differentiable scoring function could be used, I think the use of this decontextualized variant as is done here is nice because it avoids issues with feature interactions in the hidden space that might result in misleading 'attribution' w.r.t. the original inputs.\", \"The main advantage of this effort compared to work that directly penalizes the gradients (as in Ross et al.) is that the method does not rely on second gradients (gradients of gradients), which is computationally problematic. Overall, this is a nice contribution that offers a new mechanism for exploiting human provided annotations. I do have some specific comments below.\", \"I am not sure I agree with the premise as stated here. Namely, the authors write \\\"For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective\\\" -- I would argue that an explanation may be useful in and of itself by highlighting how a model came to a prediction. I am not convinced that it need necessarily lead to, e.g., improving model performance. I think the authors are perhaps arguing that explanations might be used to interactively improve the underlying model, which is an interesting and sensible direction.\", \"This work, which aims to harness user supervision on explanations to improve model performance, seems closely related to work on \\\"annotator rationales\\\" (Zaidan 2007 being the first work on this), but no mention is made of this. \\\"Do Human Rationales Improve Machine Explanations?\\\" by Strout et al. (2019) also seems relevant as a more recent instance in this line of work. I do not think such approaches are necessarily directly comparable, but some discussion of how this effort is situatied with respect to this line of work would be appreciated.\", \"The experiment with MNIST colors was neat.\", \"The authors compare their approach to Ross and colleagues in Table 1 but see quite poor results for the latter approach. Is this a result of the smaller batch size / learning rate adjustment? 
It seems that some tuning of this approach is warranted.\", \"Figure 3 is nice but not terribly surprising: The image shows that the objective indeed works as expected; but if this were not the case, then it would suggest basically a failure of optimization (i.e., the objective dictates that the image should look like this *by construction*). Still, it's a good sanity check.\"]}",
"{\"comment\": \"Thanks for your feedback on how to better customize your method to the color MNIST task. Based on your suggestion, we ran an experiment which penalized the variance between attribution of different color channels, yielding a new accuracy of 10.3%. Seeing as baseline (random) accuracy is 10% on this dataset, 10.3% is not a meaningful gain, especially relative to the 25.2% accuracy our method achieves. In fact, this solution occurs only at a high enough penalty rate that the training accuracy goes down to near random.\\n\\nWe will report these numbers, including the computational comparison, in an updated manuscript, when allowed to do so (after reviews have been returned).\", \"title\": \"Penalizing variance in attributions for color channels does not improve performance meaningfully\"}",
"{\"comment\": \"Hi, I\\u2019m one of the authors of the Attribution Priors (Expected Gradients) method. Thank you for the citation \\u2014 it\\u2019s always exciting to see more work on this relatively new research area!\\n\\nWe noticed that you said EG has high runtime and memory requirements because we recommend 200 samples per example - but our paper actually recommends exactly the opposite! In fact, all of our image experiments (MNIST and ImageNet in the supplement), use exactly 1 sample per example during training, which corresponds to no additional memory requirements and roughly the same training speed as Ross et al. (2017). This works because a single sample is an unbiased estimator for the true value of EG! Thus, this process regularizes the true value in expectation over many training steps. \\n\\nWe also notice that for the color MNIST problem, you choose to penalize the magnitude of the individual EG attributions (using an L2 penalty). One benefit of our attribution priors framework is that many human-intuitive priors (such as \\u201cattributions should be similar across color channels\\u201d) can be directly encoded as a penalty on the EG attributions. We believe such task-specific priors can lead to greatly improved performance.\\n\\nWe would be eager to see further comparisons with our method, and hope this insight allows for a more computationally-manageable workload.\", \"title\": \"Computational Efficiency of Expected Gradients\"}",
"{\"comment\": \"Hi Pankaj,\\n\\nThanks for your comment. By my count, we cited 18 papers proposing different interpretation techniques, as yours does. Given the number of papers in this space, we simply can't cite everything, particularly given that the relevance here is indirect - your paper focuses on RNNs, which is only 1 of our 4 examples, and we focus on uses of interpretation techniques, not developing them.\\n\\nI see that you have made similar comments on many other submissions this year, many with the exact same text. I do not think this is behavior that is good for our community.\", \"title\": \"We will not add reference\"}"
]
} |
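A note on the Expected Gradients exchange in the record above: two implementation details carry the argument there, namely that a single interpolation sample per example is an unbiased estimator of EG during training, and that a prior such as "attributions should be similar across color channels" can be encoded directly as a penalty on the attributions. The PyTorch sketch below illustrates both points under stated assumptions: `model`, `baseline`, and the penalty weight `lam` are hypothetical stand-ins, and this is not code from either paper under discussion.

```python
import torch
import torch.nn.functional as F

def expected_gradients_sample(model, x, baseline, target):
    """One-sample Monte Carlo estimate of Expected Gradients.

    A single (baseline, alpha) draw is an unbiased estimate of the
    full attribution, so regularizing it adds essentially no memory
    over a plain training step.
    """
    alpha = torch.rand(x.size(0), 1, 1, 1, device=x.device)
    point = baseline + alpha * (x - baseline)
    point.requires_grad_(True)
    out = model(point).gather(1, target.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(out, point, create_graph=True)
    return (x - baseline) * grad  # shape (B, C, H, W)

def channel_variance_penalty(attr):
    """The prior 'attributions should agree across color channels',
    written as the variance of attributions over the channel dim."""
    return attr.var(dim=1).mean()

def train_step(model, x, y, baseline, optimizer, lam=0.1):
    """Hypothetical color-MNIST training step with the prior."""
    loss = F.cross_entropy(model(x), y)
    attr = expected_gradients_sample(model, x, baseline, y)
    loss = loss + lam * channel_variance_penalty(attr)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that `create_graph=True` makes the penalty differentiable, so this style of explanation regularization still pays the double-backprop cost that, per Review #3 above, CD-based penalization avoids.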
BygzbyHFvB | FreeLB: Enhanced Adversarial Training for Natural Language Understanding | [
"Chen Zhu",
"Yu Cheng",
"Zhe Gan",
"Siqi Sun",
"Tom Goldstein",
"Jingjing Liu"
] | Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of BERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge. Experiments on CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of RoBERTa-large model on other tasks as well. | [
"freelb",
"adversarial training",
"model",
"natural language",
"experiments",
"maximal risk",
"input perturbations",
"effective",
"generalization",
"language models"
] | Accept (Spotlight) | https://openreview.net/pdf?id=BygzbyHFvB | https://openreview.net/forum?id=BygzbyHFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"S5FRMZhf59",
"Bkljp70sjB",
"Bklg57RosH",
"BJeQ87CjiS",
"rkeZXiHs9r",
"SyeaSzBZcS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798725875,
1573802947434,
1573802887934,
1573802826658,
1572719385407,
1572061765500
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1535/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1535/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1535/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1535/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1535/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper proposes a new algorithm for adversarial training of language models. This is an important research area and the paper is well presented, has great empirical results and a novel idea.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Thank you for the generous acknowledgement of our work! We agree that it is a good idea to add \\u201cnatural\\u201d into the title. However, we are only focusing on Natural Language Understanding tasks in this paper and have not tried deploying such a technology to pretraining language models for better feature representations. Therefore, we have changed the title to \\u201cFreeLB: Enhanced Adversarial Training for Natural Language Understanding\\u201d.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Thank you for the acknowledgement of our work and the valuable suggestions! We try to address all of your specific concerns and comments below.\\n\\n> \\\"- In tables 4 and 5, why are only results on RTE, CoLA, and MRPC presented? If this is because there was not noticeable difference on the other GLUE datasets, please mention it in the text.\\\"\\nWe were not able to finish the experiments on all tasks by the time of submission, since each evaluation is relatively expensive, taking at least 5 runs. During the discussion period, we have focused on providing more results of YOPO on other GLUE tasks, as shown in Table 8 in the Appendix. YOPO is an important variant of adversarial training method to compare with, and the results indicate a deteriorating performance with the increase of shallow-layer updates, something YOPO advocates. We have not been able to finish the experiments for evaluating the effect of variational dropout and comparing the embedding space invariance (Table 5) on the remaining GLUE tasks, but will definitely add them into our next version. We are not able to provide results on SuperGLUE for the moment due to its huge scale, but we have achieved more improvements on CommonsenseQA and ARC. The results have been integrated into the new version.\\n\\n> \\\"- \\u2026 did you do any error analysis on the models? Did they make different types of errors than models fine-tuned the vanilla way?\\\"\\nWe have compared the models with and without FreeLB based on the diagnostic information provided by the GLUE benchmark. For the ensembled RoBERTa-large models, except for Named Entities (35.0/45.9), Quantifiers (63.9/66.1), Common Sense (59.5/60.5), Interval/Numbers (31.3/38.9), Universal (Logic) (75.3/85.0), Relative Clauses (32.8/37.3), FreeLB demonstrates improvements on all the remaining 30 diagnostic metrics (Matthew\\u2019s Corr), with the most significant improvements in Morphological Negation (80.8/72.9), Negation (38.8/35.5), Conjunction (74.1/67.3), Disjunction (8.8/-3.1), Existential (Logic) (48.7/42.2), Temporal (Logic) (49.1/41.0), Anaphora/Coreference (54.2/48.8), Coordination Scopes (48.8/41.7). \\nFor the single BERT-base model, comparing with Jacob Devlin\\u2019s submission, except for Lexical Entailment (31.4/35.9), Symmetry/Collectivity (0/26.5), Redundancy (59.2/67.7), Structure Ellipsis/Implicits (35.2/39.4), Structure Datives (53.1/67.3), Structure Intersectivity (27/30.4), Structure Restrictivity (-19/-13.5), Negation (15.6/24), Conjunction (-12.1/-8.3), Existential (Logic) (32.4/33.7), Downward Monotone (Logic) (-72.9/-66.5), FreeLB demonstrates improvement on all remaining 25 diagnostic metrics. Consistent with results for RoBERTa, in this case FreeLB also shows significant improvements in Anaphora/Coreference (37.2/32.2), Core Args (37.2/29, 51.6/48.9 for RoBERTa), Morphological Negation (64.2/45), and Coordination Scopes (40.9/29.8). We will add this analysis in the final version.\"}",
"{\"title\": \"Thank the reviewers for their time!\", \"comment\": \"We would like to thank the reviewers for their time and their acknowledgement of our work. We have updated a new version, which corrected the spotted typos and includes more experimental results. Generally, by further exploring the hyperparameters, we have improved our highest single-model dev set accuracy from 78.64 to 78.81 on CommonsenseQA, and our ensembled model obtains an accuracy of 73.1, compared with RoBERTa (ensemble model) 72.5 in the leaderboard. We have also improved our dev/test set accuracy on ARC-Easy and ARC-Challenge from 84.56/85.35 (ensemble) and 67.56/67.32 (ensemble) to 84.91/85.44 (single-model) and 70.23/67.75 (single-model), remaining the first place on both leaderboards. We will add more ablation studies on all tasks of GLUE, and test our method on other important benchmarks in our future version.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors present a new adversarial training algorithm and apply it to the fintuning stage large scale language models BERT and RoBERTa. They find that with FreeLB applied to finetuning, both BERT and RoBERTa see small boosts in performance on GLUE, ARC, and CommonsenseQA. The gains they see on GLUE are quite small (0.3 on the GLUE test score for RoBERTa) but the gains are more substantial on ARC and CommonsenseQA. The paper also presents some ablation studies on the use of the same dropout mask across each ascent step of FreeLB, empirically seeing gains by using the same mask. They also present some analysis on robustness in the embedding space, showing that FreeLB leads to greater robustness than other adversarial training methods\\n\\nThis paper is clearly presented and the algorithm shows gains over other methods. I would recommend that the authors try testing their method on SuperGLUE because it's possible they're hitting ceiling issues with GLUE, suppressing any gains the algorithm may yield.\\n\\nQuestions,\\n- In tables 4 and 5, why are only results on RTE, CoLA, and MRPC presented? If this is because there was not noticeable difference on the other GLUE datasets, please mention it in the text.\\n- I realize that this method is meant to increase robustness in the embedding space, but did you do any error analysis on the models? Did they make different types of errors than models fine-tuned the vanilla way?\\n\\nCouple typos,\\n- Section 2.2, line 1: many -> much\\n- Section 4.2, GLUE paragraph: 88 -> 88.8\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper modifies and extends the recent \\u201cfree\\u201d training strategies in adversarial training for representation learning for natural language. The proposed \\u201cFree\\u201d Large-Batch Adversarial Training is well motived, in comparison with plain PGD-based adversarial training and the existing methods like FreeAT and YOPO, which virtually enlarges the batch size and minimize maximum risk at every ascent step. The contributions are solid.\", \"The proposed methods are empirically shown to be effective, in addition to being aligned with some recent theoretic analysis. The models achieve SOTA on GLUE (by time the paper was submitted; it is not the best model now but that does not affect the contributions), ARC, and the commonsenseQA dataset.\", \"The paper conducted good analysis demonstrating the effectiveness of the proposed components, including detailed ablation analysis.\", \"The paper is well written. It is well structured and easy to follow. A minor suggestion (just a personal view) is that the author(s) may consider using \\u201cnatural language\\u201d instead of just \\u201clanguage\\u201d in the title and may consider using more specific words like \\u201crepresentation\\u201d instead of \\u201cunderstanding\\u201d. But this is minor.\", \"I recommend an accept.\"]}"
]
} |
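Review #1 in the FreeLB record above summarizes the method as adding PGD-style ascent steps on a word-embedding perturbation while accumulating parameter gradients from every step, which "virtually enlarges the batch size". The sketch below is a minimal PyTorch rendering of that inner loop, not the authors' released code: `model` is a hypothetical callable mapping embeddings to logits, `embeds` is assumed already detached from the embedding lookup, and `K`, `adv_lr`, and `adv_eps` are illustrative rather than the paper's tuned values (this sketch also simplifies the norm handling to a single Frobenius norm).

```python
import torch
import torch.nn.functional as F

def freelb_update(model, embeds, labels, optimizer,
                  K=3, adv_lr=1e-1, adv_eps=1e-1):
    """One FreeLB parameter update.

    Runs K ascent steps on a perturbation `delta` added to the input
    embeddings. Each backward() call accumulates gradients into the
    model parameters, so one update averages the adversarial risk
    over K perturbed copies of the batch -- the "virtual batch
    enlargement" described in the review.
    """
    delta = torch.zeros_like(embeds).uniform_(-adv_eps, adv_eps)
    optimizer.zero_grad()
    for _ in range(K):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(embeds + delta), labels) / K
        loss.backward()  # adds to param grads AND populates delta.grad
        g = delta.grad.detach()
        # ascend delta along its normalized gradient,
        # then project back into the epsilon-ball
        delta = delta.detach() + adv_lr * g / (g.norm() + 1e-12)
        norm = delta.norm()
        if norm > adv_eps:
            delta = delta * (adv_eps / norm)
    optimizer.step()  # descend on the accumulated (averaged) gradient
```

Dividing each loss by `K` makes the accumulated gradient an average over the K perturbed batches rather than a sum, which is one way to read the "large-batch" in the method's name.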
rygf-kSYwH | Behaviour Suite for Reinforcement Learning | [
"Ian Osband",
"Yotam Doron",
"Matteo Hessel",
"John Aslanides",
"Eren Sezener",
"Andre Saraiva",
"Katrina McKinney",
"Tor Lattimore",
"Csaba Szepesvari",
"Satinder Singh",
"Benjamin Van Roy",
"Richard Sutton",
"David Silver",
"Hado Van Hasselt"
] | This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source this http URL, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines, Dopamine as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers. | [
"reinforcement learning",
"benchmark",
"core issues",
"scalability",
"reproducibility"
] | Accept (Spotlight) | https://openreview.net/pdf?id=rygf-kSYwH | https://openreview.net/forum?id=rygf-kSYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7x_6G9OVWG",
"BJx_KB2tiH",
"H1xBKQnFor",
"HygW6_h_sr",
"rJlY979dsS",
"BkeF_Lt_jH",
"HyxWDXpLsH",
"H1l-LcTGjS",
"BJl7GyRZsr",
"B1gnI0T-iH",
"ryx14o6WoH",
"r1xvfq6WiH",
"SJgEVpbAFr",
"rkxk2BR3YH",
"rJxjmH6otS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725847,
1573664128316,
1573663613497,
1573599417193,
1573589905496,
1573586545457,
1573471065064,
1573210696752,
1573146379289,
1573146196014,
1573145383364,
1573145102601,
1571851563782,
1571771814990,
1571702050695
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1534/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1534/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1534/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1534/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a platform for benchmarking and evaluating reinforcement learning algorithms. While reviewers had some concerns about whether such a tool was necessary given existing tools, reviewers who interacted with the tool found it easy to use and useful. Making such tools is often an engineering task and rarely aligned with typical research value systems, despite potentially acting as a public good. The success or failure of similar tools rely on community acceptance and it is my belief that this tool surpasses the bar to be promoted to the community at a top tier venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review summary + thanks to the reviewers\", \"comment\": \"Once again, we would like to thank the reviewers for their efforts and help in improving this paper.\", \"during_the_review_process_we_have\": \"1) Added an explicit related work section, which clarifies the position of bsuite with respect to prior work, and the novelty in this project.\\n2) Clarified links to the opensource code, which reviewers have agreed is generally of high quality.\\n3) Made the connections between theory and practice more explicit, together with surfacing the \\\"example reports\\\" more clearly in Section 3.\\n\\nOverall, although there is still one reviewer who tends towards rejection, even that reviewer says:\\n- The paper is well written, easy to understand. \\n- Provide an industry level code base that can be used efficiently and easily.\\n- The project will be of great value to the research community in the near future.\\n\\nWe hope that following our productive discussion with reviewers, together with clarifications on the novelty of this project, that this means the positive aspects of the project shine through enough to recommend acceptance.\\n\\nMany thanks\"}",
"{\"title\": \"Clarifying requests\", \"comment\": \"Thank you again for your engagement.\\n\\n## Examples of bsuite for diagnosis\\n\\nMy belief is that we have already included three separate examples of how to use bsuite for diagnosis in Appendices C,D,E\\n\\n C - A comparison of DQN, Actor Critic RNN, Bootstrapped DQN and Random agent\\n D - A comparison of \\\"optimizer\\\" algorithm in DQN (SGD, Adam, RmsProp)\\n E - A comparsion of ensemble size in Bootstrapped DQN (size=1, 3, 10, 30)\\n\\nThese show how running on bsuite you can get a snapshot of the agent performance across these targeted dimensions.\\nOther common examples would include \\\"reimplementing\\\" a baseline agent, and then trying to compare the performance that you would expect to obtain.\\n\\nAre there other types of examples that you feel we should include?\\n\\n\\n## Opinionated statements\\n\\nWe believe that part of the value of this paper is in arguing for a principled, focused methodology for research into the core issues in RL research.\\nIt is clear that this specific sentence was not successful in your review, so we are happy to remove it.\\n\\nMany thanks\"}",
"{\"title\": \"Re: Windows support + \\\"baselines\\\" vs bsuite\", \"comment\": \"Yes indeed this is what I understood (the distinction between bsuite and the baseline agents) -- I just wanted to try and run some quick experiment to see \\\"something\\\", but realized that it was not entirely trivial to run the baseline agents under Windows...\\nDo not worry though, I will not penalize your submission because of it (if I had more time I am sure I could get some of them to run).\"}",
"{\"title\": \"Windows support + \\\"baselines\\\" vs bsuite\", \"comment\": \"We are very glad to hear that the scripts live up to your expectation!\\n\\nRegarding difficulties running code on Windows... my understanding is that this should not be the case... but obviously offering tech support via anonymous review is a difficult proposal.\\n\\nI think it's important to separate the core bsuite code, which should work just fine as you say via the pip install, from the \\\"baseline\\\" agents that are examples of specific agent implementations, but not core to anyone else using bsuite with their own working agent.\\n\\nBy default, installing via:\\npip install -e bsuite/\\nWill *not* include the baseline dependencies... this is a conscious choice since TF versioning/Sonnet can be difficult to manage.\\n\\nFor example the agents implemented in Sonnet will not work on windows, but there is not much we can do about that:\", \"https\": \"//github.com/deepmind/sonnet/issues/18\\n\\nIf you have a working implementation of an agent, hopefully the example scripts show that it can be very simple to plug this agent into bsuite.\\nIf you are on Windows and really want to use our baseline agents, we do provide colabs that can allow you to run this without installing anything on your local machine... so maybe that is an option?\\n\\n\\nOverall, we are really glad that you are engaging with this effort and hope that we can continue to make the user experience even more seemless.\\nHopefully these aspects of the paper will make you keen to recommend our acceptance, either through raising your score, or pushing for acceptance in the post-rebuttal stage.\\n\\nMany thanks!\"}",
"{\"title\": \"Re: Examples of code with OpenAI Gym + Dopamine\", \"comment\": \"Thank you!\\n\\nRegarding Windows support, the \\\"pip install -e bsuite\\\" worked well, but I ran into issues trying to run the example scripts, between sonnet not being supported under Windows, not having mujoco, or some code not being TF2-compatible... so not as straightforward as one might have hoped.\\n\\nThat being said, I had a look at the example scripts you mentioned and they are in line with what I was hoping for in terms of quality / simplicity.\"}",
"{\"title\": \"Update after author response\", \"comment\": \"Thank you for your response. It clarified some of the concerns that I had, so I changed the score. I still think the paper can be improved by including an example of how bsuite can be used for diagnosis purposes and also by revising the opinionated statements which are not directly related to the purpose of the paper (such as the one I mentioned in my earlier comment).\"}",
"{\"title\": \"Examples of code with OpenAI Gym + Dopamine\", \"comment\": \"To be more specific regarding examples of using OpenAI Gym or Dopamine Frameworks:\", \"openai_dqn\": \"\", \"https\": \"//anonymous.4open.science/repository/0a9b6721-69c6-42d6-b587-401e0898bfc8/bsuite/baselines/dopamine_dqn/run.py\\n\\n\\nGetting set up on Windows should actually be very simple.\\nWe also provide example launch scripts for running on Google Cloud in the README.md section \\\"Running experiments on Google Cloud Platform\\\"\\n\\nThe inlcluded scripts provide a step by step way to run any of our baseline agents immediately.\\nThis means that it should be possible to prototype an agent in Colab, then run the whole sweep via GCP.\\n\\n\\nMany thanks\", \"openai_ppo\": \"\", \"dopamine_dqn\": \"\"}",
"{\"title\": \"Updating review score\", \"comment\": \"(forgot to mention this above)\\n\\nWe hope that our revision + response is able to answer your concerns... or if not, please know that we are eager to do this in a secondary revision.\\n\\nIf it is enough, then we hope that you will be happy to upgrade your score.\\n\\nMany thanks\"}",
"{\"title\": \"Thank you for your review: we hope our revision will address your concerns\", \"comment\": \"Thank you very much for your review, we hope our revision will address your concerns.\\n\\nQ1 - Relating performance to theoretical accounts\\n\\nWe have added a clarification that our RNN agent was implemented with backprop through time of exactly 30 timesteps, so that the sharp performance transition is exactly evidence of this theory <-> practice interplay.\\nWe had intended to make this clear in the paper, but somehow had forgot to make this explicit.\\n\\nSimilarly, the results of 2.2 are designed to highlight the role of theory-inspired algorithms that outperform the 2^N bound for dithering approaches to exploration.\\nWe have clarified the meaning of this dashed line at 2^N to try to make this interplay more clear.\\n\\n\\nQ2 - Novelty\\n\\nBased on your feedback, we have added Section 1.4 on Related Work.\\nHere we make a better effort to place bsuite in the context of other benchmarks in RL, and to highlight the specific novelty that we offer.\\n\\nWe believe The Behaviour Suite for Reinforcement Learning offers a complementary approach to existing benchmarks in RL, with several novel components:\\n\\n- bsuite experiments enforce a specific methodology for agent evaluation beyond just the environment definition.\\nThis is crucial for scientific comparisons and something that has become a major problem for many benchmark suites (Section 2).\\n\\n- bsuite aims to isolate core capabilities with targeted `unit tests', rather than integrate general learning ability.\\nOther benchmarks evolve by increasing complexity, bsuite aims to remove all confounds from the core agent capabilities of interest (Section 3).\\n\\n- bsuite experiments are designed with an emphasis on scalability rather than final performance.\\nPrevious `unit tests' (such as `Taxi' or `RiverSwim') are of fixed size, bsuite experiments are specifically designed to vary the complexity smoothly (Section 2). 
\\n\\n- Our open source code has an extraordinary emphasis on the ease of use, and compatibility with RL agents not specifically designed for bsuite\\nEvaluating an agent on bsuite is practical even for agents designed for a different benchmark (Section 4).\\n\\n\\nQ3 - Opinionated statements\\n\\nWe agree that, at points, we are making an opinionated case for the value of a certain style of research.\\nWe hope that we make this clear that it is meant as a *complementary* approach, and that this does not hurt the clarity of our message.\\nIf there are particular lines you would prefer us to remove or rewrite (potentially that one you highlight) then we will be happy to do this.\\n\\n\\nQ4 - Example analyses\\n\\nWe have included several example bsuite analyses but, in the interests of space, we have relegated these to the Appendices C,D,E.\", \"these_include\": \"C - A comparison of DQN, Actor Critic RNN, Bootstrapped DQN and Random agent\\n D - A comparison of \\\"optimizer\\\" algorithm in DQN (SGD, Adam, RmsProp)\\n E - A comparsion of ensemble size in Bootstrapped DQN (size=1, 3, 10, 30)\\n\\nWe hope that these can provide examples of how bsuite can drive interesting research.\\nWe have edited the section to make these analyses more prominent.\\n\\n\\nQ5 - Other aspects of RL\\n\\nIt is clear that our release of bsuite does not cover all the interesting questions in RL.\\nOur aim is to set up a tool that covers *some* interesting diagnostic tools, with the aim of collecting as many as possible of the *best* experiments going forward.\\n\\nWe would love to incorporate excellent experiments that instatiate the dynamics bottleneck, planning horizon and more... we just write an excellent experiment that really measures this well.\\nIf you, or anyone else, can submit this to the [email protected] (or even via github pull) then we would be able to incorporate this very easily.\", \"minor\": \"We have taken these into account .\\n\\n\\nMany thanks!\"}",
"{\"title\": \"Thank you for your review: we hope that our revision will make the value proposition more clear\", \"comment\": \"Thank you very much for your review!\\n\\nBased on your feedback we have added Section 1.4, which outlines the relation to prior work more explicitly.\\nWe also hope that this section makes the *novelty* of our project much more clear.\\n\\nWe believe The Behaviour Suite for Reinforcement Learning offers a complementary approach to existing benchmarks in RL, with several novel components:\\n\\n- bsuite experiments enforce a specific methodology for agent evaluation beyond just the environment definition.\\nThis is crucial for scientific comparisons and something that has become a major problem for many benchmark suites (Section 2).\\n\\n- bsuite aims to isolate core capabilities with targeted `unit tests', rather than integrate general learning ability.\\nOther benchmarks evolve by increasing complexity, bsuite aims to remove all confounds from the core agent capabilities of interest (Section 3).\\n\\n- bsuite experiments are designed with an emphasis on scalability rather than final performance.\\nPrevious `unit tests' (such as `Taxi' or `RiverSwim') are of fixed size, bsuite experiments are specifically designed to vary the complexity smoothly (Section 2). \\n\\n- Our open source code has an extraordinary emphasis on the ease of use, and compatibility with RL agents not specifically designed for bsuite\\nEvaluating an agent on bsuite is practical even for agents designed for a different benchmark (Section 4).\\n\\n\\nOverall, we are delighted that you agree:\\n- The paper is well written, easy to understand. \\n- Provide an industry level code base that can be used efficiently and easily.\\n- The project will be of great value to the research community in the near future.\\n\\nWe believe that, following the changes we have made to the paper, there might be good reason for you to change your review to an \\\"accept\\\".\\n\\nMany thanks\"}",
"{\"title\": \"Thank you for your review - we hope to address your main concerns with this revision\", \"comment\": \"We thank you for your time and comments.\", \"to_address_your_main_concerns\": \"1 - Related work\\n\\nThis was a clear omission in paper and we have now added a section 1.4 on related work that hopefully clarifies some of these issues.\\nIf this discussion is still insufficient then we are absolutely happy to improve this further.\\n\\nOne small thing to note is that we want to emphasize these bsuite \\\"experiments\\\" (or tasks) are more than just the RL environment... since they include both the interaction and the analysis.\\n\\n\\n2 - Limited scope\\n\\nWe hope to grow the bsuite to incorporate as many *excellent* experiments for core RL capabilities as possible.\\nHowever, we also anticipate that many of the most difficult problems in RL will remain too complex to distill to a simple bsuite example.\\n\\nOur goal is to collect the best simple, diagnostic tests of Core RL capabilities.\\nAll of the examples that you list are potential candidates for inclusion *if* we can make an excellent experiment that captures some essence of that problem.\\nHowever, we erred on the side of including *less* where we were unsure of whether we could really make an excellent bsuite experiment.\\n\\nHowever, even if the set of bsuite tasks remains relatively simple, it might still play a useful role in the *other* parts of RL research where we don't currently have good experiments.\\nA researcher interested in multi-agent RL might still gain some value when running their agent on bsuite... even if it currently does not include a specifically multi-agent experiment.\\n\\n\\n3 - Anonymized code\\n\\nWe actually submitted an anonymized version of the code together with the initial paper submission. It is linked at the top of this page:\", \"https\": \"//anonymous.4open.science/r/0a9b6721-69c6-42d6-b587-401e0898bfc8/\\n\\nThe confusion may have come from our paper where the link says \\\"github.com/anon/bsuite\\\" but actually clicking the link would also have taken you to that address.\\n\\nWe believe that you will find both the answers (a) and (b) to be positive.\", \"minor_remarks\": \"- We have taken each of these into account and made appropriate changes to the paper\\n\\n- Fig 2b grey line represents a baseline 2^N learning time, or a baseline scaling for agents that do not perform deep exploration.\\n\\n- We rewrote the last sentences of Section 4. This may make more sense when looking at the code, where we have a more explicit example of asking for bsuite environments with custom OBSERVATION_SPEC.\\n\\n\\nOverall we are happy that you agreed with us on the value of our submission.\\nWe hope that, through our revision, we are able to satisfy your remaining concerns and convert your score to an \\\"accept\\\".\\n\\nMany thanks\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents the \\u00ab Behavior Suite for Reinforcement Learning \\u00bb (bsuite), which is a set of RL tasks (called \\u00ab experiments \\u00bb) meant to evaluate an algorithm\\u2019s ability to solve various key challenges in RL. Importantly, these experiments are designed to run fast enough that one can benchmark a new algorithm within a reasonable amount of time (and money). They can thus be seen as a \\u00ab test suite \\u00bb for RL, limited to small toy problems but very useful to efficiently debug RL algorithms and get an overview of some of their key properties. The paper describes the motivation behind bsuite, shows detailed results from some classical RL algorithms on a couple of experiments, and gives a high-level overview of how the code is structured.\\n\\nI really believe such a suite of RL tasks can indeed be extremely useful to RL researchers developing new algorithms, and as a result I would like to encourage this initiative and see it published at ICLR to help it gain additional traction within the RL community.\\n\\nThe paper is easy to read, motivates well the reasons behind bsuite, and shows some convincing examples. However, in my opinion there remain a few important issues with this submission:\\n\\n1.\\tThere is no \\u00ab related work \\u00bb section to position bsuite within the landscape of RL benchmarks (ex: DMLab, ALE / MinAtar, MuJoCo tasks, etc.). I believe it is important to add one.\\n\\n2.\\tThe current collection of experiments appears to be quite limited. The authors acknowledge the lack of hierarchical RL, but what about other aspects like continuous control, parameterized actions, multi-agent, state representation learning, continual learning, transfer learning, imitation learning / inverse RL, self-play, etc? It is unclear to me whether the goal is to grow bsuite in all these directions (and more) over time, or if there is some kind of \\u00ab boundary \\u00bb the authors have in mind regarding the scope of bsuite. Regardless, the fact is that in its current form, bsuite appears to be suited only to a limited subset of current RL research.\\n\\n3.\\tI wish an anonymized version of the code had been provided, so that reviewers could test it. In particular I wonder (a) if it is easy to setup and run under Windows, and (b) if it is straighforward to plug a bsuite experiment within an algorithm based on the popular OpenAI gym API (I think the latter is true from what is said at the end of Section 4, but I would have appreciated being able to try it out myself).\", \"additional_minor_remarks\": \"\\u2022\\tI noticed two anoymity-related issues with the provided links: (1) the Google Colab notebook revealed to me the name of its author when clicking the \\u00ab Open in Playground \\u00bb link to be able to run it, and (2) the bsuite-tutorial link asks for permission, which might let the authors access reviewer info. I would not hold it against the authors though as I believe these are genuine mistakes and they did their best to preserve anonymity.\\n\\u2022\\tt > 2 in Section 2.1 should probably be t >= 2\\n\\u2022\\tIn FIg. 2b the label for the y axis seems incorrect since good results are near 0\\n\\u2022\\tPlease explain what is the dashed grey line in Fig. 
4b\\n\\u2022\\tI was unable to understand the last 2 sentences of Section 4\\n\\u2022\\tSections C.2, D.2 and E.2 all have the same plots\\n\\u2022\\tA few typos: incomplete sentence near bottom of p.3 (\\u00ab the internal workings\\u2026 \\u00bb), \\u00ab These assessment \\u00bb, \\u00ab expeirments \\u00bb, \\u00ab recurrant \\u00bb, \\u00ab length ? 1 \\u00bb, \\u00ab together with an analysis parses this data \\u00bb, \\u00ab anonimize \\u00bb, \\u00ab bsuite environments by implementing \\u00bb, \\u00ab even if require \\u00bb\", \"review_update\": \"the authors have addressed my concerns, and I look forward to using bsuite in my research => review score increased to \\\"Accept\\\"\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Behaviour Suite for Reinforcement Learning\\n\\nIn this paper the authors provide a set of light-weighted but dedicated designed environments, so that researchers can use the environments as a quick indication of the ability of the proposed (or existing) algorithms.\\nI think the paper is well-written, with the intuition clearly demonstrated.\\n\\nI tend to vote for rejection though, given that the novelty in the project is relatively limited.\\nBut I believe in general it is a very valuable project that will be beneficial to future research and I would like to recommend for a workshop publication.\", \"pros\": [\"The paper is well written, easy to understand.\", \"Provide an industry level code base that can be used efficiently and easily.\", \"The project will be of great value to the research community in the near future.\"], \"cons\": [\"The novelty of the project is relatively limited.\", \"The proposed and implemented environments have been studied before.\", \"No explicit conclusion from the evaluation.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": [\"In this paper, the authors propose a set of benchmarks for evaluating different aspects of reinforcement learning algorithms such as generalisation, exploration, and memory. The aim is to provide a set of simple environments to better understand the RL algorithms and also to provide a set of scores that summarise the performance in each respect. The code of the benchmark is also released.\", \"The paper is well written and clear, and generally can provide a useful contribution. In particular, I like the idea of having a set of benchmarks which can be used for the diagnosis of RL algorithms. Having said this, I have the following concerns which are mostly related to the presentation of the paper. Given clarifications in an author response, I would be willing to increase the score.\", \"Based on section 1.1 and elsewhere, it seems that the main driver for developing this benchmark has been connecting theory to practical algorithms (which in my opinion is an important step). However, how this can be achieved using the proposed benchmark is not shown in the paper. This can be for example showing how the generalisation score proposed here is linked to theoretical accounts. Or for example in section 2.1, by showing that the memory length 30 for RNN is related to the theoretical expectations. Alternatively, if linking theory and experiments is not the main driver of this work, then it seems a bit unclear what the point of presenting section 1.1 (and other related discussions) is within the context of the paper.\", \"In terms of novelty, currently the differences between the current work and previous attempts to develop benchmarks is unclear (some examples are mentioned below). In general, a related work section is vital here, but missing in the paper. It should clearly state what the previous attempts in developing benchmarks are, their shortcomings, and how the current work addresses them.\", \"Some statements in the paper sound more like opinions (which I happen to agree with) rather than something being based on the results of the paper. For example, \\\"We should not turn away from deep RL just because our current theory is not yet developed\\\". It is unclear how this statement is related to the results obtained in this work.\", \"In section 3, I would like to see some real examples in which bsuite can be used for diagnosis. I find this application of bsuite (diagnosis) very interesting, but as it stands section 3 is more like a tutorial rather than providing a concrete example.\", \"There are some aspects of RL which are specific to certain classes of RL. For example, in model-based RL, aspects such as the dynamics bottleneck and the planning horizon dilemma have been previously looked at, but are not presented in bsuite. How do the authors envision incorporating such aspects into their framework?\"], \"minor\": [\"\\\"anything for length \\u00bf 1\\\" -> replace \\u00bf\", \"what is the dashed grey line in Fig 4b?\"], \"references\": \"Duan, Yan, et al. \\\"Benchmarking deep reinforcement learning for continuous control.\\\" International Conference on Machine Learning. 2016.\\n\\nBenchmarking Model-Based Reinforcement Learning, Wang et al, 2019.\"}"
]
} |
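Much of the bsuite discussion above turns on how little code it takes to plug an agent in. Based on the dm_env-style interface the thread describes, a run loop looks roughly like the sketch below. The names used (`load_and_record_to_csv`, `load_from_id`, `GymFromDMEnv`, `sweep.SWEEP`, `bsuite_num_episodes`) follow the public bsuite repository, but treat the exact signatures as assumptions rather than a verified API reference.

```python
import numpy as np
import bsuite
from bsuite import sweep
from bsuite.utils import gym_wrapper

# Load one experiment; interactions are logged to CSV for the
# automated analysis the paper describes.
env = bsuite.load_and_record_to_csv('catch/0', results_dir='/tmp/bsuite')
num_actions = env.action_spec().num_values

# dm_env-style episode loop with a uniformly random agent.
for _ in range(env.bsuite_num_episodes):
    timestep = env.reset()
    while not timestep.last():
        action = np.random.randint(num_actions)
        timestep = env.step(action)

# Agents written against the OpenAI Gym API can use the wrapper
# instead, as suggested at the end of Section 4 of the paper:
gym_env = gym_wrapper.GymFromDMEnv(bsuite.load_from_id('catch/0'))

# A full evaluation repeats the loop over every id in the sweep:
print(f'{len(sweep.SWEEP)} bsuite environments in total')
```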
HJlWWJSFDH | Strategies for Pre-training Graph Neural Networks | [
"Weihua Hu*",
"Bowen Liu*",
"Joseph Gomes",
"Marinka Zitnik",
"Percy Liang",
"Vijay Pande",
"Jure Leskovec"
] | Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction. | [
"Pre-training",
"Transfer learning",
"Graph Neural Networks"
] | Accept (Spotlight) | https://openreview.net/pdf?id=HJlWWJSFDH | https://openreview.net/forum?id=HJlWWJSFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9vWLSWYvh1",
"rJgKcidXir",
"BJgEvjdmsH",
"SJlhViuQsH",
"rkesRlWXoH",
"BkxPCGTX9B",
"SkxAD0ECtB",
"BJeDySijKr",
"HJeOL_2WtS",
"BkgPVKOq_S",
"HkgiRWKV_S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798725818,
1573256081480,
1573256027632,
1573255987904,
1573224659283,
1572225742654,
1571864166202,
1571693791463,
1571043408194,
1570568494912,
1570177490759
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1533/Authors"
],
[
"~Simon_Shaolei_Du1"
],
[
"ICLR.cc/2020/Conference/Paper1533/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1533/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1533/AnonReviewer3"
],
[
"~Junhyun_Lee1"
],
[
"ICLR.cc/2020/Conference/Paper1533/Authors"
],
[
"~Junhyun_Lee1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"All three reviewers are consistently positive on this paper. Thus an accept is recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"We thank the reviewer for acknowledging the novelty of our work and for noting that our experiments are thorough.\\n\\nThank you for pointing out a related preprint by Z. Hu et al. [arXiv:1905.13728]. We note the work by Z. Hu et al. was developed independently and concurrently to our work here, and we were not aware of it at the time of writing our paper. We shall cite the preprint and include a discussion in our paper. \\n\\nBriefly, the key difference between our work and that of Hu et al. is that Hu et al. consider a more restrictive setting where graphs are completely unlabeled (i.e., graphs have no node features). Hu et al. then focus on extracting generic graph properties of unlabeled graphs by pre-training on randomly-generated graphs. While the approach is interesting, the limitation of such an approach is that it improves performance only marginally over ordinary supervised classification of the original attributed graphs. This is because it is hard for random unlabeled graphs to capture domain-specific knowledge that is useful for a specific application. Moreover, in practice, graphs tend to have labels together with rich node and edge attributes, but Hu et al.\\u2019s approach cannot naturally leverage such attribute information, which then results in limited gains. \\n\\nIn principle, we could compare our approach against Hu et al., however, right now, this would be extremely challenging because of the following reasons. (1) We cannot find a public implementation of Hu et al.\\u2019s approach for reliable comparison. (2) Reimplementing their method requires knowledge of many specific implementational details and design choices (feature extraction, graph generation, etc.), which are not discussed in their preprint. (3) Finally, their pre-trained GNN operates on unlabeled graphs, and so it cannot be directly applied to our datasets of labeled graphs.\\n\\nLastly, in contrast to Hu et al., our work focuses on important real-world domains, where one wants to pre-train GNNs by utilizing the abundant graph, node, and edge attributes. Importantly, our approach is able to learn a domain-specific data distribution that is useful for downstream prediction. We demonstrate on two application domains that such practical settings (i.e., labeled graphs with naturally-given node and edge attributes) are very important to consider and that our pre-training can substantially improve model performance.\"}",
"{\"title\": \"Re: Official Blind Review #1\", \"comment\": \"We thank the reviewer for acknowledging the technical aspects of the paper and for noting that our\\u200b \\u200bresults\\u200b \\u200bare\\u200b \\u200bsolid\\u200b \\u200band\\u200b \\u200bour\\u200b \\u200banalysis\\u200b \\u200bis\\u200b \\u200bthorough.\", \"re\": \"Linear time complexity in Appendix F\\nWe acknowledge that the time complexity of our pre-training methods was not well explained in Appendix F. In Figure 2 (a) we show that we only sample one node per graph. We then use breadth-first search to extract a K-hop neighborhood of the node, which takes at most linear time with respect to the number of edges in the graph. As a result, pre-training via context prediction has linear time complexity. We will edit Appendix F to include more detailed information and cover this important point. \\n\\nPlease let us know if you have any further questions or comments!\"}",
"{\"title\": \"Re: Official Blind Review #2\", \"comment\": \"We thank the reviewer for insightful feedback and for noting that our\\u200b experiments \\u200bare\\u200b \\u200bsolid\\u200b and our setup and analyses are sound. The reviewer asks great questions, and we provide the answers below.\", \"re\": \"Analysis of different pre-training strategies\\nThank you for bringing up this valuable point. We agree that it is important to understand why some pre-training strategies work better over others. Our key insight backed up with extensive empirical evidence is that a combination of graph-level and node-level methods (Figure 1) is important because it allows the model to capture both local and global semantics of graphs. Further, we find that our structure-based node-level methods (Context Prediction and Attribute Masking) are preferred over position-based node-level methods (Edge Prediction, Deep Graph Infomax). As future work, we plan to further investigate what graph-level and node-level methods are most useful in different domains, and understand what domain-specific knowledge has been learned by the pre-trained models.\"}",
"{\"title\": \"Related Work\", \"comment\": \"Dear authors,\\nThis is a very interesting paper! We would like to draw your attention to our recent paper: https://arxiv.org/abs/1905.13192\\nOn PTC, our graph neural tangent kernel achieves 67.9, which is the best result to date (we are aware of).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors introduce strategies for pre-training graph neural networks. Pre-training is done at the node level as well as at the graph level. They evaluate their approaches on two domains, biology and chemistry on a number of downstream tasks. They find that not all pre-training strategies work well and can in fact lead to negative transfer. However, they find that pre-training in general helps over non pre-training.\\n\\nOverall, this paper was well written with useful illustrations and clear motivations. The authors evaluate their models over a number of datasets. Experimental construction and analysis also seems sound.\\n\\nI would have liked to see a bit more analysis as to why some pre-training strategies work over others. However, the authors mention that this is in their planned future work.\\n\\nAlso, in figure 4, the authors mention that their pre-trained models tend to converge faster. However, this does not take into account the time already spent on pre-training. Perhaps the authors can include some results as to the total time taken as well as amortized total time over a number of different downstream tasks.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes new pre-training strategies for GNN with both a node-level and a graph-level pretraining. For the node-level pretraining, the goal is to map nodes with similar surrounding structures to nearby context (similarly to word2vec). The main problem is that directly predicting the context is intractable because of combinatorial explosion. The main idea is then to use an additional GNN to encode the context and to learn simultaneously the main GNN and the context GNN via negative sampling. Another method used is attribute masking where some masked node and edge attributes need to be predicted by the GNN. For graph-level pretraining, some general graph properties need to be predicted by the graph.\\nExperiments are conducted on datasets in the chemistry domain and the biology domain showing the benefit of the pre-training.\\n\\nThe paper addresses an important and timely problem. It is a pity that the code is not provided. In particular, the node-level pretraining described in section 3.1.1. seems rather complicated to implement as a context graph needs to be computed for each node in the graph. In particular I do not think the satement 'all the pre-training methods are at most linear with respect to the number of edges' made in appendix F is correct.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes pre-training strategies (PT) for graph neural networks (GNN) from both node and graph levels. Two new large-scale pre-training datasets are created and extensive experiments are conducted to demonstrate the benefits of PT upon different GNN architectures. I am relative positive for this work. Detail review of different aspects and questions are as follows.\", \"novelty\": \"As far as I know, this work is among the earliest works to think about GNN pre-training. The most similar paper at the same period is [Z Hu, arXiv:1905.13728]. I read both papers and found they have similar idea about PT although they have different designs. This paper leverages graph structure (e.g., context neighbors) and supervised labels/attributes (e.g., node attributes, graph labels) for PT. These strategies are not surprising for me and the novelty is incremental.\", \"experiment\": \"The experiments are overall good. The authors created two new large scale pre-training graph datasets. Experimental results of different GNN architectures w/o different PT for different tasks are provided. Comparing to non-pretraining GNN, the improvements are significant for most cases.\", \"writing\": \"The writing is good and easy to follow.\", \"questions\": \"I would like to see more discussion about difference between this work and [Z Hu, arXiv:1905.13728]. Comparing to the other work, what are strengths of this work? In addition, have the authors compared the performances of their work and [Z Hu, arXiv:1905.13728] using the same data?\"}",
"{\"comment\": \"Thank you for your reply.\\n\\nThese contributions will bring many benefits to the communities (ML, Bio, and Chemical).\\nBecause the pre-training GNNs is really essential but was a challenging problem.\\n\\nI am looking forward to the available URLs for data and models.\\n\\nNice work!\", \"title\": \"Re: Re: Available repository for Reproducibility\"}",
"{\"comment\": \"We thank the reader for his comments and suggestions. We are in total agreement with the suggestions.\\nWe are working on a comprehensive project website where we will be releasing clean and easy to use data \\ntogether with the splits, which will greatly help the community to move beyond small graph classification benchmarks.\\n\\nWe are also working on releasing the code as well as the final pre-trained models so that the community\\ncan benefit from this work and use the models/data for other downstream prediction tasks.\", \"title\": \"Re: Available repository for Reproducibility\"}",
"{\"comment\": \"Because the pre-trained models are commonly used for various down-stream tasks, I think there should be available URL for codes and pre-trained weights to test its scalability (transferability).\\n\\nSecondly, \\\"Out-of-distribution prediction (scaffold split)\\\" is conducted at this work, so it would be better if there are URL providing dataset split.\\n\\nThird, as I found, there are several settings for download of large scale dataset (ZINC), therefore the providing of large scale dataset you used makes this work more reproducible.\\n\\n\\n* To sum up, could you please provide the URL for codes, large scale datasets, and down-stream datasets (with scaffold split)?\", \"title\": \"Available repository for Reproducibility\"}"
]
} |
S1eWbkSFPS | GRAPHS, ENTITIES, AND STEP MIXTURE | [
"Kyuyong Shin",
"Wonyoung Shin",
"Jung-Woo Ha",
"Sunyoung Kwon"
] | Graph neural networks have shown promising results on representing and analyzing diverse graph-structured data such as social, citation, and protein interaction networks. Existing approaches commonly suffer from the oversmoothing issue, regardless of whether policies are edge-based or node-based for neighborhood aggregation. Most methods also focus on transductive scenarios for fixed graphs, leading to poor generalization performance for unseen graphs. To address these issues, we propose a new graph neural network model that considers both edge-based neighborhood relationships and node-based entity features, i.e. Graph Entities with Step Mixture via random walk (GESM). GESM employs a mixture of various steps through random walk to alleviate the oversmoothing problem and attention to use node information explicitly. These two mechanisms allow for a weighted neighborhood aggregation which considers the properties of entities and relations. With intensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performances on four benchmark graph datasets comprising transductive and inductive learning tasks. Furthermore, we empirically demonstrate the significance of considering global information. The source code will be publicly available in the near future. | [
"Graph Neural Network",
"Random Walk",
"Attention"
] | Reject | https://openreview.net/pdf?id=S1eWbkSFPS | https://openreview.net/forum?id=S1eWbkSFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"eR3twRotB_",
"HJxkOirdjr",
"BygBly6Ijr",
"SyxfZ23UiB",
"rJgQEo2IiS",
"Skegat3LoH",
"SJxIP0_UjH",
"Hyeja6OUoH",
"S1eQBPadqH",
"SJlsm7i6YH",
"SJg4oGBstr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725787,
1573571431398,
1573469933282,
1573469178164,
1573468971049,
1573468600186,
1573453406041,
1573453250971,
1572554554620,
1571824419513,
1571668636191
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1532/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1532/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1532/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Two reviewers are concerned about this paper while the other one is slightly positive. A reject is recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revised draft uploaded\", \"comment\": \"Dear reviewers and all,\\n\\nWe have updated our draft according to the reviewers\\u2019 comments. We thank the reviewers for their rich insightful comments. We would not have been able to reach this revision without the reviewers\\u2019 advices. We believe our draft has improved significantly and would be very grateful if we are informed about any further concerns.\", \"to_summarize_our_main_changes\": \"1. We have modified the last paragraph of page 3 in accordance with the advice of R1. This is also about JK-Net, which R2 pointed out.\\n\\n2. We ran all the experiments and added the results in Section 5.2. The contents of Section 5.2 now include an explanation of oversmoothing (which R3 pointed out), the differences between JK-Net, SGC and our models (which R2 pointed out), and the inference time that all the reviewers pointed out. Overall, we have been able to verify that our model outperforms existing models in many aspects.\\n\\n3. We added the results of JK-GCN, APPNP, SGC in Table 2 and 3 in accordance with the advice of R3. Our models (GSM and GESM) still outperforms existing models.\\n\\n4. We inserted experiments on additional datasets (which R2 proposed) and visualization of attention distributions (which R1 proposed) to the Appendix. We have been able to confirm that our model does not overfit on a particular dataset and move a step closer to understanding the effect of attention through visualizing the attention distribution.\\n\\nKind regards,\\nAuthors\"}",
"{\"title\": \"Author response to Review #1\", \"comment\": \"We appreciate R1 for the clear but rich insightful suggestion. Thanks to R1's constructive feedback, we can find the few things in our study that need to be discussed.\", \"the_responses_for_each_comment_are_as_follows\": \"1. some sentences could be streamlined and some repetitions could be removed. For instance last paragraph in page 3, this should be clear already and should be restated.\\n\\nA)\\nThanks for the advice. We will revise it later and reflect it in the paper.\\n\\n\\n2. One thing that is not clear is how does the model cope with the increase in feature dimensionality due to the concatenation over different steps. Isn\\u2019t this leading to overfitting? Did the authors experiment with other schemes such as averaging or gating? If so it would be nice to see the results for each of the configurations as it is not clear to me what should be chosen a-priori. \\n\\nA)\\nAs represented in Figure 4b (Figure 5 in revision), the test predictions converge to a certain level as the number of steps increases. Although the parameters increase as the steps increase, there is no decrease in performance due to overfitting. \\n\\nWe tried averaging scheme as R1 mentioned but the results were not as good as concatenation scheme (average pooling: 72.5%, max pooling: 77.3%). \\n\\n\\n3. Experiments are nicely executed and the proposed approach is compared against a rich array of other models. Results are state-of-the-art and also the analysis of the model is interesting, i.e. it doesn\\u2019t diverge when increasing # steps at test time.\\n\\nA)\\nThanks for your kind comments. The number of parameters and overfitting in graph neural networks are very important issues. We will study it in future research.\\n\\n\\n4. How does the attention vector look like? Does it tend to peak at a given k, or is it more uniformly distributed? \\n\\nA)\\nThanks for the insightful suggestion. There was no big difference in attention distribution depending on the steps. However, we found that GESM slightly adjusts the weight values compared to the uniform distribution and contributed to performance improvement. We will attach these results to the Appendix. \\n\\n\\n5. How does the model compare to having k GAT layers, each constrained to use neighboring nodes at step k as input for the attention computation? Did the authors experiment on this? 
\\n\\nA)\\n(Test ACC)\\n+---------------------------------+------------------+-------------------------+---------------------+--------------+\\n| Model/Test ACC | Coauthor CS | Coauthor Physics | Amazon Computers | Amazon Photo |\\n+---------------------------------+------------------+-------------------------+---------------------+--------------+\\n| GCN[1] | 91.1\\u00b10.5 | 92.8\\u00b11.0 | 82.6\\u00b12.4 | 91.2\\u00b11.2 |\\n| GAT[2] | 90.5\\u00b10.6 | 92.5\\u00b10.9 | 78.0\\u00b119.0 | 85.7\\u00b120.3 |\\n| GSM (our base) | 91.8\\u00b10.4 | 93.3\\u00b10.6 | 79.2\\u00b12.1 | 89.3\\u00b11.9 |\\n| GESM (GSM+attention) | 92.0\\u00b10.5 | 93.7\\u00b10.6 | 79.3\\u00b11.7 | 90.0\\u00b12.0 |\\n+---------------------------------+-----------------+--------------------------+---------------------+---------------+\\n\\n (Inference Time(s))\\n+-----------------------+---------+----------+---------+---------+---------+\\n| Model/Step | 2 | 5 | 8 | 15 | 20 |\\n+-----------------------+---------+----------+---------+---------+---------+\\n| GCN[1] | 0.028 | 0.037 | 0.049 | 0.073 | 0.092 |\\n| GAT[2] | 0.136 | 0.201 | 0.315 | 0.577 | 0.781 |\\n| GSM | 0.035 | 0.039 | 0.043 | 0.060 | 0.071 |\\n| GESM | 0.131 | 0.143 | 0.153 | 0.178 | 0.211 |\\n+-----------------------+---------+---------+----------+---------+---------+\\n\\nThanks for the good suggestion. Both of the above experiments were performed with the same hyper parameter under the same conditions with layer: 64 multi head: 8. We can see that our models are faster and stable (A new experiment is conducted on [3] by the suggestion of Reviewer 2). \\n\\n\\n6. Overall I like the work but find the novelty quite limited, more effort could have been put into motivating the soundness of the use of multiple random walks. Perhaps some theory could be developed to make the paper stronger. \\n\\nA)\\nA low-pass filter of GCN [1] matrix has a problem in generalization because it is based on a laplacian eigen basis in spectral domain [4]. Random walk, on the other hand, is not a methodology in spectral domain, so it can be easily applied to multiple graphs.\\n\\n\\nWe will update our manuscript by reflecting the comments and responses as soon as possible.\\n\\n[1] Kipf and Welling: Semi-Supervised Classification with Graph Convolutional Networks\\n[2] Petar Veli\\u010dkovi\\u0107 et al: Graph Attention Networks\\n[3] Shchur et al.: Pitfalls of Graph Neural Network Evaluation\\n[4] Zonghan Wu et al. : A Comprehensive Survey on Graph Neural Networks\"}",
"{\"title\": \"Author response to Review #2 (part3)\", \"comment\": \"4. As your work is quite similar to [1, 2], it would be beneficial to also include the respective results of those methods in Tables 2 and 3. In addition, their differences and similarities should be discussed in detail.\\n\\nA)\\nThe differences between our model and [1, 2, 3] are described in R2\\u2019s question #1, and as R2 mentioned, we added the experimental results of [1, 2, 3] on Table 2 and 3 . The updated results still confirm that our method outperforms the other methods. You will see this in the modified version.\\n\\n\\n5. Since the used benchmark datasets are already reasonably explored, authors are advised to include evaluation on other datasets as well, e.g., from [6].\\n\\nA)\\n(Test ACC)\\n+---------------------------------+------------------+--------------------------+-----------------------------+---------------------+\\n| Model/Test ACC | Coauthor CS | Coauthor Physics | Amazon Computers | Amazon Photo |\\n+---------------------------------+------------------+--------------------------+-----------------------------+----------------------+\\n| GCN[4] | 91.1\\u00b10.5 | 92.8\\u00b11.0 | 82.6\\u00b12.4 | 91.2\\u00b11.2 |\\n| GAT[5] | 90.5\\u00b10.6 | 92.5\\u00b10.9 | 78.0\\u00b119.0 | 85.7\\u00b120.3 |\\n| GSM (our base) | 91.8\\u00b10.4 | 93.3\\u00b10.6 | 79.2\\u00b12.1 | 89.3\\u00b11.9 |\\n| GESM (GSM+attention) | 92.0\\u00b10.5 | 93.7\\u00b10.6 | 79.3\\u00b11.7 | 90.0\\u00b12.0 |\\n+---------------------------------+-----------------+---------------------------+-----------------------------+----------------------+\\n\\nWe added the experiment R2 mentioned. We've been able to see that our model is pretty robust in a variety of datasets and environments.\\n\\n\\n6. The transition matrix is missing self-loops to match with the results of Figure 1. Since you already define analogously to , you should focus on one notation for consistency reasons.\\n\\nA)\\nThank the R2 for pointing out our mistake. We added self-loops to Figure 1.\\n\\n\\nWe will update our manuscript by reflecting the comments and responses as soon as possible.\\n\\n[1] Wu et al.: Simplifying Graph Convolutional Networks\\n[2] Klicpera et al.: Predict then Propagate: Graph Neural Networks meet Personalized PageRank\\n[3] Xu et al.: Representation Learning on Graphs with Jumping Knowledge Networks\\n[4] Kipf and Welling: Semi-Supervised Classification with Graph Convolutional Networks\\n[5] Petar Veli\\u010dkovi\\u0107 et al: Graph Attention Networks\\n[6] Shchur et al.: Pitfalls of Graph Neural Network Evaluation\\n[7] Gleich et al. : Seeded PageRank solution paths\\n[8] Sami Abu-El-Haija et al. : MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing\"}",
"{\"title\": \"Author response to Review #2 (part2)\", \"comment\": \"3. The final prediction layer with weight matrix W_1 operates on all propagation layers, resulting in a parameter complexity of , where is the number of classes. With and , this results in 15.360 c parameters (!!!), whereas GCN [5] only uses 16 c parameters. Hence, I do not think it is fair to promote your model as efficient as vanilla GCN. In addition, the final matrix multiplication results in a computational complexity of which does not nearly match your reported complexity. Furthermore, I do wonder why your model is not heavily overfitting with such an amount of parameters. For example, this is the reason [3] does evaluate its model on a larger training split instead of a smaller one.\\n\\nA)\\n[Table 3-1] Inference Time(s)\\n+-----------------------+---------+----------+---------+---------+---------+\\n| Model/Step | 2 | 5 | 8 | 15 | 20 |\\n+-----------------------+---------+----------+---------+---------+---------+\\n| GCN[4] | 0.028 | 0.037 | 0.049 | 0.073 | 0.092 |\\n| GAT[5] | 0.136 | 0.201 | 0.315 | 0.577 | 0.781 |\\n| GSM | 0.035 | 0.039 | 0.043 | 0.060 | 0.071 |\\n| GESM | 0.131 | 0.143 | 0.153 | 0.178 | 0.211 |\\n+-----------------------+---------+---------+----------+---------+---------+\\n\\n[Table 3-2] Test ACC on new datasets\\n+---------------------------------+------------------+--------------------------+-----------------------------+-----------------------+\\n| Model/Test ACC | Coauthor CS | Coauthor Physics | Amazon Computers | Amazon Photo |\\n+---------------------------------+------------------+--------------------------+-----------------------------+-----------------------+\\n| GCN[4] | 91.1\\u00b10.5 | 92.8\\u00b11.0 | 82.6\\u00b12.4 | 91.2\\u00b11.2 |\\n| GAT[5] | 90.5\\u00b10.6 | 92.5\\u00b10.9 | 78.0\\u00b119.0 | 85.7\\u00b120.3 |\\n| GSM (our base) | 91.8\\u00b10.4 | 93.3\\u00b10.6 | 79.2\\u00b12.1 | 89.3\\u00b11.9 |\\n| GESM (GSM+attention) | 92.0\\u00b10.5 | 93.7\\u00b10.6 | 79.3\\u00b11.7 | 90.0\\u00b12.0 |\\n+---------------------------------+-----------------+---------------------------+-----------------------------+----------------------+\\n\\nWe agree that our method uses more parameter for feature concatenation, thus leading to more computation in the last layer. However, this computation cost does not severely harm the real inference time. \\n\\nTo check a computational complexity, we measured an inference time on Cora dataset. Due to the realistic assumption written in the manuscript (hidden size << non zero entities), the experimental computation complexity increases linearly[8] with respect to steps as shown in Table 3-1 and in the revised manuscript Figure 6. The inference time of GSM is less than GCN in longer steps, and the time of GESM is much faster than GAT while providing higher accuracies.\\n\\nTo check about overfitting, we carried out additional experiments on extensive number of training splits as you mentioned. We used unified number of parameters (unified size: 64, step size: 15) without any special parameter tuning. As shown in Table 3-2, the proposed approaches showed robust performance even in the new datasets.\\n\\nFor a fair comparison, we conducted experiments by reducing the number of parameters used in our method similar to the number of GCN parameters. The experimental results are as follows Table 3-3. 
\\n\\n[Table 3-3] Test ACC on Cora for a fair comparison\\n+---------------------------+----------------------+---------------------+-------------------+\\n| Model/Test ACC | Cora | Citeseer | Pubmed | \\n+---------------------------+----------------------+---------------------+-------------------+\\n| GCN[4] | 81.5% (23040) | 70.3% (56895) | 79.0% (8048) |\\n| GSM (our base) | 82.1% (21532) | 69.5% (57120) | 79.6% (8260) |\\n+---------------------------+----------------------+---------------------+-------------------+\\n(the number of parameters)\\n\\nGSM has reached or outperformed GCN even with a similar number of parameters as GCN. Using excessive parameters is a disadvantage of the model, as R2 pointed out. However, for GCN, using more parameters does not guarantee to improve performance. Although GSM uses more parameters, it adaptively reflects the local and global of graph information, which leads result to SOTA. Considering the results of Table 3-3 and Table 5 in the draft, we can conjecture that our model does not overfit the data but effectively uses the parameters for modeling graph structure to avoid the oversmoothing issue.\"}",
"{\"title\": \"Author response to Review #2 (part1)\", \"comment\": \"We appreciate R2 for the rich advice. Thanks to your constructive feedback, we were able to rethink about the shortcomings, and it has helped us improve the quality of our research.\", \"the_responses_for_each_comment_are_as_follows\": \"1. The proposed GSM model is not new and only re-uses building blocks from the related work. [1] shows that removing non-linearities is an effective procedure for node classification. [2] investigates the massively stacking of propagations. The procedure of feature concatenation from different locality has been studied in [3]. Applying asymmetric normalization is a standard aggregation scheme for GNNs, e.g., in [4]. \\n\\nA)\\nWe agree that our method, an attention-enhanced mixture of random work, is based on a simple graph block and the novelty might be not very large. However, the relevant methods, including ours, showed noticeably different results for the oversmoothing issue, which has not been resolved yet. Unlike other methods, our method is consistently superior to others in prediction accuracy and computation time.\\n\\n[Table 1-1] To check the oversmoothing issue (Train ACC)\\n+-----------------+-------+-------+-------+-------+-------+\\n| model/step | 2 | 5 | 8 | 15 | 20 |\\n+-----------------+-------+-------+-------+-------+-------+\\n| GCN[4] | 1.00 | 1.00 | 0.92 | 0.23 | 0.20 |\\n| GAT[5] | 1.00 | 1.00 | 0.89 | 0.25 | 0.22 |\\n| SGC[1] | 1.00 | 1.00 | 0.87 | 0.77 | 0.72 |\\n| GSM (ours) | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\\n+-----------------+-------+-------+-------+-------+-------+\\n\\nAs shown in Table 1-1, SGC[1], GCN[4], and GAT[5] suffer from oversmoothing issue. GCN and GAT show severe degradation in accuracy since the 8th step; Even if SGC is better than GCN and GAT, its accuracy continues to decrease as the step size increases. Therefore JK-Net[3], which is based on GCN or GAT propagation, does not seem to utilize global information properly. Our proposed method GSM maintains its accuracy even in global steps without degradation. \\n\\n[Table 1-2] To check the global aggregation (Average Test ACC of 10 runs)\\n+-----------------------+--------+--------+--------+--------+--------+\\n| Model/Step | 10 | 15 | 20 | 25 | 30 |\\n+-----------------------+--------+--------+--------+--------+--------+\\n| JK-Net[3] | 0.44 | 0.40 | 0.30 | 0.27 | 0.24 |\\n| GSM (our base) | 0.80 | 0.81 | 0.81 | 0.81 | 0.81 |\\n+---------------------- +--------+--------+--------+--------+--------+\\n\\nFor more detailed comparison ours with JK-Net about global information, we checked test accuracy by storing features after the 10th step. Both methods are based on concatenation to alleviate the oversmoothing issue, but as the underlying model differs, eventually the test accuracy is significantly different as represented in Table 1-2. Since global information of JK-Net is obtained from GCN or GAT, the longer the step, the more difficult it is to maintain performance. GSM, on the other hand, keeps steady performance, which proves that GSM does not collapse even in global steps. \\n\\nIn addition, APPNP [2] is Neumann series approximation algorithms [7], which can be considered as a simple sum of steps, and it is inferior in performance to ours as shown in various experimental results. \\n\\nTherefore, this is not a trivial approach because other models that share some relevant ideas do not show competitive performance compared to ours.\\n\\n\\n2. 
The GESM model is not fully understandable since it is missing a formal description for computing $\\\\alpha$. It is only said that $\\\\alpha$ is computed using the concatenation of features from the central node and its neighbors. Can you elaborate how exactly you compute $\\\\alpha$, especially since the concatenation of neighboring features results in a non-permutation invariant architecture? In addition, in contrast to the reported results in Tables 3 and 4, Figure 4 indicates that the benefits of GESM are negligible. \\n\\nA)\\nSorry for missing a formal description the attention\\n$\\\\alpha = \\\\text{softmax}(W1H1+W2H2)$, where $H1$ denotes a central node, $H2$ denotes a neighbor node. \\n\\nIn order to maintain the permutation invariant, $\\\\alpha$ is computed based on only one direction of nodes (keeping the order of nodes). \\n\\nThe results in Figure 4b (Figure 5 in revision) might be slightly different from the results in the table because we do not use early stops and averaging of multiple runs used on the table. But GESM converges faster than GSM and has overwhelming results in inductive learning.\"}",
"{\"title\": \"Author response to Review #3 (part2)\", \"comment\": \"4. Some experimental results are comparable to existing methods, as shown in Table 2 and 3. Maybe the time complexity is the major contribution of this paper. A head-to-head running time comparison with SOTA in Table 4 will be helpful.\\n\\nA)\\nThank you for your valuable comments. We measured a head-to-head running time in terms of inference with GCN[5], GAT[7], and our GSM, GESM on Cora dataset as displayed in reply 3 and revised manuscript Figure 6. The running time of our base GSM model is comparable to very fast GCN, and our attention-enhanced GESM is much faster than GAT.\\n\\n\\n5. Fewer methods are compared in Table 3 than in Table 2. Can authors add more in Table 3 to give a better demonstration? \\n\\nA)\\nThank you for raising this issue. Only a few papers conducted experiments with low label rates, so a small number of methods were compared through public reports. For a better demonstration according to your comments, we added experimental results of SGC[1], APPNP[2], and JK-Net[3] by our own implementation. The updated Table 3 still confirms the competitiveness of our method. \\n\\n\\nWe will update our manuscript by reflecting the comments and responses as soon as possible.\\n\\n[1] Wu et al.: Simplifying Graph Convolutional Networks\\n[2] Klicpera et al.: Predict then Propagate: Graph Neural Networks meet Personalized PageRank\\n[3] Xu et al.: Representation Learning on Graphs with Jumping Knowledge Networks\\n[4] Sitao Luan et al: Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks \\n[5] Kipf and Welling: Semi-Supervised Classification with Graph Convolutional Networks\\n[6] Shchur et al.: Pitfalls of Graph Neural Network Evaluation\\n[7] Petar Veli\\u010dkovi\\u0107 et al: Graph Attention Networks\\n[8] Hoang NT and Takanori Maehara: Revisiting Graph Neural Networks: All We Have is Low-Pass Filters\"}",
"{\"title\": \"Author response to Review #3 (part1)\", \"comment\": \"We appreciate R3 for the constructive feedback. Thanks to your comments, we found that there was a shortage of empirical proofs on oversmoothing. We agree with your opinion and thus conducted more experiments regarding oversmoothing.\", \"the_responses_for_each_comment_are_as_follows\": \"1. The oversmoothing problem has been mentioned many times in this paper, yet little has been demonstrated through experiments that the new model can solve the oversmoothing issue. It would be great to show the performance improvement while oversmoothing is mitigated.\\n\\nA) \\nAbout the oversmoothing issue, we have already presented experimentally that the accuracy does not degrade as the step increases in Figure 4b (Figure 5 in revision). However, as R3 pointed out, to give a more explicit demonstration, we compared ours with other competitive methods on how training accuracy changes depending on the step size. The results are as follows :\\n\\n(Train ACC)\\n+-----------------+-------+-------+-------+-------+-------+\\n| model/step | 2 | 5 | 8 | 15 | 20 |\\n+-----------------+-------+-------+-------+-------+-------+\\n| GCN[5] | 1.00 | 1.00 | 0.92 | 0.23 | 0.20 |\\n| GAT[7] | 1.00 | 1.00 | 0.89 | 0.25 | 0.22 |\\n| SGC[1] | 1.00 | 1.00 | 0.87 | 0.77 | 0.72 |\\n| GSM (ours) | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\\n+-----------------+-------+-------+-------+-------+-------+\\n\\nAs shown in the above results, the other methods such as GCN[5], SGC[1], and GAT[7] suffer from oversmoothing; GCN and GAT show severe degradation in accuracy since the 8th step; SGC is better than GCN and GAT, but accuracy gradually decreases as the step size increases. These results means other models cannot train from the data due to the oversmoothing. Unlike others, the proposed GSM maintains performance without any degradation, because no rank loss[4] occurs and oversmoothing is overcome by step mixture.\\n\\n\\n 2. The proposed idea is very similar to the following paper: \\u201cRevisiting Graph Neural Networks: All We Have is Low-Pass Filters\\u201d. Both use low-pass filtering (via transition matrix) to propagate the information on the graph. I suggest a detailed discussion with this work. \\n\\nA)\\nThank you for your insightful feedback. As R3 stated, there are similarities between our model and gfNN[8] in that propagation and embedding are separated. However, different from gfNN, we gather all the progressed blocks and consider all of them for the final prediction. This is what we call step-mixture, which allows our model to adaptively select global and local information from all graphs. \\n\\n\\n3. The major concern of this work is the weak novelty. It combines GAT with multiple random walk under the GNN framework. While this is working well on most GNN datasets, it is not very new by itself. 
\\n\\nA)\\n(Inference Time(s))\\n+-----------------------+---------+----------+---------+---------+---------+\\n| Model/Step | 2 | 5 | 8 | 15 | 20 |\\n+-----------------------+---------+----------+---------+---------+---------+\\n| GCN[5] | 0.028 | 0.037 | 0.049 | 0.073 | 0.092 |\\n| GAT[7] | 0.136 | 0.201 | 0.315 | 0.577 | 0.781 |\\n| GSM | 0.035 | 0.039 | 0.043 | 0.060 | 0.071 |\\n| GESM | 0.131 | 0.143 | 0.153 | 0.178 | 0.211 |\\n+-----------------------+---------+---------+----------+---------+---------+\\n\\n(Test ACC)\\n+---------------------------------+------------------+--------------------------+---------------------+-----------------------+\\n| Model/Test ACC | Coauthor CS | Coauthor Physics | Amazon Computers | Amazon Photo |\\n+---------------------------------+------------------+--------------------------+---------------------+-----------------------+\\n| GCN[5] | 91.1\\u00b10.5 | 92.8\\u00b11.0 | 82.6\\u00b12.4 | 91.2\\u00b11.2 |\\n| GAT[7] | 90.5\\u00b10.6 | 92.5\\u00b10.9 | 78.0\\u00b119.0 | 85.7\\u00b120.3 |\\n| GSM (our base) | 91.8\\u00b10.4 | 93.3\\u00b10.6 | 79.2\\u00b12.1 | 89.3\\u00b11.9 |\\n| GESM (GSM+attention) | 92.0\\u00b10.5 | 93.7\\u00b10.6 | 79.3\\u00b11.7 | 90.0\\u00b12.0 |\\n+---------------------------------+-----------------+---------------------------+----------------------+----------------------+\\n\\nWe agree that the novelty of our method might not be very large. However, it is never trivial to integrate two methods to address the oversmoothing issue with smaller computation time. Our method provides much faster and more accurate performance compared to GAT[7], which can be done by the sophisticated design of our method using mixture of random walk steps. Our contribution is to robustly handle the oversmoothing issue while spending much smaller computation costs than GATs as shown in Table above (A new experiment is conducted on [6] by the suggestion of Reviewer 2).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a new GNN model to address the common issue \\u201coversmoothing\\u201d, namely, Graph Entities with Step Mixture via random walk (GESM). Basically, it integrates both mixture of various steps through random walk, and graph attention network, and demonstrates that it can overcome the SOTA on popular benchmarks.\", \"detailed_comments\": [\"The oversmoothing problem has been mentioned many times in this paper, yet little has been demonstrated through experiments that the new model can solve the oversmoothing issue. It would be great to show the performance improvement while oversmoothing is mitigated.\", \"The proposed idea is very similar to the following paper: \\u201cRevisiting Graph Neural Networks: All We Have is Low-Pass Filters\\u201d. Both use low-pass filtering (via transition matrix) to propagate the information on the graph. I suggest a detailed discussion with this work.\", \"The major concern of this work is the weak novelty. It combines GAT with multiple random walk under the GNN framework. While this is working well on most GNN datasets, it is not very new by itself.\", \"Some experimental results are comparable to existing methods, as shown in Table 2 and 3. Maybe the time complexity is the major contribution of this paper. A head-to-head running time comparison with SOTA in Table 4 will be helpful.\", \"Fewer methods are compared in Table 3 than in Table 2. Can authors add more in Table 3 to give a better demonstration?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents two models, namely GSM and GESM, to tackle the problem of transductive and inductive node classification. GSM is operating on asymmetric transition matrices and works by stacking propagation layers of different locality, where the final prediction is based on all propagation steps (JK concatenation style). GESM builds upon GSM and introduces a multi-headed attention layer applied on the initial feature matrix to guide the propagation layers. The models are evaluated on four common benchmark datasets and achieve state-of-the-art performance, especially when the training label rate is reduced.\\n\\nOverall, the paper is well-written and its presentation is mostly clear and comprehensible (see below). The quantitative evaluation looks good to me, especially since an ablation study shows the contributions of all of the proposed features.\\n\\nHowever, there are a few weak points which should explain my overall score:\\n\\n1. The proposed GSM model is not new and only re-uses building blocks from the related work. [1] shows that removing non-linearities is an effective procedure for node classification. [2] investigates the massively stacking of propagations. The procedure of feature concatenation from different locality has been studied in [3]. Applying asymmetric normalization is a standard aggregation scheme for GNNs, e.g., in [4].\\n\\n2. The GESM model is not fully understandable since it is missing a formal description for computing $\\\\alpha$. It is only said that $\\\\alpha$ is computed using the concatenation of features from the central node and its neighbors. Can you elaborate how exactly you compute $\\\\alpha$, especially since the concatenation of neighboring features results in a non-permutation invariant architecture? In addition, in contrast to the reported results in Tables 3 and 4, Figure 4 indicates that the benefits of GESM are negligible.\\n\\n3. The final prediction layer with weight matrix W_1 operates on all propagation layers, resulting in a parameter complexity of $O(s h c)$, where $c$ is the number of classes. With $s=30$ and $h=512$, this results in 15.360 c parameters (!!!), whereas GCN [5] only uses 16 c parameters. Hence, I do not think it is fair to promote your model as efficient as vanilla GCN. In addition, the final matrix multiplication results in a computational complexity of $O(n s^2 h^2 c)$ which does not nearly match your reported complexity.\\nFurthermore, I do wonder why your model is not heavily overfitting with such an amount of parameters. For example, this is the reason [3] does evaluate its model on a larger training split instead of a smaller one.\\n\\n4. As your work is quite similar to [1, 2], it would be beneficial to also include the respective results of those methods in Tables 2 and 3. In addition, their differences and similarities should be discussed in detail.\\n\\n5. Since the used benchmark datasets are already reasonably explored, authors are advised to include evaluation on other datasets as well, e.g., from [6].\\n\\n6. The transition matrix $P$ is missing self-loops to match with the results of Figure 1. 
Since you already define $P$ analogously to $\\\\hat{\\\\tilde{A}}$, you should focus on one notation for consistency reasons.\\n\\n[1] Wu et al.: Simplifying Graph Convolutional Networks\\n[2] Klicpera et al.: Predict then Propagate: Graph Neural Networks meet Personalized PageRank\\n[3] Xu et al.: Representation Learning on Graphs with Jumping Knowledge Networks\\n[4] Hamilton et al.: Inductive Representation Learning on Large Graphs\\n[5] Kipf and Welling: Semi-Supervised Classification with Graph Convolutional Networks\\n[6] Shchur et al.: Pitfalls of Graph Neural Network Evaluation\\n\\n----------------------------\", \"update_after_the_rebuttal\": \"The authors have addressed several issues and improved their manuscript. I greatly appreciate the effort and the new experimental results. However, the main weak point that the novelty of the approach is limited remains valid of course. Therefore, I am still more inclined to rejecting the paper. I have raised my score from \\\"1: Reject\\\" to \\\"3: Weak Reject\\\".\\n\\nI have raised my score from \\\"5: Weak Reject\\\" to \\\"6: Weak Accept\\\".\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents a graph neural network model that aims at improving the feature aggregation scheme to better handle distant nodes, therefore mitigating the \\u201csmoothing\\u201d problem of classic averaging.\\n\\nI find the paper clearly motivated and easy to follow, although some sentences could be streamlined and some repetitions could be removed. For instance last paragraph in page 3, this should be clear already and should be restated.\\n\\nOne thing that is not clear is how does the model cope with the increase in feature dimensionality due to the concatenation over different steps. Isn\\u2019t this leading to overfitting? Did the authors experiment with other schemes such as averaging or gating? If so it would be nice to see the results for each of the configurations as it is not clear to me what should be chosen a-priori.\\n\\nExperiments are nicely executed and the proposed approach is compared against a rich array of other models. Results are state-of-the-art and also the analysis of the model is interesting, i.e. it doesn\\u2019t diverge when increasing # steps at test time.\\nHow does the attention vector look like? Does it tend to peak at a given k, or is it more uniformly distributed? \\n\\nHow does the model compare to having k GAT layers, each constrained to use neighboring nodes at step k as input for the attention computation? Did the authors experiment on this?\\n\\nOverall I like the work but find the novelty quite limited, more effort could have been put into motivating the soundness of the use of multiple random walks. Perhaps some theory could be developed to make the paper stronger.\"}"
]
} |
rkglZyHtvH | Refining the variational posterior through iterative optimization | [
"Marton Havasi",
"Jasper Snoek",
"Dustin Tran",
"Jonathan Gordon",
"José Miguel Hernández-Lobato"
] | Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks. A key challenge of variational inference is to approximate the posterior over model parameters with a distribution that is simpler and tractable yet sufficiently expressive. In this work, we propose a method for training highly flexible variational distributions by starting with a coarse approximation and iteratively refining it. Each refinement step makes cheap, local adjustments and only requires optimization of simple variational families. We demonstrate theoretically that our method always improves a bound on the approximation (the Evidence Lower BOund) and observe this empirically across a variety of benchmark tasks. In experiments, our method consistently outperforms recent variational inference methods for deep learning in terms of log-likelihood and the ELBO. We see that the gains are further amplified on larger scale models, significantly outperforming standard VI and deep ensembles on residual networks on CIFAR10. | [
"uncertainty estimation",
"variational inference",
"auxiliary variables",
"Bayesian neural networks"
] | Reject | https://openreview.net/pdf?id=rkglZyHtvH | https://openreview.net/forum?id=rkglZyHtvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"HU6BjGlvDl",
"B1e_yO12jr",
"rkgkfz39sr",
"r1gWlfvuor",
"HJxI_WBOjS",
"HJgWtgS_oB",
"rJeLMyHdjB",
"SyxEr0E_ir",
"BkgJiljxjB",
"BylUth6tqS",
"HygKbYcl5S",
"r1gLsnqAKS",
"ryeMUJ66tS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798725754,
1573808095626,
1573728775097,
1573577193477,
1573568878077,
1573568633408,
1573568269887,
1573568060290,
1573068950992,
1572621437836,
1572018433196,
1571888285976,
1571831626369
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1531/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1531/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1531/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1531/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1531/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1531/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"In this paper a method for refining the variational approximation is proposed.\\n\\nThe reviewers liked the contribution but a number reservations such as missing reference made the paper drop below the acceptance threshold. The authors are encouraged to modify paper and send to next conference.\\n\\nReject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comments on the rebuttal\", \"comment\": \"Thanks for your response.\\n\\n1. On the guarantee of improvement. Thanks for adding the formal proof, which is helpful and addresses my doubts. The discussion on the improvement over ELBO_init is still a bit hard to follow, due to the different notations involved and the order in which things are presented. I would therefore suggest to refactor this part as follows: start by stating that ELBO_aux = ELBO_init if q_phi1(w) = q_phi(w|a_1), then ELBO_aux >= ELBO_init would follow thanks to optimizing ELBO_aux over phi_1, and finally discuss the necessary conditions to initialize q_phi1(w) with q_phi(w|a_1).\\n\\n2. On the generality. The paper only treats the case where the conditional posterior is accessible analytically, and it would benefit from discussions/analysis of situations where q(w|a1) is implicit, i.e., we cannot evaluate its density. Ideally, experiments under the latter setting should also be included, especially since the inequality ELBO_aux >= ELBO_init may no longer hold. Otherwise, it would be hard to convince one to consider the proposed method beyond settings where the conditional posterior is accessible. \\n\\n3. On missing baselines. HVM, for instance, can be reasonably considered in the experiments of sections 4.2 and 4.3. As you mentioned it has been applied to Deep Exponential Families (DEFs), which generalizes Bayesian feedforward neural nets by using stochastic layers. In other words, under comparable architectures, inference is even more challenging with DEFs than Bayesian neural nets. Moreover, I could not find empirical comparisons with HVM or Auxiliary Deep Generative models in (Louizos and Welling, \\u200e2017).\"}",
"{\"title\": \"Reponse 2\", \"comment\": \"Thanks for the answers\\n\\n\\n\\\"MCMC methods can show strong performance on small scale regression tasks, and given enough time, they might converge to a better posterior approximation (May we ask for a citation for exact numbers on the UCI benchmarks?)\\\"\\n\\nI trawled google scholar for a refence but actually couldn't find anything useful - I'll rest my case on this one then :)\\n\\nThe refinement steps have to be repeated for each ensemble member meaning that with K auxiliary variables and M ensemble members, one has to refine MK times after training the initial mean-field approximation. This amounts to a ~25% computational overhead compared to standard VI.\\n\\nCan you clarify how you arrive and 25% overhead here e.g. what type of standard VI are you comparing against?\\n\\n\\n\\\"(Heek and Kalchbrenner, 2019) shows very strong results, however, as the review also mentions, a direct comparison is problematic, since they use a significantly larger model (ResNet56), proposed changes to the architecture, and use significantly more compute (1000 epochs). They are also a concurrent ICLR submission. Nevertheless, we are impressed by the results and we are eager to see this paper being published soon.\\\"\\n\\nYes some can definitely be attributed to model differences. I know it is a lot of work but I think it would make your paper a lot stronger if you could show encouraging results on a reasonably competitive model e.g. ResNet56 on Cifar10? Otherwise I think rephrasing your text a bit to be less focused on SOTA results and more on the conceptual contribution is also a reasonable option as you suggests your self.\"}",
"{\"title\": \"Revision\", \"comment\": \"We appreciate all the reviews and feedback. We made the following changes to address the concerns expressed in the reviews:\\n\\nReview #2: Brief discussion of (Guo et al., 2016) \\u2018Boosting variational inference\\u2019 in the related works section.\\n\\nReview #3: Included a formal proof of the claim that ELBO_aux >= ELBO_init in the appendix.\\n\\nReview #4: Rephrased the misleading claim \\u2018It sets a new state-of-the-art in uncertainty estimation at ResNet scale on CIFAR10\\u2019 as suggested.\", \"minor\": \"Fixed the notation of the iteration indices in Section 2.2.\"}",
"{\"title\": \"Reply to Review #4\", \"comment\": \"Thank you for the review and feedback. We appreciate that you found the paper interesting.\\n\\n1. The refinement steps can be computationally demanding, but any amount of refinement is guaranteed to improve the approximate posterior. In our experiments, the computational cost of the refinement steps is roughly 25% of the cost of training the initial mean-field approximation, which represents a non-trivial, but feasible computational overhead. For the LeNet-5 experiments, the refinement steps amount to 50 epochs, while in the ResNet experiments, we have more relaxed computational constraints so the refinement steps add up to 200 epochs. See Section 4.5 for cost comparisons, where the SOTA in expressive posteriors (multiplicative normalizing flows) is more compute-expensive.\\n\\n2. MCMC methods can show strong performance on small scale regression tasks, and given enough time, they might converge to a better posterior approximation (May we ask for a citation for exact numbers on the UCI benchmarks?). Gaussian processes and deep Gaussian processes (Salimbeni and Deisenroth, 2017) are also known to be competitive on these benchmarks. In our paper, the regression experiments are meant to serve as a comparison to other variational approaches and we do not claim SOTA performance.\\n\\n3. We believe that the core contribution of our paper is an interesting and original approach to VI, rather than the specific SOTA results.\\n\\nThe baselines that we compare against at ResNet scale are Deep Ensembles (Lakshminarayanan et al., 2017), Variational Inference (Ovadia et al., 2019) and Variational Gauss-Newton (Osawa et al., \\u200e2019) which can be considered state-of-the art. Refined VI outperforms these on ResNet20. We agree that our wording, \\u201cIt sets a new state-of-the-art in uncertainty estimation at ResNet scale on CIFAR10\\u201d suggests a more general result and we are going to adjust the phrasing to reflect the model size and the setting. We are changing the phrasing of our introduction and conclusion sections to put less emphasis on the specific results and focus on the benefits of the core idea instead.\\n\\n(Heek and Kalchbrenner, 2019) shows very strong results, however, as the review also mentions, a direct comparison is problematic, since they use a significantly larger model (ResNet56), proposed changes to the architecture, and use significantly more compute (1000 epochs). They are also a concurrent ICLR submission. Nevertheless, we are impressed by the results and we are eager to see this paper being published soon.\\n\\n4. The refinement steps have to be repeated for each ensemble member meaning that with K auxiliary variables and M ensemble members, one has to refine MK times after training the initial mean-field approximation. This amounts to a ~25% computational overhead compared to standard VI.\\n\\n5. The method is not sensitive to the variances of a1,..,a5. We set their variances so that they form a decreasing geometric sequence with factor 0.7, but any factor between 0.3 and 0.9 performed similarly.\\n\\nIf this reply addressed your main comments, please consider revising your score, otherwise let us know the remaining concerns you might have.\\n\\n(Salimbeni and Deisenroth, 2017) Salimbeni, Hugh, and Marc Deisenroth. \\\"Doubly stochastic variational inference for deep Gaussian processes.\\\" Advances in Neural Information Processing Systems. 
2017.\\n\\n(Lakshminarayanan et al., 2017) Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. \\\"Simple and scalable predictive uncertainty estimation using deep ensembles.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n(Ovadia et al., 2019) Ovadia, Yaniv, et al. \\\"Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift.\\\" Advances in Neural Information Processing Systems. 2019.\\n\\n(Osawa et al., \\u200e2019) Osawa, Kazuki, et al. \\\"Practical Deep Learning with Bayesian Principles.\\\" Advances in Neural Information Processing Systems. 2019.\\n\\n(Heek and Kalchbrenner, 2019) Heek, Jonathan, and Nal Kalchbrenner. \\\"Bayesian Inference for Large Scale Image Classification.\\\" arXiv preprint arXiv:1908.03491 (2019).\"}",
"{\"title\": \"Reply to Review #3\", \"comment\": \"Thank you for the review and feedback. We appreciate that you found the paper interesting.\\n\\n1. The argument is that there exists a phi_1, for which ELBO_aux = ELBO_init. Therefore, by further optimizing phi_1, we can ensure that ELBO_aux >= ELBO_init. If the optimizer were to produce a phi_1 for which ELBO_aux < ELBO_init (since it is a stochastic optimizer), we can simply discard the result of the optimizer and proceed with the value of phi_1 for which ELBO_aux = ELBO_init. This guarantees that ELBO_aux >= ELBO_init.\\n\\nTo further clarify the claim, we are going to include a formal proof in the appendix stating that max(ELBO_aux(phi_1), ELBO_aux(optimized(phi_1))) >= ELBO_aux(phi_1) = ELBO_init.\\n\\nThe phi_1 where ELBO_aux = ELBO_init can be analytically computed (formula given in the Appendix) by ensuring that q_phi(w|a1)=q_{phi_1}(w). In the experiments, we use this formula to initialise phi_1 for the SGD training.\\n\\nRegarding Figure 2, the drops are due to optimiser artefacts. When we initialise phi_1, the momentum term of Adam is reset and it takes a few iterations to find a good local optima. If we used a smaller learning rate these drops would not occur, but convergence would be slower. \\n\\n2. One of the benefits of the approach is that it is very general. It can be applied to any probabilistic model with any joint distribution p(w, a1, \\u2026, aK) (obeying the constraints specified in the paper), meaning that the auxiliary variables do not have to be additive or independent, they can have arbitrary distributions. We showcased the method in Bayesian neural networks, because they have a challenging posterior distribution. Application to variational autoencoders, latent variable models etc. with different variational distributions is certainly interesting and we are considering it for future work.\\n\\nThe existence of the analytical conditionals is only used once in the paper when we show the guarantee of improvement (ELBO_aux >= ELBO_init). In the case when the conditional posterior is not analytically computable, this guarantee does not hold, but the algorithm can still be applied and it might provide an improvement.\\n\\n3. [1,2] are indeed relevant and interesting papers and we discuss them in the related works section. However, an open challenge is how to apply auxiliary variables for Bayesian neural networks, which have a significantly large parameter space . (HVM is applied to Deep exponential families and Auxiliary deep generative models is used for generative models where posterior dimensions are <1000.) Multiplicative Normalizing Flows (Louizos and Welling, \\u200e2017) builds on those papers for Bayesian neural networks and is a method we compare to. Other SOTA is represented by Deep Ensembles (Lakshminarayanan et al., 2017) and VOGN (Osawa et al., \\u200e2019).\\n\\n\\n(Lakshminarayanan et al., 2017) Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. \\\"Simple and scalable predictive uncertainty estimation using deep ensembles.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n(Osawa et al., \\u200e2019) Osawa, Kazuki, et al. \\\"Practical Deep Learning with Bayesian Principles.\\\" Advances in Neural Information Processing Systems. 2019.\\n\\n(Louizos and Welling, \\u200e2017) Louizos, Christos, and Max Welling. \\\"Multiplicative normalizing flows for variational Bayesian neural networks.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. 
JMLR. org, 2017.\"}",
"{\"title\": \"Reply to Review #2\", \"comment\": \"Thank you for the review and feedback.\\n\\n1. We share your intuition that at the limit of infinitely many auxiliary variables, the variational posterior can be very flexible and might approximate the exact posterior exactly in some cases but we were unable to prove this. The strongest statement we can state is that each refinement step is guaranteed to make the refined posterior better. Figure 2 depicts how the ELBO improves with the introduction of each new auxiliary variable (K=1..5). \\n\\n2. In our experiments, we train independent Gaussians for all weights (for both the initial and refined posteriors) parameterized by their means and variances. The dimensionality of w, a1, a2, \\u2026, ak is the same as the number of weights. The correlations and multi-modality are introduced through the refinement steps as demonstrated in the toy example. The method requires no full matrix inversion (only inversion of diagonal matrices).\\n\\n3. Thank you for bringing our attention to this related paper. We are updating the paper to include a brief discussion on it.\\n\\nIn this work, we propose an algorithm for refining the variational posterior of a Bayesian neural network. The refinement enables the variational posterior to capture complex, multi-modal distributions at the cost of a small computational overhead. We present theoretical guarantees as well as empirical evidence that the method provides a significant improvement over standard VI.\\n\\nIf this reply addressed your main comments, please consider revising your score, otherwise let us know the remaining concerns you might have.\"}",
"{\"title\": \"Reply to Review #1\", \"comment\": \"Thank you for the review and feedback. We appreciate that you found the paper interesting.\\n\\nRegarding the complexity, it is indeed the case that phi_k needs to be optimised for each sample w. There are O(MK) refinement steps, where each refinement step amounts to 200 steps of stochastic gradient descent which represents a ~25% computational overhead. For empirical numbers, see Section 4.5. \\n\\nThe initial mean-field q(w) is trained before any refinement step takes place and it is not changed after the refinement steps.\\n\\nThe dimensionality of a is the same as the dimensionality of w. Any number of a1, ..., ak can be used for the refinement, it is invariant of the dimensionality of w. In our experiments we used a1, \\u2026, a5 because each new auxiliary variable comes with further computational overhead.\\n\\nIt is true that the ELBO_init is a lower bound to the marginal likelihood but, this lower bound is only tight when the initial q(w) is the true posterior. In this case, the refinement steps would not provide any improvement, resulting in log p(y|x)=ELBO_ref=ELBO_aux=ELBO_init. \\n\\nlog p(y|x) >= ELBO_ref holds, because the refined VI is still a variational inference approach, so the ELBO of the refined distribution is still a lower bound to the marginal likelihood.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presented an iteratively refined variational inference for Gaussian latent variables. The intuition is straightforward and makes sense to me. However, I have some concerns.\", \"detailed_comments\": \"1. In theoretical justification, only K=2 is discussed. My intuition is that as K increases, the approximation of the true posterior should be closer. The summation of multiple Gaussian distributions can arbitrarily approximate any distribution given enough base distributions. I would like to see some theoretical discussion about K. At least in the experiment, the author should provide the performance of different Ks.\\n2. The toy example in the paper is simply 1D Gaussian. I want to see more discussion for high dimensional latent variables. So in the experiments, how you parameterized the distribution for each weight? Totally independent? or allowing structural correlations? I am not sure the details of the implementation in this paper, but I also have a naive question for high dimensional Gaussian. Does it require to compute the matrix inverse when sampling a_k?\\n3. Another related paper \\\"Guo, Fangjian, et al. \\\"Boosting variational inference.\\\" arXiv preprint arXiv:1611.05559 (2016).\\\" should be discussed as well.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes to improve standard variational inference by increasing the flexibility of the variational posterior by introducing a finite set of auxiliary variables. Motivated by the limited expressivity of mean field variational inference the author suggests to iteratively refine a \\u2018proposal\\u2019 posterior by conditioning on a sequence of auxiliary variables with decreasing variance. The key requirement to set the variance of the auxiliary variables such that the integrating over them leaves the original model unchanged. As noted by the authors this is a variant of auxiliary variables introduced by Barber & Agakov. The motivation and theoretical sections seems sound and the experimental results are encouraging, however maybe not completely supporting the claim of new \\u2018state of the art\\u2019 on uncertainty prediction.\\n\\nOverall i find the motivation and theoretical contribution interesting. However I do not find the experimental section completely comprehensive why I currently think the paper is borderline acceptance. \\n\\nComments\\n1) The computational demand using the method seems quite large by adding O(NumSamples * NumAuxiliary) additional computational cost on top of the standard VI. Here each factor M is quite large e.g. 200 epochs for CIFAR10 (if i understand the algorithm correctly?)\\n2) For the UCI experiments the comparison is only made against DeepEnsembels or other VI methods, however to the best of my knowledge MCMC methods are superior in this setting given the small dataset size? \\n3) The results on CIFAR10 do seem to demonstrate that the proposed method is superior to DeepEnsembles and standard VI in one particular setting where VI is only performed over a small subset of layers in a ResNet (why doesn\\u2019t it work for when doing VI on all the parameters?). However generally looking at the best obtained results of ~86% acc this is quite far from current best probabilistic models (see e.g. Heek2019 that gets 94% acc). Some of this can probably be attributed to differences in data-augmentation and model architecture however in general it makes it very hard to compare with other methods when the baselines are not competitive.\", \"minor_comments\": \"In relation to comment 3) above I think you should reword the sentence \\u201cIt sets a new state-of-the-art in uncertainty estimation at ResNet scale on CIFAR10\\u201d in the conclusion.\\n\\n\\n\\u201cIn order to get independent samples from the variational posterior,we have to repeat the iterative refinement for each ensemble member\\u201d: Does this imply that if we want M samples we first have to optimize using the standard VI and then to M optimizations to get q_k(w)?\\n\\n\\nHow sensitive is the method to sequence of variances for a?\\n\\n[Heek2019]: Bayesian Inference for Large Scale Image Classification\"}",
"{\"title\": \"Typo in section 2.2\", \"comment\": \"In Section 2.2, we repeatedly referred to the iteration number with letter 'i' instead of 'k'. We apologise for the confusion that this typo may have caused. We are going to correct it as soon as possible.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a new way to improving the variational posterior. The q(w) is an integral over multiple auxiliary variables. These auxiliary variables can be specified at different scales that can be refined to match different scales of details of the true posterior. They show better performance regression and classification benchmark datasets. They also show that the training time is at a reasonable scale when being parallelized.\\n\\nI think the idea is quite interesting. They did a good illustration of the difference between their model and related works. They also compare with the state-of-art variational approximation methods. \\n\\nOne concern I have is the complexity. I don't think it's just O(MK) since it has to optimize for each phi_k for each posterior sample w. This could be quite large depending on problems. \\n\\nAlso is the refining only run for once or run after each update of the mean field q(w)? If it's the latter, the overhead would be much larger.\\n\\nWhen w is very high-dimensional, the number of auxiliary variables should be exponentially larger. Is that true? Or it's actually invariant to the dimensionality of the posterior distribution?\\n\\nThe paper proves that the refined ELBO is larger than the auxiliary ELBO which is larger than the initial mean field. But the initial ELBO should be a tight lower bound of the true log likelihood. Would that be a problem that ELBO_ref actually spill over the true log likelihood which is a dangerous sign?\\n\\nOverall I think the paper is well written. The experiments are carefully designed. The idea is interesting and useful.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary.\\n\\nThis paper describes a method for training flexible variational posterior distributions, which consists in making iterative locale refinements to an initial mean-field approximation, using auxiliary variables. The focus is on Gaussian latent variables, and the method is applied to Bayesian neural nets to perform variational inference (VI) over the weights. Empirical results show improvements upon the performance of the mean-field approach and some other baselines, on classification and regression tasks.\\n\\nMain comments.\\n \\nOverall, this paper is well written and easy to follow. It tackles an important topic in VI and proposes an interesting idea to improve the flexibility of the approximate distribution. I have the following comments/questions.\\n\\n- On the guarantee of improvement. I still have some doubts regarding the inequality \\u201cELBO_aux >= ELBO_init\\u201d. Can you please elaborate more on this and provide a detailed formal proof? Figure 2 shows that ELBO_aux can go below ELBO_init.\\n- The focus of the paper is on Gaussian variables and a configuration where some key distributions, q(a_1) and q(w|a_1), are accessible in closed from. The generalization of the proposed method beyond these settings should be discussed and explored in experiments. \\n- Important baselines are missing in the experiments. I would recommend including at least the other VI techniques relying on auxiliary variables to build flexible variational families [1,2]. This would help to better assess the impact/importance of the proposed method.\\n\\n[1] Ranganath, Rajesh, Dustin Tran, and David Blei. \\\"Hierarchical variational models.\\\" ICML. (2016). \\n[2] Maal\\u00f8e, Lars, et al. \\\"Auxiliary deep generative models.\\\" ICML (2016).\"}"
]
} |
B1xeZJHKPB | Aggregating explanation methods for neural networks stabilizes explanations | [
"Laura Rieger",
"Lars Kai Hansen"
] | Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation.
Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. The aggregation is more robust and aligns better with the neural network than any single explanation method.
Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural networks and human decision processes. | [
"explainability",
"deep learning",
"interpretability",
"XAI"
] | Reject | https://openreview.net/pdf?id=B1xeZJHKPB | https://openreview.net/forum?id=B1xeZJHKPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PvcZsr_j-",
"Hkev67lhsS",
"HklB3JnsjH",
"HyxN6xcVjH",
"H1lAol5Eor",
"Hke7ve94jS",
"Bkxoi19EsB",
"HJeX4JqEoS",
"Bye0ht1B9B",
"S1eXG9ujFr",
"Bkejrz1otS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725727,
1573811134917,
1573793708637,
1573327035868,
1573327014304,
1573326938692,
1573326755026,
1573326635021,
1572301237976,
1571682827289,
1571643970551
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1530/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1530/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1530/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1530/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1530/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper describes a new method for explaining the predictions of a CNN on a particular image. The method is based on aggregating the explanations of several methods. They also describe a new method of evaluating explanation methods which avoids manual evaluation of the explanations.\\n\\nHowever, the most critical reviewer questions the contribution of the proposed method, which is simple. Simple isn't always a bad thing, but I think here the reviewer has a point. The new method for evaluating explanation methods is interesting, but the sample images given are also very simple -- how does the method work when the image is cluttered? How about when the prediction is uncertain or wrong?\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Final score\", \"comment\": \"After reading the response of the authors (thank you for them) I think I can increase my score.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your comprehensive response. Due to the provided clarifications, and having the mentioned novelty concerns in mind, I have changed my score accordingly.\", \"just_a_small_point\": \"\\\"Additionally, humans are notoriously easy to fool and are biased towards explanations that look crisp. See f.e. examples in [1]. At present we have not seen experimental designs that can leverage the complexity of working with human subjects, this is the topic of ongoing work.\\\"\\n\\nThis does not sound like a valid reason for not conducting human-subject experiments. The goal of explainability research is to provide human-understandable interpretations.\"}",
"{\"title\": \"Author's response\", \"comment\": \"We would like to thank the reviewer for their thoughtful comments. We address their comments below.\\n- \\u201cIn the latter case, it happens that the aggregated explanation explains a bit worse than the best non-aggregated explanation (from the graph it seems a very small difference though). This is odd because I would have assumed to see an improvement (at least a small one) using information from more than one system.\\u201d\\nIn fig. 4 we show that Agg-Mean and Agg-Var slightly outperform all vanilla methods for FashionMNIST, which is equally low-dimensional as MNIST. In contrast to FashionMNIST, MNIST is unique due to the binary data distribution (black and white). By nature of this binary distribution, removing a pixel (e.g., setting it to black) is \\\"informative\\\". We hypothesize that this forced choice makes Sensitivity-n, proposed by [1], less reliable as an evaluation method for MNIST. This is not likely to be an issue in most real world data sets .\\n- \\u201c The caption in Figure 1 says \\\"The decrease of the class score over the number of removed segments is reported.\\\", but I cannot find where this decrease is reported.\\u201d\\nThank you! The IROF score is the integration of the class score over the number of segments. Based on your suggestion we changed the caption to \\u201cThe IROF score of an explanation method is the integrated decrease in the class score over the number of removed segments.\\u201d\\n- \\u201c- On page 5, in the parentheses it is reported that a given explanation method is no better than random choice. I would say that it is better (or not worse) than random choice, otherwise the methods would not provide any useful information.\\u201d\", \"the_quote_in_question\": \"\\u201cA good evaluation method should be able to reject the null hypothesis (a given explanation method is no better than random choice) with high confidence. \\u201d\\nIn this section we evaluate the quality of IROF as an evaluation method. Intuitively, a good evaluator should be able to differentiate between a random baseline and an explanation method with high certainty. Motivated by this, we evaluate IROF with a two-sided t-test on a number of explanation methods. \\n- On page 6, forty images are considered in the text while fifty images are considered in the caption of Table 1. Please align the number with the correct one.\\nThank you for pointing this out. We fixed the table caption.\\n[1] Ancona, Marco, et al. \\\"Towards better understanding of gradient-based attribution methods for Deep Neural Networks.\\\" 6th International Conference on Learning Representations (ICLR 2018). 2018.\"}",
"{\"title\": \"Author's response (continued)\", \"comment\": \"\\u201cThe authors talk about a ''true explanation''. This concept needs to be discussed more clearly and extensively. What does it mean to be a true evaluation? It is also important to prove that the introduced evaluation metric of IROF would assign perfect score for a given true explanation.\\u201d\\n\\nThe existence of a true explanation is an assumption that is implicitly made in all papers about explainability. Roughly speaking, all explanation methods aim to rank input dimensions/features according to their importance for classification. IROF moves away from measuring importance in the (arbitrary) pixel representation and is a first step towards explanation evaluation based on objects instead.\\nThe precise value of a perfect score depends on the dataset and the neural network being evaluated. \\n\\nWe talk about the criterium of a good evaluation method in 4.2. We do not mention the concept of a \\u2018true evaluation\\u2019. \\n\\n\\u201cThe qualitative results in the text and the appendix do not show an advantage. It would be more crips if the authors could run simple tests on human subjects following the methods in the previous literature. \\u201c\\n\\nWe agree that we did not give great weight to human evaluation of explanations. This is by design. For one, the evaluation of neural network explanations hinges on the assumption that the human subject and the neural network rely on the same features for classification, i.e. that the explanations should align. As recent works show, this assumption may not hold up [2,3]. \\n\\nAdditionally, humans are notoriously easy to fool and are biased towards explanations that look crisp. See f.e. examples in [1]. At present we have not seen experimental designs that can leverage the complexity of working with human subjects, this is the topic of ongoing work.\\n\\n\\u201cMany of the introduced heuristics are not backed by evidence or arguments. One example is normalizing individual saliency maps between 0-1 which can naturally be harmful; e.g. aggregating a noisy low-variance method with almost equal importance everywhere (plus additive noise) and a high-variance one which does a good job at distinguishing important pixels - AGG-VAR will not mitigate this issue. \\u201c\\n\\nYou are absolutely correct in that normalizing between 0-1 would give higher weight to low-variance methods, which is why we don\\u2019t do this.\\nAs noted in section 3.1, we normalize all input heatmaps such that the sum of all positive relevance is one. \\nAdditionally we clip all heatmaps at 0. As we note in section 4.1, this resulted only in negligible difference between the clipped and non-clipped version. \\n\\n\\n\\u201cThere are many many grammatical and spelling errors in the paper.\\u201d\\n\\nWe are sorry that we left the reviewer with the impression that the paper has \\u201cmany many grammatical and spelling errors\\u201d. We will definitely prioritize this issue in the revisions.\\nGiven the international aspect of our community, the majority of authors are not native English speakers. We hope the reviewer will let us know if there are mistakes that impede understanding.\\n\\n\\u201cThe font size for Figures is very small and unreadable unless by zooming in.\\u201d\\nThank you for pointing this out. We increased the font size in the revised manuscript. \\nWe supply large version of the figures in the supplements. 
In the updated version of the papers we increased the font size for the figures.\\n\\n\\u201cThe authors introduced aggregation as a method for a ''better explanation''. It has been known that another problem with saliency maps is robustness: one can generate adversarial examples against saliency maps. It would be an interesting question to see whether aggregation would improve robustness rather than how good the map itself is.\\u201d\\nThank you for the insightful suggestion. We note that attacks on explanation methods typically concern specific explanation methods, thus there is some hope that the aggregation would be much more robust to attacks.\\n\\n[1] Adebayo, Julius, et al. \\\"Sanity checks for saliency maps.\\\" Advances in Neural Information Processing Systems. 2018.\\n[2] Geirhos, Robert, et al. \\\"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.\\\" (2018).\\n[3] Ilyas, Andrew, et al. \\\"Adversarial Examples Are Not Bugs, They Are Features.\\\" arXiv preprint arXiv:1905.02175 (2019).\\n[4] Gohorbani, Amirata, et al. \\\"Towards automatic concept-based explanations.\\\" (2019).\\n[5] Hooker, Sara, et al. \\\"Evaluating feature importance estimates.\\\" arXiv preprint arXiv:1806.10758 (2018).\"}",
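To make the normalization defended in this response concrete, a minimal numpy sketch follows. The clipping and sum-to-one steps restate Section 3.1 as described above; the exact AGG-Var combination shown (mean scaled down by per-pixel dispersion) is our assumption for illustration rather than the paper's verbatim formula.

```python
import numpy as np

def normalize(h):
    h = np.clip(h, 0.0, None)            # clip negative relevance at 0
    return h / (h.sum() + 1e-12)         # positive relevance sums to one

def aggregate(heatmaps, eps=1e-12):
    H = np.stack([normalize(h) for h in heatmaps])   # (n_methods, height, width)
    agg_mean = H.mean(axis=0)                        # AGG-Mean: plain average
    agg_var = agg_mean / (H.std(axis=0) + eps)       # assumed AGG-Var variant:
    return agg_mean, agg_var                         # downweight unstable pixels
```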
"{\"title\": \"Author's response I\", \"comment\": \"We thank the reviewer for their detailed review. We address their concerns below.\\n\\n\\u201cThe first message of the paper is trivial and cannot be considered as a novel contribution: the ''proof'' is basically the error of the mean is smaller than the mean of the errors. Additionally, this could have been useful if the case was that there was a need for playing the safe-card: that is, all of the existing methods have equal saliency map error and averaging will decrease the risk. Not only authors do not provide any evidence but also both the experimental results of the paper itself (results in Table 2 and Fig 4 are disproving this assumption) and the existing literature disprove it.\\u201d\\n\\nAs we clearly state and as noted by the reviewer, the bias-variance decomposition in Eq. (1) shows that the error of the mean is less than or equal to the mean error over methods. We can not say anything about the error of the mean relative to the minimum error method. Please note, the result does not rely on all methods having equally large errors nor does it make assumptions about correlation structure of the explanations. We agree with the reviewer that this \\\"proof\\\" is well-known in other contexts. In particular we note, that Eq. (1) is an instance of standard bias-variance theory. Yet, its relevance for explainability is new. We decided to give a detailed derivation in the Appendix A1, following comments from readers of earlier versions of the manuscript. \\n\\nWe then go on to show, that in practice an aggregate also outperforms the best unaggregated method, i.e. has an error smaller than the minimum error. \\nAs another reviewer noted, an interesting idea for the future would be to study which minimum set of methods are useful to an aggregate. We agree with you that an interesting direction for that would be to give theoretical guarantees based on the similarity of the methods.\\n\\n\\u201cEven considering this assumption to be correct, the contribution is minimal to the field and benefits of averaging saliency maps have been known since the SmoothGrad paper. \\u201c\\n\\nBoth, SmoothGrad and AGG-Mean aggregate explanations. At first glance, we agree that they do look similar. However, the two approaches differ in one vital aspect: \\nSmoothGrad averages explanations from the same explanation method. This reduces the variance. AGG-Mean and AGG-Var averages over different explanation methods. This reduces the bias. As seen in our experiments, both AGGMean and AGGVar comfortably outperform SmoothGrad.\\nInterestingly, SmoothGrad is outperformed by the unaggregated saliency map on MNIST. On MNIST, AGGMean and AGGVar also do not outperform the best unaggregated method. It seems as if aggregation is more beneficial on high-dimensional datasets than on low-dimensional datasets.\\n\\n\\n\\u201cThe second contribution is an extension of existing evaluation methods (e.g. SDC) where instead of removing (replacing by mean) individual pixels, the first segment the image and remove the segments.\\u201d\\n\\nWe cite several other approaches to evaluation in our paper. A the time [4], which introduced SDC, had not been published yet (now at NeuRIPS 2019). \\nAs both other reviewers noted, we validate our results with extensive experiments, including on the validity of IROF as an evaluation method. \\nThe only other work that we are aware of that evaluates the evaluation method they introduced is [5]. 
\\n\\nThe reviewer asks what is meant by a \\u2018true explanation\\u2019 and a \\u2018true evaluation\\u2019. We agree that these are concepts mainly needed for theoretical analysis and for gaining intuition. There is ample need for clarifying the objectives and evaluation in this\\nfield. See additional comments below. We feel that our contribution makes an important step in this direction by proposing and evaluating a systematic and objective way to evaluate a new explanation method. Importantly, our work also does not have a large overhead involved, an aspect that becomes more and more important. \\n\\n\\n\\u201cTheir definition of a feature, though, are segments generated by simple segmentation methods. There is a long line of literature showing the incorrectness of this assumption; i.e. a group of coherent nearby pixels does not necessarily constitute a feature seen by the network and does not necessarily remove the mentioned problem of the high correlation of pixels.\\u201d\\n\\nWe respectfully disagree with you about this. We are far from the only ones using segments as a rough approximation for features. The most recent in the context of neural networks is in fact the paper you mentioned on SDC [4].\"}",
"{\"title\": \"Author's response\", \"comment\": \"We thank the reviewer for their positive feedback and ideas for further work.\\n\\u201cThe only obvious downside of AGG-Mean and AGG-Var is that one would have to implement and run all constituent evaluation methods, which is expensive. \\u201c\", \"in_regards_to_needing_to_implement_and_run_all_constituent_explanation_methods\": \"The backpropagation-based methods we use for the aggregation are known for being comparatively fast to compute. To illustrate, obtaining an explanation with LIME, which is a sample-based explanation method, takes more time than obtaining an aggregation with all considered backpropagation -based methods.\\n\\u201cJust as an idea for future work: given N explanation methods, one could ablate away one method at a time, thus getting an idea of whether any of the N explanations are redundant in the presence of others. Recommending a minimal set of useful explanation methods to the NLP community would then decrease the overall complexity of replicating the end-to-end explanation system.\\u201d\\nWe also agree with the reviewer that more complete ablation studies are interesting, complementing the mini-ablation study we have presented in Appendix A2, where we evaluate combinations of two methods.\"}",
"{\"title\": \"Changes in the revision\", \"comment\": \"We would like to thank all reviewers for their time and effort. We have responded to their concerns below, and made the following changes to the manuscript as a result:\\nincreased font size for figure 2 and 4\\nmoved second row of figure 2 to supplements due to space\\nchanged caption for figure 1\\ncorrected number in table 1\\ncorrected typos\\nchanged first paragraph of section 4.2\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper, inspired by the established technique of model ensembling, proposes two methods (AGG-Mean and AGG-Var) for aggregating different model explanations into a single unified explanation. The authors mathematically prove that the derived explanation is guaranteed to be more truthful than the average performance of the constituent explanations. In practice, the aggregation consistently outperforms *all* individual explanations, not just their aggregated performance. Additionally, the paper introduces a new quantitative evaluation metric for explanations, free of human intervention: IROF (Incremental Removal of Features) incrementally grays out the segments deemed as relevant by an explanation method and observes how quickly the end-task performance is degraded (good explanations will cause fast degradation). Solid validation confirms that the IROF metric is sound.\\n\\nI support paper acceptance. The experimental section is particularly strong, and makes a convincing argument for both the aggregation methods and the IROF metric. Even though I am not very familiar with the explainability literature and I would not be able to point out an omitted baseline for instance, the wide range of model architectures and aggregated explanation techniques makes a solid case. I appreciate the experiments on low-dimensional input, where the authors are deliberately showing a scenario in which their method does not score huge gains; this brings even more credibility to the paper. The presentation itself is clear, and there are no language or formatting issues.\\n\\nThe only obvious downside of AGG-Mean and AGG-Var is that one would have to implement and run all constituent evaluation methods, which is expensive. Just as an idea for future work: given N explanation methods, one could ablate away one method at a time, thus getting an idea of whether any of the N explanations are redundant in the presence of others. Recommending a minimal set of useful explanation methods to the NLP community would then decrease the overall complexity of replicating the end-to-end explanation system.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper has two main messages: 1- Averaging over the explanation (saliency map in the case of image data) of different methods results in a smaller error than an expected error of a single explanation method. 2- Introducing a new saliency map evaluation method by seeking to mitigate the effect of high spatial correlation in image data through grouping pixels into coherent segments. The paper then reports experimental results of the methods introduced in the first message being superior to existing saliency map methods using the second message (and an additional saliency map evaluation method in the literature). They also seek to magnify the capability of the 2nd message's evaluation method by showing its better capability at distinguishing between a random explanation and an explanation method with a signal in it.\", \"i_vote_for_rejecting_this_paper_for_two_main_reasons\": \"the contributions are not enough for this veneue, and the paper's introduced methods are not backed by convincing motivations. The first message of the paper is trivial and cannot be considered as a novel contribution: the ''proof'' is basically the error of the mean is smaller than the mean of the errors. Additionally, this could have been useful if the case was that there was a need for playing the safe-card: that is, all of the existing methods have equal saliency map error and averaging will decrease the risk. Not only authors do not provide any evidence but also both the experimental results of the paper itself (results in Table 2 and Fig 4 are disproving this assumption) and the existing literature disprove it. Even considering this assumption to be correct, the contribution is minimal to the field and benefits of averaging saliency maps have been known since the SmoothGrad paper. The second contribution is an extension of existing evaluation methods (e.g. SDC) where instead of removing (replacing by mean) individual pixels, the first segment the image and remove the segments. The method, apart from being very similar to what is already there in the literature, is not introduced in a well-motivated manner. The authors claim that their evaluation method is able to circumvent the problem with removing individual pixels (which is the removed information of one pixel is mitigated by the spatial correlations in the image and therefore will not result in a proportional loss of prediction power) by removing ''features'' instead. Their definition of a feature, though, are segments generated by simple segmentation methods. There is a long line of literature showing the incorrectness of this assumption; i.e. a group of coherent nearby pixels does not necessarily constitute a feature seen by the network and does not necessarily remove the mentioned problem of the high correlation of pixels. This method does not remove \\\"the interdependency of inputs\\\" for the saliency evalatuion metric. Even assuming the correctness of this assumption, the contribution over what already exists in the literature is not enough for this venue.\", \"a_few_suggestions\": [\"The authors talk about a ''true explanation''. This concept needs to be discussed more clearly and extensively. What does it mean to be a true evaluation? 
It is also important to prove that the introduced evaluation metric of IROF would assign perfect score for a given true explanation.\", \"The mentioned problem of pixel correlations that IROF seeks to mitigate is also existing in other modalities of data and the authors do not talk about how IROF could potentially be extended.\", \"The qualitative results in the text and the appendix do not show an advantage. It would be more crips if the authors could run simple tests on human subjects following the methods in the previous literature.\", \"There are many many grammatical and spelling errors in the paper. The font size for Figures is very small and unreadable unless by zooming in.\", \"Many of the introduced heuristics are not backed by evidence or arguments. One example is normalizing individual saliency maps between 0-1 which can naturally be harmful; e.g. aggregating a noisy low-variance method with almost equal importance everywhere (plus additive noise) and a high-variance one which does a good job at distinguishing important pixels - AGG-VAR will not mitigate this issue.\"], \"one_question\": \"The authors introduced aggregation as a method for a ''better explanation''. It has been known that another problem with saliency maps is robustness: one can generate adversarial examples against saliency maps. It would be an interesting question to see whether aggregation would improve robustness rather than how good the map itself is.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents a study on explanation methods, proposing an interesting way to aggregate their results and providing empirical evidence that aggregation can improve the quality of the explanations.\\n\\nThe paper considered only methods using CNN for classifying images, leaving other applications for future investigation.\\n\\nThe results show aggregation of different explanation methods leads to better explanations, and is therefore exploitable, for high dimensional images while degrades with low-dimensional ones. In the latter case, it happens that the aggregated explanation explains a bit worse than the best non-aggregated explanation (from the graph it seems a very small difference though). This is odd because I would have assumed to see an improvement (at least a small one) using information from more than one system.\\n\\nThe paper also presents a score for evaluating explanation methods, which shows good results.\\n\\nThe paper is interesting and well written, the experimental campaign extensive enough and the methods are presented clearly.\", \"there_are_some_minor_problems_to_be_solved\": [\"In the abstract, there are two periods before the last sentence.\", \"When defining the size of matrices I suggest using \\\\times instead of x to improve readability\", \"The caption in Figure 1 says \\\"The decrease of the class score over the number of removed segments is reported.\\\", but I cannot find where this decrease is reported.\", \"On page 5, in the parentheses it is reported that a given explanation method is no better than random choice. I would say that it is better (or not worse) than random choice, otherwise the methods would not provide any useful information.\", \"On page 6, forty images are considered in the text while fifty images are considered in the caption of Table 1. Please align the number with the correct one.\"]}"
]
} |
Byl1W1rtvH | Recurrent Hierarchical Topic-Guided Neural Language Models | [
"Dandan Guo",
"Bo Chen",
"Ruiying Lu",
"Mingyuan Zhou"
] | To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation. Moving beyond a conventional language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependences. For inference, we develop a hybrid of stochastic-gradient MCMC and recurrent autoencoding variational Bayes. Experimental results on a variety of real-world text corpora demonstrate that the proposed model not only outperforms state-of-the-art larger-context language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent. | [
"Bayesian deep learning",
"recurrent gamma belief net",
"larger-context language model",
"variational inference",
"sentence generation",
"paragraph generation"
] | Reject | https://openreview.net/pdf?id=Byl1W1rtvH | https://openreview.net/forum?id=Byl1W1rtvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PELV3SSef0",
"datcB2GmQi",
"X5Xuht5sQS",
"D2RzZ26W8h",
"c9TwXMtqy4",
"HygW6Tz7ar",
"SJxPxG6G6H",
"B1e7g2pjjS",
"B1xyV5aior",
"SyezH_ajoH",
"HJlLML6ooB",
"HJgqifHA5r",
"B1x_ZuxnqH",
"ByleVGe25S",
"BJeimxqd9S",
"BJxNohr45S",
"rkxufFDxqS",
"H1xrSldCKr",
"H1xzIs-2PH"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576881576599,
1576875322794,
1576875301915,
1576874582047,
1576798725697,
1575329208747,
1575305710845,
1573800938741,
1573800487227,
1573799993677,
1573799437764,
1572913826267,
1572763648426,
1572762152093,
1572540450528,
1572261019958,
1572006160357,
1571876925344,
1569622857576
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1529/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1529/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1529/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper1529/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1529/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1529/AnonReviewer2"
],
[
"~pankaj_gupta1"
]
],
"structured_content_str": [
"{\"title\": \"Timeline of the reviewing process\", \"comment\": \"The timeline of the reviewing process for our paper is as follows:\", \"oct_31\": \"The Area Chair (AC) posted public comments in OpenReview (before we could see the reviews)\", \"nov_3\": \"We responded to the AC\\u2019s comments in OpenReview (the AC replied to our response on Nov 4)\", \"nov_5\": \"All three reviews were released, with 8,8,8 ratings\", \"nov_15\": \"We uploaded the revised paper and posted our response to the three reviewers and the AC\", \"dec_2\": \"Two additional reviews were posted. Both were very negative and provided the lowest possible rating of 1\\nDec 2/3: We expressed concerns to the program chairs, who kindly clarified that no additional author rebuttals are allowed before the final decision\\nDec 2/3: Multiple tweets appeared in Twitter pointing to a Reddit post.\", \"dec_5\": \"We wrote our response to Reviewers 4 and 5 but could not post it in OpenReview\", \"dec_19\": \"The decision of \\\"Rejection\\\" was announced\", \"dec_20\": \"We posted our response to Reviewers 4 and 5\\n\\nWhile none of the authors were active in social media, the reviewing process of our paper that appeared out of the ordinary had caught the attention of social media, when two additional reviews, whose ratings were significantly different from the previous ones, were posted on December 2, 17 days after the rebuttal deadline and 4 days before the meta-review deadline. \\n\\nWe were extremely surprised by the comments of Reviewers 4 and 5. As we were not able to respond in OpenReview before the release of the final decision, we had thought about posting our response in Reddit and engaging in these discussions. An important reason that we decided not to do so, after interacting with a program chair, was that ICLR 2020 does not allow post-rebuttal interactions between the authors and the reviewers & AC, and there was no guarantee that any of the reviewers & AC had not participated or would not participate in these discussions. Thus if we did post our response and engage in these discussions before the final decision, it is possible we would have violated the rule by going around OpenReview to interact with the reviewers & AC. \\n\\nRejection is a normal part of life and it is not uncommon for reviewers from different research fields to clearly disagree with each other. The AC did raise several good concerns that we appreciated and had tried our best to respond given the time constraint. We feel that the AC could have rejected our paper based on his/her own expertise and judgment, to which we would probably have little to complain. \\n\\nWe hope how our paper had been reviewed could provide an example to help refine the reviewing process of ICLR and other open-reviewed machine learning conferences. For example, allowing the authors to respond to additional reviews posted after the rebuttal deadline. For all four authors, we prefer to focusing on moving our research forward, rather than getting distracted by unexpected/unwanted social media attentions that may have little to do with the quality of the submission.\"}",
"{\"title\": \"Response to Review #4 (Part 2)\", \"comment\": \"4. R4 questioned the choice of evaluation datasets.\", \"response_4\": \"We report the perplexity results in Table 1, where the datasets are available online. We choose these datasets mainly to make direct comparison with existing topic-guided language models. Note we did consider evaluating rGBN-RNN and GBN-RNN on PTB. Unfortunately, as PTB has been pre-processed to a set of sentences, with the document boundary information discarded, we are not able to apply rGBN-RNN and GBN-RNN to PTB since they both need to know which sentences come from which documents.\\n\\n5. R4 commented: \\u201cThe inference part is not particularly self-contained.\\u201d\", \"response_5\": \"We note we have described a hybrid variational/sampling inference for rGBN-RNN in Algorithm 1 and provided the details about sampling Phi and Pi with TLASGR-MCMC in Appendix C. We also note our code is publicly available.\\n\\n6. R4 commented: \\u201cEvaluation of the induced topic hierarchy (Figure 4) is only done through qualitative samples, and the paper does not really explain how to pick the samples (i.e. possible cherry-picking). I am not very familiar with the topic modelling literature, but it would be nice if the induced hierarchy can be evaluated quantitatively.\\u201d\", \"response_6\": \"We refuse to accept the hypothetical accusation of \\u201cpossible cherry-picking.\\u201d In page 7 we have described how we visualize the topic hierarchy: \\u201cIn Fig. 4, we select a large-weighted topic at the top hidden layer and move down the network to include any lower-layer topics connected to their ancestors with sufficiently large weights. Horizontal arrows link temporally related topics at the same layer, while top-down arrows link hierarchically related topics across layers.\\u201d We\\u2019d be glad to provide more analogous plots, each of which is a topic hierarchy rooted at a node of the top hidden layer (we could provide a code to automatically generate these topic hierarchy plots given the inferred Phi and Pi matrices).\\n\\nWe had responded to both a public comment by Pankaj Gupta and Review # 3 about why quantitatively evaluation for the topic modeling part had not been performed. Please see these responses for details.\\n\\nWe emphasize while we focus on guiding (stacked-)RNN with GBN or rGBN, the same idea can be adapted to potentially improve a Transformer based model with the help of GBN or rGBN. This extension, which is non-trivial at all due to the size of Transformer, is beyond the scope of this paper. With that said, motivated by the AC\\u2019s comments, we have been working on GBN-Transformer and rGBN-Transformer and we hope we can report comprehensive results now, but Transformer is so computationally demanding to train that our pace of progress has been limited by our current computational resource.\"}",
"{\"title\": \"Response to Review #4 (Part 1)\", \"comment\": \"We strongly argue against the comments of Reviewer 4 (R4).\\n\\n1. R4 dismissed the whole idea of using a topic modeling approach to model document-level information because \\u201cPretty much every LM paper \\u2026 uses LSTMs/Transformers incorporate cross-sentential, document-level information as context, through a very simple approach of just concatenating all the sentences and adding a unique token to mark sentence boundaries.\\u201d\", \"response_1\": \"Yes, this is such a simple approach, but does it work well and solve all the problems? If this simple approach works so well, why should Dieng et al. (ICLR 2018) even bother to propose topic-RNN, Wang et al. (AISTATS 2018, NAACL 2019) propose topic guided language models, and Dai et al. (ACL 2019) propose Transformer-XL? We recommend R4 to at least take a look at Related Work section of Dai et al. (ACL 2019), from which we quote one sentence: \\u201cMore broadly, in generic sequence modeling, how to capture long-term dependency has been a long-standing research problem.\\u201d In other words, simply concatenating sentences has not yet solved the problem of capturing long-term dependency.\\n\\n2. R4 commented that \\u201cPrior work has shown that LSTMs/Transformers with cross-sentential context can, and in fact do, make use of information from previous sentences,\\u201d provided two evidences, and claimed that \\u201cthese prior works defeat the paper\\u2019s motivation of why it claims to need topic models in the first place (i.e. to model cross-sentential context), while just concatenating multiple sentences as context would do, and in fact has been done many times.\\u201d\", \"response_2\": \"Related to Response 1, concatenating multiple sentences as the input is indeed simple, but at what cost and how effective? One cost is the model size may quickly increase with the input sequence length, making it hungrier for computation, memory, and data size, and more difficult to train. We don\\u2019t understand why the existence of a remedy to model cross-sentential context can be used to defeat our motivation of using topic models to capture document-level semantic information, with which each input to the proposed lager-context language model can be as short as a single sentence.\\n\\n2.1. Prior work (mostly in Transformer-land) has come up with ways to make use of very long-range context, from Transformer-XL to the more recent compressive Transformer (https://openreview.net/forum?id=SylKikSYDH) that can condition on entire books. While these are done for Transformers, in principle one can also apply similar techniques to LSTMs.\\n\\nResponse 2.1: While in principle one can also apply similar techniques to LSTMs, we have no comments on something that has not yet been done and validated.\\n\\n3. R4 commented: \\u201cWhile Transformer-XL has the potential to make use of word orders in the preceding sentences, it seems that this paper\\u2019s approach cannot do that, since they only take the bag-of-words from the preceding sentences. It thus seems that their bag-of-word approach is less expressive, and hence less powerful, than the simpler alternative of concatenating sentences.\\u201d\", \"response_3\": \"We are again surprised that R4 is so determined to completely dismiss the idea of using the bag-of-words representation just because Transformer-XL \\u201chas the potential\\u201d to make use of word orders in the preceding sentences. 
If the key goal is to capture document-level semantic information, discarding the word order could help better capture long-range word dependencies. As discussed in TopicRNN (Dieng et al, ICLR 2018), probabilistic topic models are a family of models that can be used to capture global semantic coherency (D. M. Blei and J. D. Lafferty. Topic models. Text mining: classification, clustering, and applications, 10(71):34, 2009), providing a powerful tool for summarizing, organizing, and navigating document collections. One basic goal of such models is to extract document-level word concurrence patterns into latent topics from a text corpus. Documents are then represented as mixtures over these latent topics. Through posterior inference, the learned topics capture the semantic coherence of the words they cluster together (Mimno et al., ACL 2011). Most topic models are \\u201cbag of words\\u201d models in that the word order is ignored, and this makes it easier for topic models to capture global semantic information compared with conventional RNN-based language models.\\n\\nIn addition, given the learned rGBN-RNN, we can generate the sentence/paragraph given the key words of a topic, which is a unique feature of topic-guided language models.\"}",
"{\"title\": \"Response to Review #5\", \"comment\": \"We strongly argue against the comments of Reviewer 5 (R5).\\n\\n(1) To make R5 feel more comfortable about our claim that the RNN-based language model component is used to capture syntactic information, we'd like to point out similar claims in the literature: TopicRNN [1] claims in its abstract that \\\"TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics;\\u201d TCNLM [2] claims in its conclusion that \\\"the topic model part captures the global semantic meaning in a document, while the language model part learns the local semantic and syntactic relationships between words.\\\" We could have revised this claim if R5 could explain why he/she felt uncomfortable about it.\\n\\n[1] Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. TopicRNN: A recurrent neural network with long-range semantic dependency. In ICLR, 2017. \\n\\n[2] Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. Topic compositional neural language model. In AISTATS, pp. 356\\u2013365, 2018\\n\\n(2) We are very surprised to read R5's comment: \\\"I have no idea what a BoW vector looks like or how it is constructed,\\\" especially considering that R5 claimed: \\\"I have published in this field for several years.\\\" Please allow us to explain a well-known concept in text analysis and retrieval: denoting V as the vocabulary size, a bag-of-words (BoW) vector is a V-dimensional term-frequency count vector, whose $v$th element counts the number of times the $v$th term in the vocabulary appears in a document (or a set sentences); while it completely ignores the word order, it is well suited to capture document-level word concurrence patterns (topics).\\n\\n(3) A Dirichlet prior imposes a simplex constraint (nonnegative elements + unit L1 norm of the vector) and often encourages sparsity (to aid both interpretability and identifiability). It also facilitates posterior inference as the conditional posterior of each column of \\\\Phi also follows the Dirichlet distribution after performing appropriate variable augmentation (this property has been exploited by [3] to derive Gibbs sampling and [4] to derive TLASGR-MCMC, an efficient stochastic-gradient MCMC under the simplex constraint). We consider these as well-known concepts in topic modeling related literature and hence unnecessary to provide that detailed explanations.\\n\\n[3] Mingyuan Zhou, Yulai Cong, and Bo Chen. Augmentable gamma belief networks. J. Mach. Learn. Res., 17(163):1\\u201344, 2016.\\n\\n[4] Yulai Cong, Bo Chen, Hongwei Liu, and Mingyuan Zhou. Deep latent Dirichlet allocation with topic-layer-adaptive stochastic gradient Riemannian MCMC. In Proc. of ICML 2017 \\n\\n(4) We don't understand why you consider Eq. (5) as wrong. d_j will be the BoW vector extracted from all sentences in a document except sentence s_j. We model d_j under the Poisson factor analysis likelihood as p(d_j | ...) = Poisson(d_j; \\\\Phi^(1) \\\\theta_j^(1)). While Eq. 5 is not the marginal likelihood of an \\\"exact\\\" generative model, due to the overlap between the d_j's and the words y_jt's, it is perfectly valid to help introduce the objective function (the ELBO in Eq. 
6) to be optimized in this paper.\\n\\nWe also note at the testing stage, if the task is solely for language generation, then the topic modeling component in the decoder will be discarded, and d_j will only consist of s_1,...,s_{j-1}.\\n\\nWe further note we submitted the code, with which you can verify the technical details and experimental results.\\n\\nWe emphasize while we focus on guiding (stacked-)RNN with GBN or rGBN, the same idea can be adapted to potentially improve a Transformer based model with the help of GBN or rGBN. This extension, which is non-trivial at all due to the size of Transformer, is beyond the scope of this paper. With that said, motivated by the AC\\u2019s comments, we have been working on GBN-Transformer and rGBN-Transformer and we hope we can report comprehensive results now, but Transformer is so computationally demanding to train that our pace of progress has been limited by our current computational resource.\"}",
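For concreteness, the BoW definition given in point (2) can be written out in a few lines of Python; the vocabulary and sentence below are invented for illustration.

```python
from collections import Counter
import numpy as np

# A V-dimensional term-frequency count vector: element v counts how many
# times vocabulary term v appears in the document (word order is ignored).
vocab = ["topic", "model", "language", "sentence", "word"]
doc = "the topic model guides the language model"
counts = Counter(w for w in doc.split() if w in vocab)
d = np.array([counts[v] for v in vocab])   # -> [1, 2, 1, 0, 0]
```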
"{\"decision\": \"Reject\", \"comment\": \"This paper was a very difficult case. All three original reviewers of the paper had never published in the area, and all of them advocated for acceptance of the paper. I, on the other hand, am an expert in the area who has published many papers, and I thought that while the paper is well-written and experimental evaluation is not incorrect, the method was perhaps less relevant given current state-of-the-art models. In addition, the somewhat non-standard evaluation was perhaps causing this fact to be masked. I asked the original reviewers to consider my comments multiple times both during the rebuttal period and after, and unfortunately none of them replied.\\n\\nBecause of this, I elicited two additional reviews from people I knew were experts in the field. The reviews are below. I sent the PDF to the reviewers directly, and asked them to not look at the existing reviews (or my comments) when doing their review in order to make sure that they were making a fair assessment. \\n\\nLong story short, Reviewer 4 essentially agreed with my concerns and pointed out a few additional clarity issues. Reviewer 5 pointed out a number of clarity issues and was also concerned with the fact that d_j has access to all other sentences (including those following the current sentence). I know that at the end of Section 2 it is noted that at test time d_j only refers to previous sentences, but if so there is also a training-testing disconnect in model training, and it seems that this would hurt the model results.\\n\\nBased on this, I have decided to favor the opinions of three experts (me and the two additional reviewers) over the opinions of the original three reviewers, and not recommend the paper for acceptance at this time. In order to improve the paper I would suggest the following (1) an acknowledgement of standard methods to incorporate context by processing sequences consisting of multiple sentences simultaneously, (2) a more thorough comparison with state-of-the-art models that consider cross-sentential context on standard datasets such as WikiText or PTB. I would encourage the authors to consider this as they revise their paper.\\n\\nFinally, I would like to apologize to the authors that they did not get a chance to reply to the second set of reviews. As I noted above, I did try to make my best effort to encourage discussion during the rebuttal period.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"[Additional review]\\nThis paper proposes a technique to incorporate document-level topic model information into language models. \\n\\nWhile the underlying idea is interesting, my biggest issue is with the misleading assertions at the very beginning of the paper. In the second paragraph of Section 1, the paper claims that RNN-based LMs often make independence assumptions between sentences, hence why they develop a topic modelling approach to model document-level information. Some issues with this claim:\\n\\n1. Pretty much every LM paper that evaluates on language modelling benchmark (PTB, WT-103, Wikitext-2) uses LSTMs/Transformers incorporate cross-sentential, document-level information as context, through a very simple approach of just concatenating all the sentences and adding a unique token to mark sentence boundaries.\\n\\n2. Prior work has shown that LSTMs/Transformers with cross-sentential context can, and in fact do, make use of information from previous sentences.\\n\\na. Evidence 1: Khandelwal et al. (2018) showed that LSTMs memorise word orders from the past ~50 tokens, and retain semantic information from the past ~200 tokens; both of which extend far beyond the length of an average sentence, suggesting that information from the previous sentences is used in the predictions of the current sentence.\\n\\nb. Evidence 2: Language models that operate on single sentences typically do worse than language models that take into account cross-sentential context, e.g. the language model of Kim et al. (2019) that operates on single sentences gets ~90 ppl. on PTB test set, while LSTMs that condition on multiple sentences get a much better ~50-something ppl. on Mikolov PTB.\\n\\nCrucially, these prior works defeat the paper\\u2019s motivation of why it claims to need topic models in the first place (i.e. to model cross-sentential context), while just concatenating multiple sentences as context would do, and in fact has been done many times.\\n\\n2. Prior work (mostly in Transformer-land) has come up with ways to make use of very long-range context, from Transformer-XL to the more recent compressive Transformer (https://openreview.net/forum?id=SylKikSYDH) that can condition on entire books. While these are done for Transformers, in principle one can also apply similar techniques to LSTMs.\\n\\n3. While Transformer-XL has the potential to make use of word orders in the preceding sentences, it seems that this paper\\u2019s approach cannot do that, since they only take the bag-of-words from the preceding sentences. It thus seems that their bag-of-word approach is less expressive, and hence less powerful, than the simpler alternative of concatenating sentences.\\n\\n4. The perplexity results (Table 1) are not done on very standard datasets (no PTB evaluation for instance). It is thus hard to evaluate the strength of the baseline models. In the paper's defense, it seems that they were following the experimental setup of Wang et al. (2019), but the paper should elaborate more on the choice of evaluation datasets.\\n\\n5. The inference part is not particularly self-contained. 
The paper simply defers the TLASGR-MCMC method (an important component for making inference scalable) to prior work (Cong et al., 2017; Zhang et al., 2018), yet does not explain (even briefly) how the approach works, and how it can be combined with their recurrent topic model formulation. \\n\\n7. Evaluation of the induced topic hierarchy (Figure 4) is only done through qualitative samples, and the paper does not really explain how the samples were picked (i.e. possible cherry-picking). I am not very familiar with the topic modelling literature, but it would be nice if the induced hierarchy could be evaluated quantitatively.\", \"references\": \"1. Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. Sharp nearby, fuzzy far away. In Proc. of ACL 2018.\\n2. Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gabor Melis. Unsupervised recurrent neural network grammars. In Proc. of NAACL 2019.\\n3. Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. Topic-guided variational autoencoders for text generation. In Proc. of NAACL 2019.\\n4. Yulai Cong, Bo Chen, Hongwei Liu, and Mingyuan Zhou. Deep latent Dirichlet allocation with topic-layer-adaptive stochastic gradient Riemannian MCMC. In Proc. of ICML 2017.\\n5. Hao Zhang, Bo Chen, Dandan Guo, and Mingyuan Zhou. WHAI: Weibull hybrid autoencoding inference for deep topic modeling. In Proc. of ICLR 2018.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"The model description is confusing and lots of statements are presented without appropriate or enough justification. For example, (1) in the last paragraph of page 2, they claimed that the language component is used in their model to capture syntactic information, which I do not feel comfortable to accept; (2) in the first paragraph of page 3, it says \\\"we define d_j as the BoW vector summarizing only the preceding sentences\\\", without further information, I have no idea what a BoW vector looks like or how it is constructed; (3) in the last paragraph of page 3, it says using Dirichlet priors to make \\\"the latent representation more identifiable and interpretable, but also facilitates inference\\\", which I really don't know what it means. There are a few more examples like these.\\n\\nMore importantly, I think Eq. (5) is wrong, which makes me question their whole methodology. To be specific, in their definition, d_j refers to a summary of all the sentences other than s_j. That means, \\n- for s_1, d_1 is defined on s_2, s_3, s_4, ..., s_J; and \\n- for s_2, d_2 is defined on s_1, s_3, s_4, ..., s_J. \\nIn other words, there is a huge overlap between any two d_j and d_{j'}. Therefore, I am not sure the decomposition on the right hand side of equation 5 (particularly, the decomposition of p(d_j | ...) ) is valid. \\n\\nAlthough they have some interesting results and the lowest PPLx comparing to other models, I do not think this paper is ready to be accepted.\"}",
"{\"title\": \"We have clarified our claims\", \"comment\": \"We appreciate your comments and suggestions. Given the time constraint, we leave comprehensive comparisons between rGBN-RNN and Transformer based language models for future study. We have followed your suggestion to revise our claim to be: \\\"the proposed model not only outperforms state-of-the-art larger-context RNN-based language models, but also...\\\"\"}",
"{\"title\": \"We have made the suggested improvements\", \"comment\": \"Thank you for your comments and suggestions. We have revised our paper accordingly and highlighted the main changes in red.\", \"q1\": \"It seems strange not to mention all of the recent high-profile work on LM-based pre-training, since my impression is that these models operate effectively with large multi-sentence contexts. Do models like BERT and GPT-2 fail to take into account inter-sentence relations, as the paper claims most LMs do? I would like to see more discussion of how this work fits with that.\", \"a1\": \"Thanks for your suggestion and we have now added related discussion about the recent high-profile work on LM-based pre-training (e.g., in the third paragraph of Section 3.1). First, while Transformer based LMs have been shown to be powerful, they often have significantly more parameters and require significantly more computation to train than RNN based LMs. Second, we do not view rGBN-RNN and Transformer based LMs as directly comparable given their differences in sizes and structures; after carefully comparing rGBN-RNN with Transformer based LMs, we believe a promising future extension of rGBN-RNN is to replace its stacked RNN with a Transformer-based LM, i.e., constructing a rGBN guided Transformer (rGBN-Transformer). Note the AC had made related comments, to which we had provided detailed response; please see that response for more details.\", \"q2\": \"I don\\u2019t know that it makes sense to highlight as the contribution of this model that it can \\\"simultaneously capture syntax and semantics\\\". It\\u2019s not clear to me that other language models fail to capture semantics (keeping in mind that semantics applies within a sentence and not just at a global level) \\u2013 rather, it seems that the strength of this model is in capturing semantic relations above the sentence level. If this is correct, that should be expressed more precisely.\", \"a2\": \"Thank you for your insightful comments. We have changed \\u201csemantics\\u201d to \\u201cglobal semantics\\u201d to more precisely express the idea that rGBN-RNN can capture semantic relations above the sentence level.\", \"q3\": \"It\\u2019s not clear to me what we learn from Figure 3. The claim is that \\\"the color of the hidden states of the stacked RNN based language model at layer 1 changes quickly ... because lower layers are in charge of learning short-term dependencies\\\", but looking at the higher layers I\\u2019m not seeing clear evidence of capturing of long-distance dependencies, or even clear capturing of syntactic constituents. The takeaways from that figure should be made clearer and should be sure to correspond to what we can actually confidently conclude from that analysis.\", \"a3\": \"Thank you for your suggestion. We have carefully revised the first paragraph of Section 3.2 to explain in detail how to interpret Figure 3 (by comparing the segments exhibited in the L2-norm sequence of a hidden layer to the corresponding words in the sentence), which now more clearly supports our claim.\"}",
"{\"title\": \"More explanations and details are now included\", \"comment\": \"Thank you for your comments and suggestions. We have revised our paper accordingly and highlighted the main changes in red.\", \"q1\": \"Though the paper is relatively well written, it would have been good to explain some points on architecture and inference. It would have been better to provide the rationale behind some architectural decisions like associating \\\\theta^1 and g^1 as against g^3. Related to this, Figure 1 has a typo where \\\\theta^2 is associated with g^3.\", \"a1\": \"Thank you for your insightful comments, which make us realize that the upward arrows in Fig. 1(b) and/or the upward red arrows in Fig. 1(c) can be flipped, leading to three additional architectural variations of the proposed rGBN-RNN. Since the current rGBN-RNN already clearly outperforms other RNN based models, given both the space and time constraints, we leave these three architectural variations of rGBN-RNN to future study. Also thank you for catching the typo! We have fixed it.\", \"q2\": \"An explanation on combining all the latent representation in the RNN model used for language modeling will be helpful, though this is motivated by previous approaches.\", \"a2\": \"We have added two reasons to explain why combining all the latent representation to help language modeling. Please see the paragraph below Equation (4) for detail.\", \"q3\": \"A proper explanation the TLASGR-MCMC approach for sampling from the posterior of rGBN parameters is missing in the main paper. It would be good to provide some details of this in the main paper.\", \"a3\": \"We have added more details on TLASGR-MCMC to Appendix C. For now, we have kept all the details of TLASGR-MCMC in Appendix C due to the space constraint (we find it difficult to move only one or two equations back to the main paper while maintaining the clarity of the technical details). We are open to your further suggestions on this.\", \"q4\": \"Experimental section compares the proposed approach against many SOTA approaches for the language modeling task. It would have been good to provide a quantitive evaluation of the topic modeling task also in addition to demonstrating them qualitatively.\", \"a4\": \"We have not provided quantitative evaluation of the topic modeling task mainly for two reasons. First, as our paper is focused on improving an RNN-based language model with a deep dynamic topic model, adding that evaluation may distract it from the main purpose. Second, our topics learned by rGBN-RNN are hierarchical between different layers and recurrent at the same layer. Thus it is unclear whether existing quantitative measures (e.g., topic coherence), which are often designed to evaluate conventional single-layer topic model without temporal structure, would be appropriate to evaluate the quality of the rGBN topics that are both hierarchical and temporally linked.\"}",
"{\"title\": \"Complexity analysis has been added to the Appendix\", \"comment\": \"Thank you for your positive feedback. We have revised our paper accordingly and highlighted the main changes in red.\", \"q\": \"One suggestion is that the authors didn\\u2019t include computational analysis about the complexity and loads of the proposed method as compared with the baseline methods.\", \"a\": \"Thank you for your suggestion. We have revised our paper to include a comprehensive complexity analysis, as shown in Appendix E. Examining Table 1 and the newly added Table 4 in Appendix E, one can find that the proposed rGBN-RNN achieves better performance with fewer parameters than comparable baselines. We note that when rGBN-RNN is used as a language model after training (which means the inferred topics $\\\\Phi$ are no longer needed), the number of parameters of the rGBN topic model component is dominated by that of the RNN language model component.\"}",
"{\"title\": \"Thank you, but still think claims need to be clarified.\", \"comment\": \"Thank you for the clarification. In the abstract of the paper and elsewhere, it says that the proposed model \\\"outperforms state-of-the-art larger-context language models\\\". I would argue that state of the art larger context language models are based on Transformers, and without a comparison this claim has not been validated. Given that claims of state-of-the-art results seem to be a major selling point for the paper, I think these claims can either be toned down (\\\"outperforms other RNN-based language models\\\"), or an empirical comparison could be performed.\"}",
"{\"title\": \"The focus is on language model\", \"comment\": \"Dear Pankaj,\\n\\nThank you for suggesting both Larochelle & Lauly (2012) and your own publication in ICLR 2019.\\n\\nAs this paper is focusing on improving a language model with a deep dynamic topic model, we think it would be unnecessary, even distracting, to evaluate topic coherence in this paper. We note there are publications, such as [3] and [4], that have evaluated the topic coherence for the topics produced by gamma belief networks (GBNs); we encourage you to check these publications for details. If we submit a paper in the future that is focused on improving a topic model with the help of a language model, we will cite your paper and add comparison, if appropriate. \\n\\n[3] He Zhao, Lan Du, Wray Buntine, and Mingyuan Zhou. \\\"Dirichlet belief networks for topic structure learning.\\\" In NeurIPS 2018.\\n\\n[4] He Zhao, Lan Du, Wray Buntine, and Mingyuan Zhou. \\\"Inter and intra topic structure learning with word embeddings.\\\" In International Conference on Machine Learning, pp. 5887-5896. 2018.\"}",
"{\"title\": \"rGBN-RNN and Transformer-XL are not directly comparable\", \"comment\": \"Thank you for bringing these two papers to our attention and suggesting Transformer-XL for comparison. While we are studying both papers carefully to see whether we can design additional experiments to provide meaningful comparisons, we have a number of clear reasons to explain why the proposed rGBN-RNN and Transformer-XL are not directly comparable:\\n\\n1) Model size: For language modeling, the model size of Transformer-XL is one or two orders of magnitude larger than that of rGBN-RNN. For example, without considering the word embedding layers, Transformer-XL 12L and 24L have 41M and 277M parameters, respectively, while the proposed rGBN-RNN with 3 stochastic hidden layers has as few as 7.3M parameters. \\n\\n2) Model construction: rGBN-RNN guides stacked-RNN, a sentence-level language model, with rGBN, a deep dynamic topic model, to construct a larger-context language model, which clearly enhances the performance of stacked-RNN. Therefore, a promising extension is to replace stacked-RNN with Transformer (which essentially consists of stacked multi-head self-attention modules), i.e., constructing a rGBN guided Transformer (rGBN-Transformer). In other words, Transformer-XL and rGBN-Transformer would be comparable. However, rGBN-Transformer is beyond the scope of the current paper and will be our future work.\\n\\n3) Interpretability: In comparison to Transformer-XL, rGBN-RNN is much more interpretable and one can clearly understand its underlying mechanism to capture short-range, middle-range, and longe-range word dependencies. For example, rGBN-RNN has the ability to learn interpretable recurrent multilayer topics from the documents, as shown in Figure 4. Besides having the ability to generate sentences from random noises, we can also generate sentences conditioning on a single topic of a certain layer, or a combination of topics from different layers. \\n\\n4) Larger-context language model: Transformer by itself is not a larger-context language model. While Transformer-XL improves Transformer to capture longer-range dependencies, it still does not respect the natural document boundary of the words. By contrast, rGBN-RNN does respect the word-sentence-document structure, using the deep dynamic topic model to guide the language model to capture not only the short-range local word dependencies, but also both the sequential dependencies between the document contexts of sentences, and the long-range document-level word dependencies.\"}",
"{\"title\": \"How does this compare to state-of-the-art Transformer baselines?\", \"comment\": \"This paper looks quite interesting, but recent state-of-the-art results in language modeling and generation are largely based on Transformer-based models [1,2]. However, any comparison or even mention of these models seems to be conspicuously missing from this paper. I wonder: have the authors compared with any models? My suspicion is that these models are already able to capture topic to some extent, and may obviate the need for the methods proposed in this paper (but I would be happy to be proven wrong).\\n\\nIf not, but the authors would be interested in performing a comparison, I would suggest Transformer-XL [2], which as a model that is specifically designed to be able to capture long-distance context.\\n\\n[1] Radford, Alec, et al. \\\"Improving language understanding by generative pre-training.\\\" Preprint (2018).\\n[2] Dai, Zihang, et al. \\\"Transformer-XL: Attentive language models beyond a fixed-length context.\\\" ACL (2019).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a method for natural language generation, using a language model, informed by a topic model.\\nThe topic model is a hierarchical recurrent topic model that attempts to extract document-level word concurrence patterns and topic weight vectors for sentences. \\nThe language model is a stacked RNN model, aiming to capture word sequential dependencies. \\n\\nThe proposed method is a combination of two existing methods, i.e. gamma-belief networks and stacked RNN, where the stacked RNN is improved with the information from recurrent gamma belief network. \\n\\nOverall, this is a well written paper, clearly presented, with certain novelties. The method is well formulated mathematically and evaluated experimentally. The results look interesting especially for capturing the long-range dependencies, as shown by the BLEU scores. One suggestion is that the authors didn't include computational analysis about the complexity and loads of the proposed method as compared with the baseline methods.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes deep recurrent topic model guided language modeling using a stacked RNN, and uses a novel variational recurrent inference network to learn the parameters. The proposed model can capture the dependence across the sentences in language generation though the recurrent latent topics. Moreover, the deep rGBN architecture provides Gamma distributed topic topic weight vectors which can be associated with every layer of the stacked RNN generating the sentence. The parameters of both the hierarchical recurrent topic model and language model are learnt using a hybrid inference algorithm combining variational inference to estimate language model and inference network parameters and MCMC to infer rGBN parameters. The effectiveness of the proposed model on the language modeling task is demonstrated on 3 datasets using Perplexity and BLEU score. The paper also provides a visual representation of the topics and their temporal trajectories. \\n\\nThe proposed model extends previous approaches on topic guided language modeling by using deep rGBN model. Though the novelty of the model is limited, learning and inference with the proposed model is non-trivial. Further, the paper show an improvement in performance on language modeling using the proposed approach over SOTA approaches, demonstrating the significance of the proposed approach. \\n\\nThough the paper is relatively well written, it would have been good to explain some points on architecture and inference. It would have been better to provide the rationale behind some architectural decisions like associating \\\\theta^1 and g^1 as against g^3. Related to this, Figure 1 has a typo where \\\\theta^2 is associated with g^3. An explanation on combining all the latent representation in the RNN model used for language modeling will be helpful, though this is motivated by previous approaches. A proper explanation the TLASGR-MCMC approach for sampling from the posterior of rGBN parameters is missing in the main paper. It would be good to provide some details of this in the main paper. \\n\\nExperimental section compares the proposed approach against many SOTA approaches for the language modeling task. It would have been good to provide a quantitive evaluation of the topic modeling task also in addition to demonstrating them qualitatively.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents rGBN-RNN, a model that integrates a hierarchical recurrent topic model with an RNN-based language model in order to incorporate global semantic information and improve capturing of inter-sentence relations. The proposed model improves in perplexity across the three tested datasets over state of the art models of comparable type, and follow-up analyses show strong performance in sentence and paragraph generation, as well as learning of sensible hierarchical topics.\\n\\nOverall I think this is a clearly-written paper with a well-motivated and interesting model, strong results, and a good range of follow-up analyses. I think that it is a solid paper to accept for publication.\", \"some_areas_for_improvement\": \"It seems strange not to mention all of the recent high-profile work on LM-based pre-training, since my impression is that these models operate effectively with large multi-sentence contexts. Do models like BERT and GPT-2 fail to take into account inter-sentence relations, as the paper claims most LMs do? I would like to see more discussion of how this work fits with that.\\n\\nI don't know that it makes sense to highlight as the contribution of this model that it can \\\"simultaneously capture syntax and semantics\\\". It's not clear to me that other language models fail to capture semantics (keeping in mind that semantics applies within a sentence and not just at a global level) -- rather, it seems that the strength of this model is in capturing semantic relations above the sentence level. If this is correct, that should be expressed more precisely.\\n\\nIt's not clear to me what we learn from Figure 3. The claim is that \\\"the color of the hidden states of the stacked RNN based language model at layer 1 changes quickly ... because lower layers are in charge of learning short-term dependencies\\\", but looking at the higher layers I'm not seeing clear evidence of capturing of long-distance dependencies, or even clear capturing of syntactic constituents. The takeaways from that figure should be made clearer and should be sure to correspond to what we can actually confidently conclude from that analysis.\"}",
"{\"comment\": \"Missing references:\\n[1] Hugo Larochelle and Stanislas Lauly. A neural autoregressive topic model. In NIPS 2012.\\n[2] Pankaj Gupta, Yatin Chaudhary, Florian Buettner, Hinrich Sch\\u00fctze. textTOvec: Deep Contextualized Neural Autoregressive Topic Models of Language with Distributed Compositional Prior. In ICLR 2019.\\n\\n- include the neural network based topic models [1] in introduction \\n- As mentioned in introduction section that traditional topic models ignore word ordering. The recent work [2] addresses the issue by introducing word ordering in topic models via composite modeling of a topic model and an LSTM based language model to deal with BoW issues in topic modeling. \\n- While this work focuses on improving LMs using topics, I would appreciate if you could also show quantitative results on topic modeling portion, such as topic coherence similar to Topic-RNN, TCNLM, etc.\", \"title\": \"References and Additional topic modeling evaluation\"}"
]
} |
BJgkbyHKDS | Invertible generative models for inverse problems: mitigating representation error and dataset bias | [
"Muhammad Asim",
"Ali Ahmed",
"Paul Hand"
] | Trained generative models have shown remarkable performance as priors for inverse problems in imaging. For example, Generative Adversarial Network priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because of architectural choices, mode collapse, and bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors at inverse problems such as denoising, compressive sensing, and inpainting. Our formulation is an empirical risk minimization that does not directly optimize the likelihood of images, as one would expect. Instead we optimize the likelihood of the latent representation of images as a proxy, as this is empirically easier.
For compressive sensing, our formulation can yield higher accuracy than sparsity priors across almost all undersampling ratios. For the same accuracy on test images, it can use 10-20x fewer measurements. We demonstrate that invertible priors can yield better reconstructions than sparsity priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images. | [
"Invertible generative models",
"inverse problems",
"generative prior",
"Glow",
"compressed sensing",
"denoising",
"inpainting."
] | Reject | https://openreview.net/pdf?id=BJgkbyHKDS | https://openreview.net/forum?id=BJgkbyHKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7C_BgXicy",
"HyxURBHiiH",
"rkgAISBisB",
"rygi7BBjjS",
"HylG0NrosH",
"SJeomuT_cB",
"Bye3icUIqB",
"rkxWorWy5H",
"SJeePN_AFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725668,
1573766606453,
1573766485736,
1573766435034,
1573766346165,
1572554786832,
1572395683856,
1571915160985,
1571877976041
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1528/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1528/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1528/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1528/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1528/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1528/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1528/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1528/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies the empirical performance of invertible generative models for compressive sensing, denoising and in painting. One issue in using generative models in this area has been that they hit an error floor in reconstruction due to model collapse etc i.e. one can not achieve zero error in reconstruction. The reviewers raised some concerns about novelty of the approach and thoroughness of the empirical studies. The authors response suggests that they are not claiming novelty w.r.t. to the approach but rather their use in compressive techniques. My own understanding is that this error floor is a major problem and removing its effect is a good contribution even without any novelty in the techniques. However, I do agree that a more thorough empirical study would be more convincing. While I can not recommend acceptance given the scores I do think this paper has potential and recommend the authors to resubmit to a future venue after a through revision.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comparison to Unlearned Methods and Other Clarifications\", \"comment\": \"Thank you for your thorough reading of the paper. We agree that a comparison to Deep Image Prior is quite interesting. This has actually been done in another paper submitted to this conference [https://openreview.net/pdf?id=rkegcC4YvS]. Technically, they compare the DCGAN to the Deep Decoder, which is an underparameterized Deep Image Prior with simpler architecture and comparable performance. Figure 1 (lower panels) in that paper demonstrate that the Deep Decoder underperforms the DCGAN when there are few enough measurements (m<500 in the same problem size as our experiments). Also that figure shows the Deep Decoder gives consistently lower PSNRs than we report for INNs across the entire range of undersampling ratios. In the case of significantly under sampled measurements, the Deep Decoder is over 5 dB worse in PSNR. We will add a remark to this effect in the camera ready, if accepted.\\n\\nIt is a great suggestion to determine if out-of-distribution performance is due to invertibility or log-likelihood optimization. A good argument can be made on both sides. We are attempting to do this in time for the camera-ready. Getting adversarial training to converge is challenging, but we will try and will add a remark to the camera ready.\\n\\n\\nWe have clarified the comment about the sublevel sets. The remark was intended to say that because of invertibility of the model G, there are no local minima (aside from global minima) of the data misfit term in z-space (||A G(z) - y||^2). This ensures that the latent optimization we propose has a favorable landscape for convergence.\"}",
"{\"title\": \"Strong empirical results under a principled new framework for inversion\", \"comment\": \"Thank you for the thoughtful comments.\", \"novelty_of_the_paper\": \"The primary novelty of this paper is the proof of concept that invertible neural networks (INNs), out of the box, are surprisingly effective image priors for inverse problems, especially on out-of-distribution images. This behavior was not known before this paper and can not be found in the previous literature either on invertible neural networks or on the literature in signal recovery. There was one paper that uses INNs to directly learn a specific forward map, returning the inverse for free, but this method would need to be retrained for every variation of every inverse problem. As a result of our paper, a practitioner who is building an image prior for a given distribution class aught to carefully consider the option of training an invertible net on their desired signal class. (Naturally, they should also consider other methods too in order to see what works best for their problem). Without this work, it is likely one might not think to give INNs a try because they are a substantially different architecture than everything else in the literature. Other novelties of the paper are: in denoising, we introduce a formulation that directly optimizes image likelihood (demonstrating the strength of INNs in density estimation), and in compressed sensing we introduce formulation (2), which surprisingly works best with no direct likelihood penalization (gamma=0).\", \"comparison_to_other_methods\": \"The purpose of this paper is to point out the promise of an entirely different framework for generative image priors for inverse problems. We specifically did not put bells and whistles on the Invertible Neural Networks we trained because we wanted to show how much better the out-of-the-box performance of INNs was compared to GANs. Much followup work has happened with GANs in an attempt to lower their representation error. These include Image Adaptive GANs and Latent Convolutional Models, which both use ideas from untrained neural networks (such as optimizing the weights of a neural network at inversion time). Similar ideas could also be used for our Invertible Neural Network models, which will similarly make their level of performance even greater.\", \"clarity_of_paper\": \"If accepted, at the camera ready, we will clarify any sentences that the reviewer finds unclear. We will also seek additional eyes before the camera ready in order to identify which sentences need additional clarity.\"}",
"{\"title\": \"Strong empirical results under a principled new framework for inversion\", \"comment\": \"Thank you for the thoughtful comments.\", \"worthiness_of_being_a_full_paper\": \"We argue that this work is worthy of being a full paper because it shows impressive empirical results for a principled signal recovery paradigm that address the central challenge facing generative models as image prior. That central challenge is representation error, and our principled solution is invertible neural networks which can be trained by directly optimizing likelihood of test images. The out-of-distribution performance we report is particularly important: if one were to train a GAN for MRI images, it will be impossible to ensure all possible pathologies are in the training set, and thus images must generalize beyond their training data, which invertible nets do. Given the substantial costs of using INNs, this paper provides a significant contribution to the field by demonstrating feasibility of an approach that might initially appear unlikely to work.\"}",
"{\"title\": \"Thoughts on Representation Error and Other Metrics\", \"comment\": \"Thank you for the careful reading of the paper. We agree that it is indeed quite interesting to see where are the sources of error in the DCGAN prior. We did not include this in the paper because it is already included in the Bora et al. paper. In Section 6.3 of their paper, they show that the dominant source of error is representation error (as opposed to measurement error or optimization error). We have added a remark in Section 3.2 to this effect in the paper so that other readers can be aware of this observation.\\n\\nIf the paper is accepted, in the camera ready supplemental material, we intend to include plots of recovery performance as measured by SSIM for the denoising and compressed sensing problems. We already have presented the results in the MSE metric in the supplemental materials.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper investigates the performance of invertible generative models for solving inverse problems. They argue that their most significant benefit over GAN priors is the lack of representation error that (1) enables invertible models to perform well on out-of-distribution data and (2) results in a model that does not saturate with increased number of measurements (as observed with GANs). They use a pre-trained Glow invertible network for the generator and solve a proxy for the maximum likelihood formulation of the problem, where the likelihood of an image is replaced by the likelihood of its latent representation. They demonstrate results on problems such as denoising, inpainting and compressed sensing. In all these applications, the invertible network consistently outperforms DCGAN across all noise levels/number of measurements. Furthermore, they demonstrate visually reasonable results on natural images significantly different from those in the training dataset.\\n\\nThe idea of using invertible networks for estimating a specific forward process is not new, as the authors also pointed out. The contribution of this paper is that they use a pre-trained invertible model as a prior in various tasks not known in training time and support their technique with experimental results and therefore I would recommend accepting this paper. \\n\\nSince one of the main arguments in the paper is how the lack of representation error benefits the Glow prior compared to DCGAN prior, it would be interesting to see the representation error quantitatively for the DCGAN results and how it contributes to the total error. Moreover, demonstrating the comparison results in other metrics than PSNR (MSE, SSIM) would be interesting and more comprehensive.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Authors extend the invertible generative model of Kingma et. al. to image inverse problems. Specifically, they use Generator trained within the Glow framework as an image prior for de-noising, inpainting and compressed sensing tasks.\\nDuring training, a heuristic adjustment to the objective is made allowing optimization of latent variable norm instead of image log likelihood. This seemed critical for convergence of image inverse tasks. The use of Glow prior was shown to be beneficial for all inverse tasks. Experiments were limited to face images from celebA database. While the proposal demonstrates improved empirical performance, it seems to be the only contribution of this paper. Taking an existing model and applying it to a problem where similar extensions have been tried (GAN etc) does not seem quite worthy of a full paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes to employ the likelihood of the latent representation of images as the optimization target in the Glow (Kingma and Dhariwal, 2018) framework. The authors argue that to optimize the ''proxy for image likelihood'' has two advantages: First, the landscapes of the surface are more smooth; Second, a latent sample point in the regions that have a low likelihood is able to generate desired outcomes. In the experimental analysis, the authors compare their proposed method with several baselines and show prior performance.\\n\\nThis paper has three major flaws and should be clearly rejected. \\nFirst, the novelty of this paper is trivial, in my opinion, the Eq. 2 is the only contribution of this paper.\\nSecond, the experimental results are not convincing, almost all the methods proposed after 2015 have better performance compared to these baseline methods.\\nThird, there are a lot of claims in this paper have been made without clarification, I have huge troubles in understanding certain sentences.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Update: I have read the other reviews and the author response and have not changed my evaluation.\\n\\nRecent work has shown that GANs can be effective for use as priors in inverse problems for images such as compressed sensing, denoising, and inpainting. A drawback is that GANs may have the problem of inexact reconstruction, and strongly reflect the biases in the training set yielding poor performance on out-of-distribution data. This paper shows that the exact inverses available to normalizing flow models and their broad assignment of likelihoods allows for better reconstructions especially on out-of-domain data.\\n\\nAs far as I know, this is the first work to use normalizing flows for inpainting and compressed sensing. The approach and application is very natural, although it\\u2019s a bit surprising that using the likelihood as a prior term directly did not work very well. The results of this work show that invertible generative models have utility for inverse image problems even when the quality of raw samples is substantially below GANs. In my opinion the main advantage in this method is not on having low reconstruction error on observed pixels, which becomes less of a problem for more powerful GAN models, but rather the good performance on out of domain data which is somewhat surprising. The authors are reasonably thorough, testing their model on a variety of problem settings and perform ablation studies on hyperparameters.\\n\\nAs additional baselines for compressed sensing and denoising, it would be good to compare to the Deep Image Prior since there is effectively no out-of-distribution input for this untrained model and it performs well with moderate image corruption. Additional discussion about the two could be useful, as for the Deep Image Prior a similar patter is observed where denoising requires explicit regularization (early stopping or gradient noise for DIP) but image completion and compressed sensing do not. Also, there have been many improvements to DCGAN over the years that might ameliorate the problems that were observed in reconstruction, but I don\\u2019t fault the authors much for this as it can be difficult training models like StyleGAN even at 64x64 sizes.\\n\\nIt might also be interesting to know whether the good performance on out-of-distribution inputs is due to the exact invertibility or the log-likelihood objective, although I would guess that it is the latter. On way to test this would be training the GLOW model with an adversarial objective instead of NLL as done in [1].\", \"minor_comments\": \"Figure 4 would probably be better with a logarithmic scaling for # of measurements\\n\\nI did not understand the comment about sublevel sets of the data misfit term being inverse images of cylinders, maybe this could use some elaboration.\\n\\n[1] https://arxiv.org/abs/1705.08868\"}"
]
} |
HJxyZkBKDr | NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search | [
"Xuanyi Dong",
"Yi Yang"
] | Neural architecture search (NAS) has achieved breakthrough success in a great number of applications in the past few years.
It could be time to take a step back and analyze the good and bad aspects in the field of NAS. A variety of algorithms search architectures under different search spaces. These searched architectures are trained using different setups, e.g., hyper-parameters, data augmentation, regularization. This raises a comparability problem when comparing the performance of various NAS algorithms. NAS-Bench-101 has shown success in alleviating this problem. In this work, we propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information. NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm. The design of our search space is inspired by the one used in the most popular cell-based search algorithms, where a cell is represented as a directed acyclic graph. Each edge here is associated with an operation selected from a predefined operation set. For it to be applicable to all NAS algorithms, the search space defined in NAS-Bench-201 includes all possible architectures generated by 4 nodes and 5 associated operation options, which results in 15,625 neural cell candidates in total. The training log using the same setup and the performance of each architecture candidate are provided for three datasets. This allows researchers to avoid unnecessary repetitive training of selected architectures and focus solely on the search algorithm itself. The training time saved for every architecture also largely improves the efficiency of most NAS algorithms and makes the NAS community friendlier, in terms of computational cost, to a broader range of researchers. We provide additional diagnostic information, such as fine-grained loss and accuracy, which can give inspiration for new designs of NAS algorithms. In further support of the proposed NAS-Bench-201, we have analyzed it from many aspects and benchmarked 10 recent NAS algorithms, verifying its applicability. | [
"Neural Architecture Search",
"AutoML",
"Benchmark"
] | Accept (Spotlight) | https://openreview.net/pdf?id=HJxyZkBKDr | https://openreview.net/forum?id=HJxyZkBKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"N9Lwfi06O",
"CJkFRlaAj",
"aD3dFKr1n",
"czU1S_9o0v",
"4Els73P1kQ",
"SJxnpyihoH",
"Hyx3zwWhjS",
"HJew4IW3iB",
"S1lTBO3joB",
"BkguOjcsjH",
"SyePFIlijB",
"S1xT6pIciS",
"SkeS0qLcjr",
"rkxaXqr9iS",
"HkgKKgwLoS",
"S1etfqUEiH",
"H1l32DUEoH",
"Bkxx6rUNsr",
"Bklfj97VjB",
"SJg7AweAtr",
"SkgxgbqpKr",
"SyemyjsPKr"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1579651169894,
1579647979368,
1578805763120,
1578792235682,
1576798725640,
1573855171985,
1573816084451,
1573815855044,
1573795908864,
1573788528036,
1573746302741,
1573707205337,
1573706445022,
1573702181442,
1573445761329,
1573313040946,
1573312435751,
1573311927527,
1573300889580,
1571846091214,
1571819752213,
1571433179050
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"~Philipp_Jamscikov1"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"~Chris_Ying2"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/Area_Chair2"
],
[
"ICLR.cc/2020/Conference/Paper1527/Area_Chair2"
],
[
"ICLR.cc/2020/Conference/Paper1527/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1527/Area_Chair2"
],
[
"ICLR.cc/2020/Conference/Paper1527/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1527/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1527/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"The usage of API\", \"comment\": \"Thanks for your suggestions.\", \"i_should_write_the_readme_in_more_detail_and_i_will_update_the_readme_according_to_your_suggestion\": \") It is welcome to open an issue on Github, usually, I will reply in 24 hours.\\n\\nIn short, please try the get_more_info function in the API and you won't care about \\\"ori-\\\" or \\\"x-\\\"..\\nIt takes args (index, dataset, iepoch=None, use_12epochs_result=False, is_random=True).\\nThe index is the architecture index from 0-15624.\\nThe dataset indicates the name of dataset, 'cifar10-valid' indicates training on the train set of CIFAR-10.\\n'cifar10' indicates training on the train+valid set of CIFAR-10.\\n'cifar100' indicates training on the training set of CIFAR-100.\\n'ImageNet-16-120' indicates training on the training set of ImageNet-16-120.\\nIt will return a dict with the key of 'train-loss' / 'train-accuracy' / 'train-per-time' (per-epoch-time) / 'train-all-time' / 'valid-loss'.\\n\\nNoticed that, sometimes, the value of a key might be None or the key is not available. In this case, for some of them, we can find an alternative way to calculate it, but for some of them, unfortunately, it is not available at the moment. For details, would you mind to open a GitHub issue? The NAS-Bench-201 goes through several changes, and I need to merge the trained data of different internal benchmark versions, which can yield some confusing names. Sorry for the confusion.\"}",
"{\"title\": \"Description and Naming of Train/Val/Test Splits in Paper and Source Code\", \"comment\": \"Dear authors,\\n\\nI am a graduate student relatively new to the field and would have preferably opened an issue on Github, which was not possible at the point in time. As arguably any NAS/AutoMLBench targets persons with a similar non-expert background, I thought you may benefit from this user-oriented review.\\n\\nFor datasets other than CIFAR-10, I find it difficult to link the train/val/test splits described in the paper to the corresponding numbers being reported in the source code (NAS-Bench-201-v1_0-e61699.pth).\\n\\nConceptually, for every random seed, architecture, and dataset, I would expect most importantly something similar to this being reported:\\n- training performance for every of 200 epochs \\n- performance on the validation set for every of 200 epochs - supervision signal during training\\n- performance on unseen test data: select model configuration with lowest validation loss and predict (once and at the very end) the test set\\n\\nAs explained in the paper, one could expect these result for all datasets expect for one case of CIFAR-10, where it is being trained on train+val, and with the default test set serving as a supervision signal. \\n\\nWhen looking in the source code, I generally would proceed as follows: create an ArchResults object, query it for a specific dataset and seed, and look at the \\u201ceval_acc1es\\u201d dictionary which contains the validation performance for 200 epochs and as a last entry the test performance on the test set. \\n\\nE.g., for \\u2019CIFAR-10-valid\\u2019 (which is the not-so-obvious identifier for dataset for CIFAR-10 being trained on train only) we get a performance for \\u2019x-valid@0' - 'x-valid@199\\u2019, and a single more entry in the diction 'ori-test@199\\u2019. This is what I would have expected and corresponds to what I described above.\\n\\nHowever, for the datasets CIFAR-100 and ImageNet16-120, I struggle to understand the corresponding \\u201ceval_acc1es\\u201d dictionary.\\nIn both cases, we now have 200 epochs of \\u2019ori-test@...\\u2019 being reported, followed by two entries: 'x-valid@199' and \\n 'x-test@199\\u2019). \\nIn section 2.2 of the paper, you describe for CIFAR-100 that you split the original test set into a new validation set and test set, which confuses me here even more, as I would expect two and not three metrics being reported in the \\u201ceval_acc1es\\u201d dictionary. This also holds w.r.t. to ImageNet16-120 and its description in the paper.\\nSection 2.3, and especially Table 2 in the paper do not help to clarify these points. From the table I might (misleadingly?) conclude that the NASBench is run twice for ImageNet-16-120 and CIFAR-100, one time with the validation set, the other time with the test set serving as a supervision signal during training.\\n\\nI hope you may find this direct user feedback of your API somewhat useful and thank you for the effort you\\u2019ve put so far into the project.\"}",
"{\"title\": \"Agreed\", \"comment\": \"Thanks a lot for this constructive suggestion.\\nI agreed with your point and it makes sense to me. I will revise the manuscript soon.\"}",
"{\"title\": \"Naming of the benchmark\", \"comment\": \"As the authors of the NAS-Bench-101 benchmark we have discussed the naming of this follow-up benchmark with the authors of this paper and with the AC, and we have unanimously concluded that NAS-Bench-201 would be a slightly better name, as it may lead to less confusion (it is neither a subset nor a superset of NAS-Bench-101) and this also allows to use \\u201cNAS-Bench-20x\\u201d for minor updates and consistent names NAS-Bench-301, NAS-Bench-401, etc for future benchmarks.\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper presents a new benchmark for architecture search. Reviewers put this paper in the top tier. I encourage the authors to also cite https://openreview.net/forum?id=SJx9ngStPH in their final version.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"[Thanks for your constructive comments] Name change; Rephrase claims; More analysis; More experimental results.\", \"comment\": \"Many thanks for your constructive comments and suggestions.\\nEven if the replying period is short (< 10 hours to reply your comments and it was close to sleep time when we received it), we try our best to address most of your points in the paper and this response, and we promise to continue revising the rest in our paper regarding the writing and experimental results.\", \"q1_q2\": \"Mention NAS-Bench-101 in the abstract & introduction. Rephrase some claims of NAS-Bench-101.\\nR1. Thanks for this suggestion. We have revised the manuscript accordingly.\", \"q3\": \"Limitation for bandit-based algorithms.\\nR3. Agreed. We have added a paragraph to discuss this limitation (Sec 6 in Page 10). It remains an open question whether (1) training a network 10 epochs with converged cosine annealing can provide a higher correlation than (2) training a network 10 with unconverged cosine annealing or not. We promise to add experiments to compare the correlation of these two strategies in our revision.\", \"q4\": \"Suggestions for the name.\\nR4. Thanks for the suggestion and we have revised it.\", \"q5\": \"Not the first benchmark that evaluates on multiple datasets.\\nR5. We have revised the paper accordingly (Page 2).\", \"q6\": \"Emphasize that weight sharing algorithms still require several GPU hours. Include the time taken for each NAS algorithm.\\nR6. Thanks for the suggestion. We have added one paragraph and a table in Sec. 5 to emphasize this problem. We need some time to fairly compare the time cost for each NAS algorithm and promise to include it in our revision.\\n\\nQ7.1 Results of 500 runs for algorithms without weight sharing.\\nR7.1. Thanks for the suggestion. We have included new results with 500 runs for REA/REINFORCE/RANDOM/BOHB in Table 5. We have drawn a new figure to show all the results of these 500 runs in Figure 6. We will re-arrange the latex layout later. \\n\\nQ7.2 Plot performance as a function of time.\\nR7.2 Thanks for the suggestion. We are modifying the codes and running the suggested experiments. We promise to include these results in our revision.\\n\\nQ7.3 Using time budget instead of \\\"number of networks\\\".\\nR7.3. Nice suggestion. We promise to revise the experiments (Table 5 and Figure 6) for REA/REINFORCE/BOHB/RANDOM using the time budge.\", \"q8\": \"Using #unique architectures in Table 3.\\nR8. Thanks for the suggestion, and we have revised Table 3.\"}",
"{\"title\": \"Area Chair Comments 2/2\", \"comment\": \"6. Weight sharing algorithms will actually *not* be super cheap to evaluate, even with your benchmark; you couldn't just do this quickly on a laptop (as one of the reviewers thought). The only part of the computation that can be saved is that for their final evaluation step, but the search phase still needs to be carried out manually, and this also takes several GPU hours per run. I assume this is also the reason that you only report 3 runs in your Table 4, rather than the 500 (!) runs that the NAS-Bench-101 reported. Please emphasize this very clearly. Otherwise, this would be misleading, and reviewers of future NAS papers using the dataset would likely think that runs on the dataset should be super-fast and complain that authors don't carry out more runs. This is particularly important since this issue occurs precisely for the weight-sharing algorithms you want to support with this new benchmark. Please include the time taken for each of the algorithms you report (broken up into time the NAS algorithm used internally for the search phase and simulated time for results read from your results table).\\n\\n7. The comparison of NAS algorithms in Table 4 is poor for a paper presenting a new benchmark. \\n 7.1 Why did you only perform 3 runs? The whole point is that your benchmark should be cheap to evaluate and not require a lot of compute. The NAS-Bench-101 paper reported 500 runs per algorithm. At least for the algorithms that only query the table and don't have to train weight-sharing models, please carry out more runs and report statistics.\\n 7.2 Why do you not plot performance as a function of time (simulated time, pretending you actually evaluated the architectures being queried)? The NAS-Bench-101 paper already did this (see Figure 7 (left) there), and you already have the table to evaluate all the found architectures in zero time, so only reporting final performance of the methods is a big step back. Also see the recent checklist for best practices in NAS evaluation, best practice 7: https://arxiv.org/abs/1909.02453\\n 7.3 Even worse, the methods apparently did not receive the same amount of time (sum of actual compute time used + simulated time for querying the table), so you are comparing apples and oranges. That is not OK for a paper proposing a benchmark to the community (everybody else would follow suit and do this wrongly; this would be a step back rather than a step forwards for empirical evaluations in the community). If you really want to put results into a table (rather than in a Figure like suggested above in 7.2), then you should ensure that all methods were allowed the same amount of time (also see best practice 13 in the checklist above).\\n 7.4 For bandit-based algorithms, if you evaluate with a lower budget (e.g., 10 epochs rather than 100), you should only count the simulated time as a fraction of the time stored in your table (in the example 10/100 of that time). It is not the number of table queries that counts, but the sum of the simulated time for the entries queried.\\n \\n8. A minor point: in Table 3, the #architectures of NAS-Bench is 423k after treating isomorphisms, but you include isomorphic graphs in your 15k count; so this should be 12751, or the number for NAS-Bench-101 should be changed.\\n\\n9. 
Having made all of the points above, I would like to emphasize that new benchmarks are dearly needed for NAS; this was, for example, also identified as one of the most urgent action items in the panel of the AutoML workshop at ICML 2019. I therefore strongly welcome work on this problem. My comments above are critical, and, if left unaddressed, may lead to some reviewers reconsidering their assessment of the paper, but I very much hope that you will respond to this message to alleviate worries about these points and allow us to accept your paper. Due to the short time left in the rebuttal period, I hope that you can adapt the easy-to-address points in the paper, and for the others provide a reply and promise according adaptations in the paper.\\n\\nBest,\\nYour Area Chair\"}",
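The simulated-time accounting requested in points 7.2-7.4 can be sketched as follows; the table layout, function names, and budget handling are illustrative assumptions rather than the benchmark's actual API:

```python
# Charge every table lookup its stored training time (pro-rated for
# reduced budgets) and track the incumbent, so that accuracy-vs-simulated-
# time curves can be plotted and all methods share one time budget.
import random

def query(table, arch, epochs, full_epochs=100):
    """table: arch -> (per-epoch validation accuracies, full train time in s)."""
    acc_by_epoch, full_time = table[arch]
    return acc_by_epoch[epochs - 1], full_time * epochs / full_epochs

def random_search_with_budget(table, budget_s, full_epochs=100, seed=0):
    rng = random.Random(seed)
    elapsed, incumbent, curve = 0.0, float("-inf"), []
    while elapsed < budget_s:
        arch = rng.choice(sorted(table))
        acc, cost = query(table, arch, epochs=full_epochs)
        elapsed += cost                      # simulated, not wall-clock, time
        incumbent = max(incumbent, acc)
        curve.append((elapsed, incumbent))   # points for an accuracy-vs-time plot
    return incumbent, curve
```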
"{\"title\": \"Area Chair Comments 1/2\", \"comment\": \"Dear authors,\\n\\nI just read this paper myself, and I have several remarks to which I would like to give you an opportunity to reply. \\n(I know time in the rebuttal period is short now, and I don't expect a full reply, but it's still better for you this way than if I only raise these points in the private discussion with the reviewers after the rebuttal period is over.)\\n\\n1. I would like to bring to your attention that there is another parallel ICLR submission called NAS-Bench-1Shot1 (https://openreview.net/forum?id=SJx9ngStPH) that actually shows how to use the original NAS-Bench-101 to benchmark weight sharing algorithms. They reformulate NAS-Bench-101 by mapping nodes to edges and extract 3 different subspaces with 6k, 29k, and 360k architectures, respectively, from the NAS-Bench-101 space that are directly compatible with modern weight-sharing methods, such as ENAS and DARTS.\\nOf course, this parallel work does not reduce the novelty of yours. However, in light of it, I would like to ask you to rephrase some of your claims about the impossibility of using NAS-Bench-101 for weight-sharing algorithms; it *can* be applied, but it cannot be applied *directly* (the latter was also the wording in the NAS-Bench-101 paper). \\n\\n2. The paper follows the NAS-Bench-101 paper in motivation and in most design decisions, and makes some (useful and well-taken, but technically incremental) modifications. \\nNevertheless, the introduction does not even mention NAS-Bench-101, but rather leaves the unknowing reader with the impression that this is the first paper introducing such a NAS benchmark.\\nIt even states \\\"The AA-NAS-Bench has shown its value in the field of NAS research\\\", which is incorrect; I do believe that it *will* show its value, but so far the only benchmark that *has* shown \\nits value is NAS-Bench-101 (which indeed has been used by many groups).\\nI therefore believe that it would be much preferable to state clearly in the introduction that NAS-Bench-101 already exists, and list the useful extensions made here (e.g., in bullet point form):\\n- operations in the edges to enable weight sharing methods\\n- multiple datasets\\n- extra statistics\\nLikewise, abstract & conclusion should point out what's new compared to NAS-Bench-101. Section 3 is too late to first mention NAS-Bench-101.\\n\\n3. A minor point: NAS-Bench-101 also performed runs with a shorter training budget in order to obtain networks for which cosine annealing had converged; the performance of these converged networks is much more likely to correlate highly with the performance after a larger number of iterations than just taking an earlier point of a single cosine annealing trajectory. Since these high correlations are required for bandit-based algorithms (such as Hyperband and BOHB) to perform well, your benchmark may be argued to not support these well and thus be *less* agnostic to the choice of algorithms than the original NAS-Bench-101.\\n\\n4. Concerning the name, I believe the points made by AnonReviewer3 and the points above clearly speak against \\\"algorithm-agnostic\\\" as part of the name. \\nSince the benchmark is very similar to NAS-Bench-101, with some things changed, I believe a very natural name would be NAS-Bench-102. (We should reserve NAS-Bench-201, etc for much larger search spaces.) A corresponding paper title could be \\\"NAS-Bench-102: Extending the Scope of Reproducible Neural Architecture Search\\\" or alike.\\n\\n5. 
A minor point: technically, this benchmark is not the first that evaluates on multiple datasets. There are the NAS-HPO-Bench Datasets, which the NAS-Bench-101 used to decide about the strategy for fixing hyperparameters. These NAS-HPO-Bench Datasets (https://arxiv.org/abs/1905.04970) evaluate 62208 configurations in the joint NAS+HPO space of a simple feed-forward network, each of them on 3 datasets. So, technically this has been done before, but the architectures in that paper are so simple that this does not take away from the novelty of this contribution of the paper.\"}",
"{\"title\": \"Acknowledged!\", \"comment\": \"Thanks for the links to code and data.\"}",
"{\"title\": \"Naming\", \"comment\": \"Thanks for your comments.\\nAs replied in Q1 above, since (1) non-trivial modifications are required to evaluate all NAS algorithms on NAS-Bench-101 and (2) a subset of NASBench-101 which includes all possible architectures have 4 or fewer nodes, which sum to only less than 60 unique models, we believe NAS-Bench-101 is not qualified as an algorithm-agnostic NAS benchmark. This limitation has already been mentioned in the original paper: \\\"NAS algorithms based on weight sharing (Pham et al., 2018; Liu et al., 2018b) or network morphisms (Cai et al., 2018; Elsken et al., 2018) cannot be directly evaluated on the dataset, so we did not include them\\\". This is the main motivation for our benchmark.\\nWe welcome the discussion about naming. Do you have some candidates for the name of our benchmark?\"}",
"{\"title\": \"Rebuttal resolves the majority of my concerns\", \"comment\": \"Author responses address most of my concerns, except they did not respond to the naming of their dataset.\"}",
"{\"title\": \"Update the manuscript\", \"comment\": \"We thank the AC and all reviewers for their constructive comments. We have updated a revised version of the paper and would like to highlight the changes as follows:\\n\\n1. We add publicly available codes for reproducing the proposed AA NAS benchmark and all 10 NAS algorithms.\\n\\n2. We add instructions on how to use our API in the appendix.\\n\\n3. We add more discussion w.r.t. the applicability of NAS-Bench-101.\"}",
"{\"title\": \"Upload codes and data\", \"comment\": \"Thanks for your recognition. We just uploaded all codes and data to the anonymous links as follows:\\n\\n1. Codes at https://github.com/D-X-Y/NAS-Projects include\\n- instruction on how to re-generate our dataset\\n- usages of 10 re-implemented NAS algorithms\\n- instruction on how to use our API\\n\\n2. The data for API is at https://drive.google.com/file/d/1SKW0Cu0u8-gb18zDpaAGi0f74UdXeGKs/view\"}",
"{\"title\": \"Full Code and Data Release\", \"comment\": \"Dear AC,\\n\\nThanks for your comment. We have uploaded the codes and data to anonymous links as follows:\\n\\n1. Codes at https://anonymous.4open.science/repository/9aa95a13-7e6a-48ed-9c77-4ac9111f7ae9/README.md include\\n- instruction on how to re-generate our dataset\\n- usages of 10 re-implemented NAS algorithms\\n- instruction on how to use our API\\n\\n2. The dataset is at https://drive.google.com/open?id=1qEsEiGnr4HhOoU_2s_z4zRHye7C5LkTi\\n\\nBest regards,\\nAuthors of AA-NAS-Benchmark\"}",
"{\"title\": \"Agreed!\", \"comment\": \"Thanks for the confirmation! Looking forward to the full benchmark.\"}",
"{\"title\": \"More comparison with NASBench-101; Clarification on our benchmark.\", \"comment\": \"Thank you for your constructive and detailed review. We have updated the paper according to your comments and suggestions. Detailed responses are shown in a point-to-point manner below.\\n\\nQ1. More detailed discussion and comparison with NASBench-101.\\nR1. We agree with the reviewer\\u2019s statement: with some modification to both NASBench-101 (a reduced one) and NAS algorithms, most algorithms could also be evaluated on the modified NASBench-101. To the best of our knowledge, such modification is non-trivial and might be beyond the scope of the original NASBench-101 paper. Also, the modifications might need extra tedious effort, which is no longer convenient to use and against the main motivation of the benchmark.\\n\\nA subset of NASBench-101 with all possible architectures included needs to have 4 or fewer nodes, which sum to only less than 500 architectures. This is because a complete DAG with n nodes has n*(n-1)/2 edges and NASBench-101 limits the maximum number of edges to 9, therefore, the number of nodes (n) should be <= 4.\\n\\nQ2. Why using average pooling instead of max pooling?\\nR2. It is inspired by the typical architectures, such as ResNet and ResNeXt (https://github.com/facebookresearch/ResNeXt/blob/master/models/resnext.lua#L38), which use average pooling in their residual blocks.\\n\\nQ3. How do you compute the total architecture number 15,625 in Table 3?\\nR3. There are 6 edges when we use the number of node V=4. Each edge has 5 possible operations. Therefore, the total number is 5^6 = 15625.\\n\\nQ4. Are there any topologically equal architectures in this space? Is the actual number of architectures smaller?\\nR4. Yes, the number of unique architectures is 12751.\\n\\nQ5. How about early stopping on DARTS to avoid finding the architecture with all skip connection.\\nR5. We follow the original training strategy in the DARTS paper. Even if the early stopping may improve the performance, it is not the focus of this paper.\\n\\nQ6. Add ENAS as a baseline NAS algorithm.\\nR6. Thanks for this suggestion. We have included ENAS in Table 4.\\n\\nQ7. Clarify the \\u201coptimal\\u201d column in Table 4.\\nR7. We average the accuracy results of all trials for each architecture. The \\u201coptimal\\u201d means the highest mean accuracy. We have revised Table 4 to clarify it.\\n\\nQ8. Could the author provide another visualization, showing when stabilization happens in between epoch number 150 and 190? \\nR8. We have made a video to show the ranking over training epochs. Please see the video at https://drive.google.com/open?id=1rp58l5FM-3Q-S7tPSYX003BVekWkj8X7\\n\\nQ9. Figure 4, correlation matrix for top 4743 architectures are significantly lower than the full and 1387 ones, is this possible because of repetitive architectures in the space are not pruned? And, what is the reason for the number 4743 and 1387?\\nR9. Thanks for pointing out this problem. Figure 4b and 4c should be exchanged. We have updated it. The number of architectures is derived by the number of top architectures with accuracy > 92% (4743) and 93% (1387).\\n\\nQ10. ResNet (star in Figure 2) seems to perform very well. Does this indicate the proposed search space is not much meaningful, considering there are only 1~2% for NAS to improve?\\nR10. This is also true for NAS-Bench-101: ResNet is competitive and most NAS algorithms just find a worse architecture than ResNet. 
As shown in Table 4, the best architecture found by the 10 NAS algorithms is still far from the best architecture in the search space (> 1% on CIFAR-10 and > 3% on CIFAR-100 and ImageNet-16-120). The stability of the NAS algorithms should also be considered, since the evolution-based and RL-based methods usually suffer from high variance. In Section 6, we included rules for using the benchmark that avoid boosting performance using priors, e.g., hard-coded rules.\"}",
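To make the counting in Q3/Q4 concrete, here is a rough, self-contained sketch (our construction, not the benchmark's own isomorphism test): each op type is modelled as a fixed random linear map, so cells are bucketed by the input-output function they compute, which collapses, e.g., two summed parallel paths with swapped convolutions into one class. The resulting class count need not match the paper's 12751, since the benchmark may use a different equivalence definition:

```python
import itertools
import numpy as np

OPS = ["zeroize", "skip", "conv1x1", "conv3x3", "avgpool3x3"]
EDGES = [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]  # dense 4-node DAG
D = 8
rng = np.random.default_rng(0)
MAPS = {"zeroize": np.zeros((D, D)), "skip": np.eye(D)}
for op in ("conv1x1", "conv3x3", "avgpool3x3"):
    MAPS[op] = rng.standard_normal((D, D))  # one fixed random map per op type

def cell_output(assignment, x):
    node = {1: x}
    for tgt in (2, 3, 4):  # incoming edge outputs are summed at each node
        node[tgt] = sum(MAPS[assignment[(src, tgt)]] @ node[src]
                        for src in range(1, tgt))
    return node[4]

probe = rng.standard_normal(D)
classes = {tuple(np.round(cell_output(dict(zip(EDGES, ops)), probe), 6))
           for ops in itertools.product(OPS, repeat=len(EDGES))}
print(5 ** len(EDGES), len(classes))  # 15625 raw cells vs. functional classes
```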
"{\"title\": \"Minor English Writing Problem\", \"comment\": \"We appreciate your recognition of our paper and valuable comments regarding writing. Please find our response to each of your questions/comments in the following.\\n\\n- Try to attempt a pun on Ananas in the naming?\\nWe are brainstorming this problem. Do you have some suggestions?\\n\\n- I'm not sure \\\"fairness\\\" as in the abstract is the exact core problem; I would call this comparability.\\nNice correction. We have revised the paper according to your suggestion.\\n\\n- sec2 head: \\\"side information\\\", I suggest diagnostic information\\nThanks for this constructive suggestion. We have replaced all \\u201cside information\\u201d with \\u201cdiagnostic information\\u201d.\\n\\n- sec2.2 \\\"and etc\\\" is redundant: etc stands for \\\"and the others\\\"\\nWe have revised the sentence to \\u201cThe test set is to evaluate the performance of each searching algorithm by comparing the indicators (e.g., accuracy, model size, speed) of their selected architectures.\\u201d.\\n\\n- sec2.4 almost involves almost; target on computation cost; stabability\\nThanks for your comments. We have revised the sentences as:\\n(1) Collecting these statistics almost involves no extra computation cost\\n(2) Algorithms that target on searching architectures with computational constraints, such as models on edge devices, can use these metrics directly in their algorithm designs without extra calculations.\\n(2) the stability\\n\\n- sec 4: has impacts on, parameters keeps the same -> stays, which serves as testing -> to test\\nThanks for your comments. We have revised the sentences as:\\n(1) Results show that a different number of parameters will affect the performance of the architectures, which indicates that the choices of operations are essential in NAS.\\n(2) We also observe that the performance of the architecture can vary even when the number of parameters stays the same.\\n(3) The performance of the architectures shows a generally consistent ranking over the three datasets with slightly different variance, which serves to test the generality of the searching algorithm.\\n\\n- sec6 tricky ways-> insidious?\\nGood suggestion. We have revised \\u201ctricky\\u201d by \\u201cinsidious\\u201d.\"}",
"{\"title\": \"Training details; Supports for algorithms with growing cells; Benchmark Release\", \"comment\": \"We appreciate your constructive comments and suggestions. Please find our response to each of your questions/comments in the following.\\n\\nQ1. More details on the number of trials and whether this was part of the benchmark lookup.\\n\\nR1. In the current version of our AA-NAS-Bench, every architecture is trained at least once. To be specific, 7433 architectures are trained once, 782 architectures are trained twice, 7410 architectures are trained three times with different random seeds. Our API supports returning the metrics of a specific trial. Moreover, we are actively training all architectures with more seeds and will continue updating our AA-NAS-Bench. We have clarified this information in the footnote on Page 4. We plan to finish the training of all architectures for 3 trials in 4 months.\\n\\nQ2. Can the searching algorithms which grow from small to big cells take the advantages of this benchmark?\\n\\nR2. Yes, they can take advantages of this benchmark, because each small cell is equivalent to one big cell by adding some \\u201cskip-connect\\u201d and \\u201czeroize\\u201d operations. Please see the following example.\", \"a_small_cell_with_3_nodes\": \"node-1 -> node-2: 3x3conv\\nnode-1 -> node-3: 3x3conv\\nnode-2 -> node-3: 3x3conv\", \"the_corresponding_big_cell_with_4_nodes\": \"node-1 -> node-2: 3x3conv\\nnode-1 -> node-3: 3x3conv\\nnode-2 -> node-3: 3x3conv\\nnode-1 -> node-4: zeroize\\nnode-2 -> node-4: zeroize\\nnode-3 -> node-4: skip-connect\\n\\nTherefore, our AA-NAS-Bench can also provide the metrics for all small cells, and benefit to searching algorithms that grow from small to big cells, e.g., EFAS and AutoGrow.\\n\\nQ3. Release the benchmark and reference implementations.\\n\\nR3. Of course. We will release all source codes for training each architecture candidate and baseline searching algorithms during the rebuttal period. We would also provide convenient APIs to access our benchmark.\"}",
"{\"title\": \"Code availability\", \"comment\": \"The paper states \\\"All code, data, and architecture information are publicly available.\\\"\\n\\nWhere is it available? Please post an anonymized version of this during the rebuttal phase. This is absolutely crucial for a paper proposing a new benchmark.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Edit after rebuttals: I have read all other reviews and rebuttals and maintain my assessment.\\n----\", \"summary\": [\"Comparison of neural architecture search algorithms is hindered by the lack of a common measurement procedure. This paper describes a publicly available benchmark on which most recent types of NAS algorithms can be evaluated. It does so by exhaustive calculation of performance metrics on the full combinatorial space of select architectures, on two select datasets. NAS algorithms can then perform search without having to perform evaluation on each node, which shrinks the computational cost of experimentation and benchmarking drastically.\", \"I recommend acceptance, as the resource described in the paper has been created thoughtfully and is useful to the research community, as well as to users of NAS algorithms. The paper is clear about restrictions too, which doesn't hurt.\", \"The technical details are laid out clearly especially in sec 2.1. It would be interesting to know the computational cost of producing the data. It is useful in practice to have access to different metrics (validation, training and test) for each node, as well as extra diagnostic information.\", \"The usefulness of the resources hinges on a few elements, which make its strength and also weakness:\", \"choice of tasks and datasets\", \"choice of skeleton architecture, fig 1\", \"choice of hyperparameters, sec 2.3 (I note there is no regularisation, as discussed in the paper)\", \"All of these seem reasonable to me. It is clearly a limitation that hyperparameter search is infeasible to conduct in parallel with architecture search, as pointed out sec 6.\", \"The principal competitor NAS-Bench-101 is only applicable to specific NAS algorithms, which evidences the need for the present resource. The discussion and comparison in sec3 is fair.\", \"The discussion of weaknesses, such as possible overfitting patterns, or technical choices, is balanced.\", \"# Minor\", \"English proofreading is required.\", \"Maybe you can attempt a pun on Ananas in the naming?\", \"I'm not sure \\\"fairness\\\" as in the abstract is the exact core problem; I would call this comparability.\", \"sec2 head: \\\"side information\\\", I usggest diagnostic information\", \"sec2.2 \\\"and etc\\\" is a redundant: etc stands for \\\"and the others\\\"\", \"sec2.4 almost involves almost; target on computation cost; stabability\", \"sec 4: has impacts on, parameters keeps the same -> stays, which serves as testing -> to test\", \"sec6 tricky ways-> insidious?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"--- Updated during response period ---\\n\\nAuthors successfully answers all my questions. I revise my rating to Accept.\\n\\n\\n-----\", \"summary\": \"This paper proposes another benchmark dataset for neural architecture search. The idea is following the NASBench-101 dataset, that in a given search space, densely sampled all existing architectures and train each of them on three tasks for multiple times, and using the obtained metrics as a tool to evaluate an arbitrary neural architecture search algorithm. The paper also presents comprehensive reports on the statistics, revealing a strong performance correlation between tasks, and evaluate some baseline NAS algorithms.\", \"i_think_this_paper_will_be_valuable_to_the_research_community_for_these_reasons\": \"(1) the dataset contains a more geologically complex search space comparing to the original NASBench-101, whose search space is restrained in certain ways; (2) released metrics include more meaningful information rather than single point value in NASbench; (3) it uses 3 datasets rather than 1.\\nMy major concerns, which I will detail later, is the phrasing \\\"algorithm-agnostic\\\" does not truly reflect the difference between their approach and NASBench-101, and about the architecture search space design details. \\n\\nAltogether, I think even the technical novelty is incremental, the work is not trivial considering the computational cost. I am willing to improve my score if my concerns are addressed during the rebuttal period. Nevertheless, this dataset is a strong subsidy of existing NASBench-101 and can benefit the research community and serves as an important baseline to evaluate a NAS algorithm. \\n\\n\\nStrength\\n\\n+ Clear motivation to use an operation-on-the-edge search space that is widely used in NAS domain.\\n+ Extensive experiments on evaluating 15K architectures over 3 datasets\\n+ Detailed statistics on the search space\\n+ Good baseline experiments comparison\", \"main_concerns_about_this_dataset\": \"- Comparing to NASBench-101 in terms of \\\"Algorithm Agnostic\\\", it is in a \\\"more-or-less\\\" game but not a \\\"yes-or-no\\\" one, so that AA-NAS-Bench does not seem appropriate. In my perspective, this dataset has not shown significant differences for the following reasons.\\n\\n1. With proper adaptation, both NASBench-101 and the one in this paper are \\\"algorithm agnostic\\\". For example, original ENAS is training a reinforcement learning sampler that learns to predict a string with encoding [id1, op1, id2, op2] for each node, where id1, id2 is the IDs of the previous node to connect, op1, op2 is the operation choice for each edge. Since NASBench has operation on output node, one could simply make RL sampler to predict [id1, id2, op1], or another string encoding that suits the search space better. In my perspective, Ying et al. mentioned that many NAS algorithms cannot be directly evaluated on NASBench-101 are because the search space is different, but it does not mean using NASBench-101 is impossible. On the other hand, for some other state-of-the-art algorithms, like Proxyless-NAS on ImageNet, the search space is also different from the one proposed in this paper, but likewise, it does not indicate evaluating Proxyless-NAS on this dataset is impossible. \\n\\n2. 
NASBench-101 does impose a constraint that the maximum edge number equals 9 with 7 nodes in their space, resulting in 423K architectures. However, this constraint is no longer applied if you reduce the number of nodes to 6 (i.e. all possible architectures can be sampled), yet it still contains around 64K architectures, which is more than the 15K in the proposed dataset. In this perspective, NASBench is a larger dataset and \\\"algorithm agnostic\\\".\n\nTo summarize, I acknowledge that the paper's contribution is using an operation-on-the-edge search space that is widely used in previous NAS algorithms, while NASBench-101 uses an operation-on-the-node space. However, it only makes the proposed dataset \\\"more algorithm agnostic\\\" with less effort, and it does not make the previous NASBench-101 \\\"not\\\" algorithm agnostic. If using the current name AA-NAS-Bench, I think it is not fair to NASBench-101, especially since it is 4 times larger after removing the edge number constraint. \n\n- Questions about architecture space design\n1. Why use average pooling instead of max pooling? \n2. How do you compute the total architecture number 15,625 in Table 3? In your setting, with the number of nodes V=4 in a densely connected DAG, it should have 6 edges as depicted in Figure 1, and each edge has 5 possible operations, i.e. total number = 6^5 = 7776. I am confused about this point; could the author comment more on this number?\n3. Are there any topologically equal architectures in this space? For example, let's name the nodes 1,2,3,4; the following two architectures should be the same since input edges are summed before being passed to the next node. I list each **non-zeroed** edge as id1->id2: op\", \"architecture_1\": \"1->2: conv3x3\n2->4: skip\n1->3: conv1x1\n3->4: skip\", \"architecture_2\": \"1->2: conv1x1\n2->4: skip\n1->3: conv3x3\n3->4: skip\n\nIf the pruning is not effectively conducted, my worry is that the actual number of architectures is smaller.\", \"minor_comments\": \"1. DARTS results are quite poor; as mentioned in this paper, DARTS will eventually converge to an architecture with all skip connections. However, it could be a simple fix, by tracking the architecture evolution during the search and reporting the best, like early stopping. Will this improve DARTS results? \n\n2. Since ENAS is the first work using parameter sharing on the NAS problem, could the author add it to the baseline?\n\n3. In Table 4, what does the average (94.37 for CIFAR-10) mean in the \\\"optimal\\\" column? Is this the mean performance of all architectures? If so, it is quite strange to see all the baselines selecting architectures worse than the average performance. Or is it the best architecture performance, as indicated in the caption? This \\\"average\\\" column for \\\"optimal\\\" seems confusing.\n\n4. The dynamic ranking of architectures in Figure 5 is very interesting. Architecture ranking seems stable after the 190th epoch. Could the author provide another visualization, showing when stabilization happens between epoch 150 and epoch 190? \n\n5. In Figure 4, the correlation matrices for the top 4743 architectures are significantly lower than the full and top-1387 ones; is this possibly because repetitive architectures in the space are not pruned? And what is the reason for the numbers 4743 and 1387?\n\n6. ResNet (star in Figure 2) seems to perform very well. Does this indicate that the proposed search space is not very meaningful, considering there is only 1~2% for NAS to improve?\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nResearch into Neural Architecture Search (NAS) has exploded in recent times. But unfortunately the entry barrier into the field is high due to the computational demands of running experiments on even cifar10/100 let alone ImageNet sized datasets. Furthermore there is a reproducibility and fair comparision crisis due to differences in search spaces, training routine hyperparameters, stochasticity in gpu training, etc. This paper proposes a benchmark cell-search space (resnet backbone, 4-node cell space, 5 possible operations) which is algorithm agnostic. They train all possible architectures (15625) in this search space on cifar10/100/Imagenet-16-120 (a reduced version of ImageNet with 120 classes). Thus anyone can now use this pretrained lookup-table to benchmark their search algorithm in seconds on a tiny laptop instead of having to get access to a cluster with hundreds of gpus. By also proposing reference implementations of training architectures the community can use this to fairly benchmark their search algorithms. \\n\\nThe other such benchmark is NASBench-101 which uses a much more expansive search space but by imposing a limit on the number of edges in the cell (to keep the search space manageable with respect to how many of them they have to train) they leave out algorithms which do weight-sharing (ENAS, DARTS, RANDNas) from being able to use their benchmark. This paper alleviates those constraints and thus brings important algorithm classes to their fold.\", \"comments\": [\"The paper is very well written. Thanks!\", \"Minor clarification question: One nice thing of the NASBench paper was the fact that they also reported variance in training with differnt random seeds. I see a line in the 'Metrics' section saying that this is also done but did not find any details on number of trials and whether this was part of the benchmark lookup. I might have missed it somewhere.\", \"There is another class of search algorithms which grow from small to big cells (if using a cell search space) like EFAS (Efficient Forward Architecture Search by Dey et al and AutoGrow by Wen et al.). Can such algorithms take advantage of this benchmark? I think the answer is yes, because of the 'zeroise' operation but wanted to get the authors' answer.\", \"Overall I think this is an important contribution to the field and I am assuming that the authors plan to release the benchmark and reference implementations if accepted?\"]}"
]
} |
BkgRe1SFDS | Learning World Graph Decompositions To Accelerate Reinforcement Learning | [
"Wenling Shang",
"Alex Trott",
"Stephan Zheng",
"Caiming Xiong",
"Richard Socher"
] | Efficiently learning to solve tasks in complex environments is a key challenge for reinforcement learning (RL) agents. We propose to decompose a complex environment using a task-agnostic world graph, an abstraction that accelerates learning by enabling agents to focus exploration on a subspace of the environment. The nodes of a world graph are important waypoint states and edges represent feasible traversals between them. Our framework has two learning phases: 1) identifying world graph nodes and edges by training a binary recurrent variational auto-encoder (VAE) on trajectory data and 2) a hierarchical RL framework that leverages structural and connectivity knowledge from the learned world graph to bias exploration towards task-relevant waypoints and regions. We show that our approach significantly accelerates RL on a suite of challenging 2D grid world tasks: compared to baselines, world graph integration doubles achieved rewards on simpler tasks, e.g. MultiGoal, and manages to solve more challenging tasks, e.g. Door-Key, where baselines fail. | [
"environment decomposition",
"subgoal discovery",
"generative modeling",
"reinforcement learning",
"unsupervised learning"
] | Reject | https://openreview.net/pdf?id=BkgRe1SFDS | https://openreview.net/forum?id=BkgRe1SFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"L-SEjQfizT",
"rJlZ3Tisir",
"ByeItaioir",
"SkgqLTGqir",
"Byllc8ecjB",
"Bye-tMJYjH",
"Skl-xMJFiS",
"Sye-N-yYoB",
"HkgB4ykYir",
"Hyg4isEH9H",
"rkgq0aBAtS",
"HJeOJlV3FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725610,
1573793193494,
1573793149565,
1573690706483,
1573680776226,
1573610105093,
1573609960868,
1573609768692,
1573609260814,
1572322203626,
1571868114383,
1571729375750
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1526/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1526/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1526/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1526/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1526/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper introduces an approach for structured exploration based on graph-based representations. While a number of the ideas in the paper are quite interesting and relevant to the ICLR community, the reviewers were generally in agreement about several concerns, which were discussed after the author response. These concerns include the ad-hoc nature of the approach, the limited technical novelty, and the difficulty of the experimental domains (and whether the approach could be applied to a more general class of challenging long-horizon problems such as those in prior works). Overall, the paper is not quite ready for publication at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you again for your detailed feedback! [part 2]\", \"comment\": \"4. We are happy to add more comparisons with various waypoint selection rates and neighborhood sizes. To clarify, \\u2018neighborhood size\\u2019 refers to the size of the neighborhood around a waypoint state (wide goal) within which the WN Manager can propose the narrow goal. Intuitively, if the proportion of waypoint states and/or the size of the neighborhood is too small, it may create \\u201cblind spots\\u201d in the state space such that there are states that the WN Manager cannot select as narrow goals.\\nWe chose the neighborhood size and waypoint selection rate based on this intuition and did not tune them during our experiments. Hence, we expect our algorithm to be fairly robust to the choice of waypoint state density. For lower selection rates (< 20%), one can increase the area considered for \\u201cnarrow goals\\u201d by the Manager (that is, the neighborhood size) to compensate so as to ensure a sufficient coverage of the entire environment. Again, we will experiment with different combinations of of waypoint selection rates and neighborhood sizes and present the results as part of the final version.\\nWe agree that scaling up to other domains may require further refinements of our approach, but we see this as very compelling future work.\\n\\n6. The stochasticity comes from specific tasks. Hence, in the world graph phase, there is no stochasticity in our case. It is only present in the HRL phase (for each specific task that we used). Note that all tasks feature certain type of stochasticity.\", \"answers_to_additional_questions\": \"For discrete state spaces, this is simply a constant-time dictionary lookup. For continuous state spaces, one can use L2-distance epsilon-balls around waypoints. As long as the set of waypoint-neighborhoods covers the state space, and we assume a fixed fraction of neighborhoods / state-space volume, this scales linearly with the volume of the state space; hence, it would likely scale well.\"}",
"{\"title\": \"Thank you again for your detailed feedback! [part 1]\", \"comment\": \"1. Thank you! The shared training code reflects the essence of our algorithm. Many of the imports and helper files have not been formatted to be informative, but are not essential for understanding how the algorithms work. We will share the full cleaned code upon acceptance.\\n\\n2. In fact, it is not practical to compare VAEs used in [1] and [2] to ours head to head because the application, data domain, latent space type, prior, approximated posterior are all different. \\n\\nMore importantly, many recent efforts have devoted to studying the effect of KL term. In particular, [3] contributes a nice ablation study over this topic and recommends a rule of thumb. In general, when choosing the KL weight: \\n- If one\\u2019s goal is to perform log-likelihood maximization and log-likelihood is the measurement of model performance--which is indeed the case in [1] and [2], then KL should be set to 1. Otherwise it violates the consistency between training objective and evaluation objective\\n- However, if one\\u2019s ultimate goal is not log-likelihood, that is, if the model\\u2019s generative property is not the primary goal, the KL-term is often regarded as a regularization term and can be set as application-appropriate. For instance, in the case where one wants to prevent an overly strong decoder that can ignore latent code [3] or if the regularization from KL term ([6] and in our case) is so strong that it can cause posterior collapsing, the KL term is set to smaller than 1. On the other hand, if one desires disentangling property such as in beta-VAE, then the KL term is set to bigger than 1 [4]. \\n\\nLastly, we\\u2019d like to draw R4\\u2019s attention to the last paragraph in Appendix A. In selecting hyperparameters, we in fact leverage a neat technique, Lagrangian Relaxation, which allows the weights between different losses to adaptively balance among one another. In other words, as long as our initial hyperparameter settings for the loss coefficients are within a reasonable range, the coefficients automatically converge to a local optima without manual annealing. For example, for the medium maze, our initial KL term weight is set to be 0.01 and it converges to 0.067 at the end of training. This optimization technique also has been used by previous work [5], which should serve as the most relevant basis for comparison to our approach given their application of the HardKuma distribution and regularization of sequence statistics.\\n\\nNevertheless, we are happy to more exhaustively demonstrate the impact of initial weights given to the KL term for the final version.\\n\\n[1] Generating Sentences from a Continuous Space. Bowman et al.\\n[2] A Recurrent Latent Variable Model for Sequential Data. Chung et al.\\n[3] Fixing a Broken ELBO, Alemi et al.\\n[4] \\u03b2-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, Higgins et al, 2017.\\n[5] Interpretable Neural Predictions with Differentiable Binary Variables, \\n[6] The Pose Knows: Video Forecasting by Generating Pose Futures, Walker et al. \\n\\n3. We are happy to more rigorously evaluate modifications of our approach for learning world graphs that may better accommodate stochasticity, as part of the final version. In particular, we plan to run experiments where, during world graph learning, we ignore action replays for reaching waypoint states and instead use a strategy such as that proposed above (i.e. using the GCP). 
We expect this to work based on the following intuition:\n- Our GCP is trained by using a form of exploration bonus, i.e., rewarding diversity in the states that are reached, and so motivates agents to expand the set of visited states. This automatically expands the set of potential goals. Exploration-based reward shaping techniques include, e.g., intrinsic motivation, curiosity, etc, which have been successfully used to learn to play Atari games (e.g., Montezuma\u2019s Revenge) [1], continuous control [2], and others [3]. In all these applications, it has been shown that rewarding diversity biases agents to discover, e.g., states with high rewards faster. \n- Another intuition is that such GCP agents have visited good waypoint states that could be identified by our world graph algorithm. Consider an agent that can solve Montezuma\u2019s Revenge thanks to having learned a good exploration policy. Every 10th frame selected from a successful state sequence that ends in winning the game would constitute a good (partial) world graph for the (single) task of Montezuma\u2019s Revenge. This suggests that using GCPs with exploration bonuses would yield world graphs that are useful for multiple tasks as well. Hence, we expect our iterative refinement approach to learning world graphs to be effective in other applications as well.\n\n[1] Unifying Count-Based Exploration and Intrinsic Motivation, Bellemare et al. \n[2] Large-Scale Study of Curiosity-Driven Learning, Burda et al., 2018 \n[3] Exploration by Random Network Distillation, Burda et al., 2018\"}",
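The Lagrangian-relaxation weighting referred to in point 2 can be sketched as dual gradient ascent on a constraint violation; the KL target, update rate, and parameterization below are our illustrative choices, not the paper's exact settings:

```python
import torch

def constrained_vae_step(recon_loss, kl, log_lam, kl_target=0.5, dual_lr=1e-3):
    """Primal loss with a fixed multiplier, then a dual update that grows the
    KL weight when the KL exceeds its target and shrinks it otherwise."""
    lam = log_lam.exp()                       # keep the multiplier positive
    loss = recon_loss + lam.detach() * kl     # primal objective
    with torch.no_grad():
        log_lam += dual_lr * (kl.detach() - kl_target)  # dual ascent step
    return loss

log_lam = torch.tensor(0.01).log()            # e.g. the reported initial 0.01
```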
"{\"title\": \"Thank you for the clarification.\", \"comment\": \"Thank you for the clarification, and the effort to update the paper/provide the code.\\n\\n1. I will trust that authors were using the correct ELBO, and raise my score to weakly accept as promised; however, I want to point out that the new code only added a few new pieces and many imports and libraries used by the scripts are not included.\\n\\n2. Thank you for your explanation regarding the usage of VAE, but I am still not convinced by the usage of small coefficient. It is common to use KL annealing in sequence generation tasks [1] but ultimately the weight is increased to 1. One of the first usage of VAE in sequential modeling [2] also uses lambda = 1 for the KL divergence. I am not an expert in pose estimation so I do not wish to comment on the paper authors refer to (which also uses a form of KL annealing) but as far as I can tell the application is not close to the task this paper tries to accomplish. I would be much more convinced if authors can provide some more relevant references and/or provide an ablation study on the choice of lambda in the final version. Since the method aims to be a generic HRL algorithm, the bar for picking hyperparameters should be higher than application specific papers.\\n\\n3. While your proposal seems reasonable, no experiments (even if the performance is sub-optimal) are provided to back up the claim. In particular, you propose to use GCP to accomplish the same purpose but when GCP is not properly trained (e.g. at the beginning of training), the GCP will not be very good, and this may affect the quality of the final policy adversely, if the policy even learns at all. I find it unconstructive to extrapolate what the algorithm would do in completely different domains. As it stands, I don\\u2019t believe this concern has been addressed.\\n\\n4. Thank you for clarifying this. It seems like 20% is extremely high and may be an obstacle for generalizing this approach to other domains. Can you provide ablation on the proportion? Another problem is that if the policy cannot sufficiently cover the entire state space during the cycle, the $\\\\mathcal{V}_p$ won\\u2019t be good enough. I have minor concerns about whether this would scale. However, this may be addressed with engineering and some sort of curiosity or other form of exploration bonus. \\n\\n5. Fair enough.\\n\\n6. I understand how door-key environment is set up but it seems all of the environments that are not multigoal have some components of stochasticity, hence my confusion. Further, are the environments also stochastic during graph learning? Or are they only stochastic during the hierarchical policy learning?\", \"additional_questions\": \"The assumption that the agent can recognize encountering a waypoint is also interesting. How is this implemented? Does this involve searching over all the waypoints and measure the L2 distance? Will this have scaling bottleneck? Can this be solved with some parametric model?\\n\\nA part of my concerns has been addressed, but some remain. I look forward to hearing your response to the rest.\\n\\n[1] Generating Sentences from a Continuous Space. Bowman et al.\\n[2] A Recurrent Latent Variable Model for Sequential Data. Chung et al.\"}",
"{\"title\": \"Thank you for your clarifications\", \"comment\": \"Thank you for your clarifications\\n\\nThese comments have helped clear up my understanding of some important details.\"}",
"{\"title\": \"Thank you very much for your feedback!\", \"comment\": \"We thank you for the reviews and feedback!\\n\\n1. ELBO formulation: Thank you very much for spotting the typo. The objective used for training uses the correct ELBO. We have corrected the main text and appendix and also uploaded the relevant training code (in PriorLoss.py, we are indeed minimizing the KL divergence). \\n\\n2. KL term: The VAE model presents three essential advantages: (1) it reflects the intrinsic stochasticity that given a trajectory, there can be multiple combinations of intermediate states capable of conveying sufficient information for action recovery. (2) our prior reflects the empirical average selection probabilities for each state, meaning it encodes the average activations yielded by the inference network. Regularizing the approximated posterior with the prior (main text 3.1) encourages each trajectory to choose the combination consisting of frequently activated states to activate. (3) We also leverage the prior mean in selecting the waypoints (last line Algorithm 1).\\n\\nBecause the regularization imposed by the KL term is fairly significant and in our case the prior is learned upon how often a state is selected, an overly aggressive KL risks creating a constrained feedback cycle in the learning dynamics of these network components, which would cause the VAE to prematurely converge. For the same reason, many VAE applications--especially when not involving sampling from the prior--sets the KL term coefficient to a small value, e.g. the STOA poseVAE [1] has it as low as 0.0005. \\n\\n3. The sole purpose of navigating to waypoint states before performing exploration is to allow for the starting positions of exploration rollouts to expand as the agent discovers more of its environment. There are numerous strategies to achieve this when stochasticity limits the usefulness of memorized action sequences. One option is to use the goal-conditioned policy (GCP) to navigate to the target waypoint state and set wherever the GCP ends up as the starting point for exploration, since the precision of starting points is unimportant. All that we wish to ensure is that the range of starting points expands as training progresses.\\n\\nWe have revised the main text (3.2) to make it clear that our choice of replaying action sequences relies on the deterministic dynamics of our testbed environment and that other choices may be appropriate in different settings. We also clarify that the details of such a choice are likely not crucial provided that they achieve the goal of expanding the range of exploration starting points.\\n\\n4. $\\\\mathcal{V}_p$, the set of waypoint states, is updated after every few iterations based on a ranking using the prior mean (last line in Algorithm 1) with the top 20% selected into $\\\\mathcal{V}_p$ (Appendix C.1 specifies this hyperparameter). Waypoint states are not necessarily carried over from one iteration to the next. That is, we re-compute the set of waypoint states based on the prior network updates. We have revised the main text (3.1) to make this clearer.\\n\\n5. We correct the main text and removed \\u201cnormalized\\u201d as our normalization (mean subtraction and standard deviation division) does not affect the actual execution of the algorithm. \\n\\n6. Different minigrid tasks are described in Table 1; particularly, for Door-Key, the location of the door, the key, the exit point and an additional wall block are generated at random. 
On the small maze alone, there are approximately 70K variations. \\n\\n[1] Walker et al. The Pose Knows: Video Forecasting by Generating Pose Futures.\"}",
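The two roles of the learned prior described above can be illustrated with a short sketch (our notation, assuming per-state Bernoulli posteriors q and priors p):

```python
import numpy as np

def bernoulli_kl(q, p, eps=1e-8):
    """KL(Bern(q) || Bern(p)) per state, the regularizer of point 2."""
    q, p = np.clip(q, eps, 1 - eps), np.clip(p, eps, 1 - eps)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def select_waypoints(states, prior_mean, keep_frac=0.2):
    """Point 4: rank states by prior mean and keep the top 20%."""
    k = max(1, int(len(states) * keep_frac))
    order = np.argsort(prior_mean)[::-1]   # highest prior mean first
    return [states[i] for i in order[:k]]
```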
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank you for your feedback. Please see the general comment that clarifies our contributions and how our ablation and comparative analysis shows the impact of the various components of our approach. We have included the core training code to clarify how training is implemented.\\n\\n\\n\\u201cthere's no real analysis of the actual waypoints that are discovered in the target domains, whether they indeed correspond to intuitively important waypoints in a domain, or whether they are just producing some arbitrary segmentation of the proposed task.\\u201d\\n-In fact, we show samples of learned waypoints in Appendix D. These show that the learned waypoints represent intuitive decompositions of the mazes and important states, e.g., hallway segments, junctions.\\n\\n\\n\\u201cbut it's hard to disentangle the performance of this particular approach versus the performance of any approach that would use (any) intermediate states as goals within an HRL approach.\\u201c\\n-To validate the use of learned waypoints in a world graph, we have 2 comparisons. These show that using learned waypoints is significantly better than other waypoints selection methods.\\n\\nFirst, we compare using world graphs with **randomly** selected waypoints on learning downstream tasks. For example, Table 3 shows that when our HRL approach uses graphs with waypoint states selected by the VAE performance is better and more consistent than when using graphs with randomly selected states as its nodes.\\n\\nSecondly, we compare learned waypoints with a Feudal Network that allows the Manager network to **select any state as a goal** (Table 2). On our tasks, this network only barely does better than a vanilla A2C baseline.\\n\\n\\n\\u201cAnd the impression I'm left with, given the level of detail included in the paper, is that I would have no idea how to apply or extend this process to any other RL domains.\\u201d\\n-We have updated Appendix C, detailing step-by-step procedures of our HRL algorithm for the task-specific phase. We also kindly encourage the reviewer to inspect the shared code for the implementation of our approach. Although our results stand on their own as validation of our approach, we believe that extending our approach to other domains is a fruitful direction for future work.\\n\\n\\n\\\"Thus, I'm overall left with the impression that it's quite difficult to assess the contribution of this approach, and determine precisely which of the different proposed aspects is really contributing most to the improved performance. I know there is some ablative analysis in the paper comparing the pi_g-init and G_w-traversal independently and together, but I'm more questioning the basic question of what each portion of the network is really learning.\\\"\\n-Please see the general comment for an overview of our work. Our core contributions are a framework to learn world graphs (Section 3) and how to use them effectively for structured exploration (Section 4). We analyzed the impact of each aspect of our framework in our ablation studies. \\n\\nWe understand the complete framework (graph learning + HRL) has a number of novel methodological features, which could be hard to disentangle. We found empirically that each part of our approach is essential to \\u201cmake it work\\u201d, i.e., learning useful world graphs and effectively utilizing them to accelerate HRL. 
Our ablation studies thoroughly show how each component impacts performance.\\n\\nOf course, each component in our work can be further developed. We very much hope to see our work stimulating future research endeavors and provide a testbed and baseline benchmark. \\n\\nWe will clarify the writing to show how the comparative analysis inspects the various parts of our framework.\"}",
"{\"title\": \"Rebuttal Part 2 :)\", \"comment\": \"-Last paragraph of intro and nodes (i.e. waypoints) updates:\\nThe description in the intro is meant to convey the high-level idea, with specifics provided in the methods. Method details are in section 3: waypoint identification 3.1 and edge formation 3.3. Particularly, the set of waypoint states, is updated after every few iterations based on a ranking using the prior mean (last line in Algorithm 1) with the top 20% selected. The set of edges are forged after the set of waypoints is finalized. \\n\\n-Figure1:\\nThank you for pointing this out. Figure1 intends to provide an intuitive example and guide the readers to concretize our proposed framework. We have updated the figure and specify in the caption where in the main text details the important concepts. \\n\\n-SoRB:\\nWe agree it is relevant and cite in the Related Work. In principal, their approach could be applied to control the low-level Worker behavior in our hierarchical setup, but it is unclear how their method would adapt to the stochastic elements of the tasks we study. Our work differs by focusing on identifying a single, generically-useful graph that can be applied to many tasks under a persistent environment. In contrast, SoRB builds a graph on the fly based on an internal distance estimator and the set of states available in the replay buffer.\\n\\n-Waypoint selection guarantee: \\nThere is no theoretical guarantee. The graph is meant to provide a convenient and task-agnostic abstraction of the structure and dynamics of the environment, which we argue provides a scaffold for rapid and structured exploration. Recovering action sequences from subsequences of states amounts to identifying those states that can be used to summarize some rollouts. By extension, we regard these states as the best subset for summarizing the environment\\u2019s structure. Table 3 demonstrates that a world graph using states identified in this manner is more useful than one constructed around randomly chosen states. As such, while our motivation does not come with guarantees, we present significant empirical validation.\\n\\n-Go-Explore: \\nThank you and we have updated the related works. Our waypoints share similar spirits with their \\u201ccells\\u201d but are automatically learned by the binary latent model instead of using heuristics specifically defined for different tasks as in Go-Explore. \\n\\n-Explore during VAE training:\\nWe attempt to ensure that the starting points of exploration rollouts (which are used to train the VAE) expand as training progresses. For our testbed environment, we achieve this by having the agent navigate to one of the current iteration\\u2019s waypoint states before collecting the exploration rollout. Feeding the reconstruction error as a curiosity bonus also addresses this potential issue by encouraging the exploration policy to produce trajectories that confuse the VAE.\\t\\n\\n-$\\\\mu_0$\\nThe derivation of equation (2), where $\\\\mu_0$ appears, is detailed in Appendix A, equation (8)-(11). We have updated the main text to make the interpretation of this term clearer.\\n\\n-Coverage of waypoints:\\nOur learned waypoints are visualized in Appendix D Figure 5, which have an even and comprehensive coverage over the environment. During graph learning, action reconstruction rate is our criterion in determining the sufficiency of exploration. 
The follow-up RL experiments further validate that the coverage of the learned graph is sufficient to improve learning on a variety of downstream tasks. One future direction, although beyond the scope of this work, is to record a buffer of areas associated with trajectories that have high reconstruction errors during VAE training and focus exploration more on those areas. \n\n-Curiosity with RL baselines:\nNo, this intrinsic reward is only used for the task-agnostic VAE phase (i.e. for learning the graph). A portion of the baseline Feudal Network is initialized from the goal-conditioned policy trained while learning the VAE, so it inherits some of the behavior shaped by the intrinsic reward.\n\n-Samples used by VAE training:\nWe have updated the main text and brought forward the training information regarding sample complexity from Appendix C. In comparison to the RL stage, the amount of agent-environment interaction required is much smaller, and the learning results are reused for all the different downstream tasks in the RL stage.\n\n-Graph update during policy training and off-policy on graph training rollouts:\nIn theory, updating the graph in the midst of policy training is possible and it can be an interesting future direction. Using VAE training rollouts as off-policy data is not straightforward in many commonly occurring cases. For example, for the Door-Key task, the data collected when training the VAE would not include the door or key.\n\n-Table 3\nWe reference Table 3 in \u201cBenefits of Learned Waypoints\u201d Section 5.1, where we compare our results to results obtained using graphs with randomly chosen waypoint states. We have updated the table legend to make the meaning of the presented values clearer.\"}",
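The curiosity bonus mentioned under "Curiosity with RL baselines" can be sketched as follows; the scale `beta` and the per-step form are our assumptions:

```python
import numpy as np

def intrinsic_rewards(decoder_action_logprobs, beta=0.1):
    """Reward trajectories the VAE reconstructs poorly: a lower per-step
    log-probability of the true action yields a larger exploration bonus."""
    recon_error = -np.asarray(decoder_action_logprobs, dtype=float)
    return beta * recon_error
```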
"{\"title\": \"Thank you very much! Rebuttal Part 1\", \"comment\": \"We thank you for your feedback and for acknowledging the importance of the problem tackled in our work.\\n\\nWe respectfully disagree that our benchmark tasks, especially those with more challenging setups, can be \\u201ceasily\\u201d solved via standard baselines. For instance, Table 2 shows that the baselines A2C and its hierarchical variant Feudal Network only manage to solve MultiGoal and MultiGoal-Stochastic on the small mazes. For those small mazes, the final performance is worse than when using graph learning. The baselines fail to solve the larger mazes within the number of samples. Figure 4 further highlights that incorporating graph learning significantly speeds up the baselines. \\n\\nWe also thank R4 for the environment suggestions. However, we feel that the current suite of evaluation tasks provide sufficient evidence of the merits of our approach. Our used maze environment and tasks validate a key hypothesis: that our learned world graph approach enables RL agents to solve a **diverse suite of tasks** that require understanding of the structure of the environment (e.g., how to navigate in a maze). \\n\\nMoreover, the grid-world environment enables clear exposition, which is crucial in evaluating the various parts of our framework. Note that other work that introduces new learning approach, e.g. [1], similarly use clean environments to remove confounding factors for analysis. \\n\\n[1] T. Kipf Compositional Imitation Learning: Explaining and Executing One Task at a Time.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes an interesting method to construct a world graph of helpful exploration nodes to provide \\u201cstructured exploration\\u201d. This graph is used in an HRL structure based on the feudal net structure. While the method is very interesting the proposed method is designed to help learn good policies via a better exploration structure. This is a very important problem but I find that the environments this method is tested on in the paper can be easily solved using normal RL methods. It would be very important to evaluate the methods progress on more interesting problems with complex temporal structure. Potentially some of the tasks from the HIRO paper or better yet an assembly tasks or version of the clevr objects environment where multiple items need to be rearranged into a goal. One of the more advanced tasks from the Hierarchical Actor-Critic paper would also be a good option. It is also important to include more analysis of the amount of data needed to train the VAE and create the graph. This amount should be included in the evaluation results for the method.\", \"more_detailed_comments\": [\"The last paragraph of the introduction that begins to explain the method is a bit confusing. More detail here would be helpful. How frequently do binary latent variables need to be selected for them to become nodes? Similar for adding edges.\", \"In Figure 1, there are many terms that have not been defined yet, \\\"pivotal state\\\", \\\"world graph traversal\\\"... It would help in understanding the figure if these were explained beforehand. The figure text is also very small.\", \"This work seems very similar to Search on the replay buffer (Eysenbach et al 2019). That work created a graph over the data in the replay buffer based on the Q value of different states. These act as waypoints in planning. Could this method not be used to also construct a more sparse waypoint graph to use such as what is described in this work?\", \"It is said that the primary goal of the graph is to accelerate downstream tasks. Yet, the graph is constructed with states that are most critical in recovering action sequences. Is there some guarantee that this selection criterion will help downstream tasks?\", \"The method also seems to have a similarity to the GoExplore paper that keeps around an exploration frontier, that is similar to the world graph, of states as it is making progress on the task. This paper should be discussed in more detail in the related work.\", \"The VAE is trained over data that is collected from the policy during exploration. Is there an issue with collecting data that will extrapolate to explore areas of the state space that are outside of the data collected for training the VAE.\", \"More detail should be included in the use of \\\\mu_0. As it is written now it is difficult for the reader to understand how the method works without some of the additional information in the appendix.\", \"The method to collect enough data to learn and represent a graph the covers the state space well. How well does this method work? Essentially this method is making progress on the exploration problem. 
Is there some analysis of how well this method is at collecting enough data to use on downstream tasks?\", \"The method uses curiosity to help explore the state space by using the reconstruction error from the VAE as an intrinsic reward. Is a version of this intrinsic reward used for the baseline A2C method in the paper?\", \"It is said that the world graph helps accelerate learning via structured exploration. However, is there a significant amount of compute and environment interaction required to compute the world graph? This should be taken into consideration when performing any comparisons.\", \"Can the graph be updated during the policy training phase? Also, in the first phase where data is collected to fit the VAE, can this data be used to train an off-policy method? It seems like this data would work very well for training a policy.\", \"Table 3 does not seem to be referenced in the paper. It could also use some additional explanation as to what the presented values represent.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: This paper proposes an approach to identifying important waypoint states in RL domains in an unsupervised fashion, and then for using these states within a hierarchical RL approach. Specifically, the authors propose to use a binary latent variable VAE to identify waypoint states, then an HRL algorithm uses these waypoints as intermediate goals to better decompose large RL domains. The authors show that on several grid world tasks, the resulting policies substantially outperform baseline approaches.\", \"comments\": \"I have mixed opinions on this paper, though ultimately feel that it is below the bar for acceptance. There are a lot of aspects to the proposed approach, and overall I'm left with the impression that there are some interesting and worthwhile aspects to the proposed method, but ultimately it is hard to disentangle precisely what is the effect of each contribution.\\n\\nFor example, let's consider the waypoint discovery method. The basic approach is to use a binary latent variable VAE, using a recently proposed Hard Kumaraswamy distribution to model the latent state. This seems like a reasonable approach, but there's no real analysis of the actual waypoints that are discovered in the target domains, whether they indeed correspond to intuitively important waypoints in a domain, or whether they are just producing some arbitrary segmentation of the proposed task.\\n\\nThe other elements of the paper have similar issues for me. The whole HRL process, using these waypoint states as intermediate goals, seems reasonable, but it's hard to disentangle the performance of this particular approach versus the performance of any approach that would use (any) intermediate states as goals within an HRL approach. And the impression I'm left with, given the level of detail included I the paper, is that I would have no idea how to apply or extend this process to any other RL domains.\\n\\nI looked at the provided code hoping it would help to clarify some of the implementation details, but the code is not at all a complete collection of routines that could re-create the experiments. Rather, the code just includes a few of the model architectures, which aren't really the important aspects of this work. \\n\\nThus, I'm overall left with the impression that it's quite difficult to assess the contribution of this approach, and determine precisely which of the different proposed aspects is really contributing most to the improved performance. I know there is some ablative analysis in the paper comparing the pi_g-init and G_w-traversal independently and together, but I'm more questioning the basic question of what each portion of the network is really learning.\\n\\nI'd be curious if the authors are able to clarify any of these points during their rebuttal.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel approach to hierarchical reinforcement learning approach by first learning a graph decomposition of the state space through a recurrent VAE and then use the learned graph to efficiently explore the environment. The algorithm is separated into 2 stages where in the first stage random walk and goal conditioned policy is used to explore the environment and simultaneous use a recurrent binary VAE to compress the trajectory. The inference network is given the observation and action and the reconstruction is to, given the hidden state or hidden state+observation, reconstruct the action taken. The approximate posterior takes on the form of a hard Kumaraswamy distribution which can differentiably approximate a binary variable; when the approximate posterior is 0, the decoder must reconstruct the action using the hidden state alone. The nodes of the world graph are roughly states that are used to reconstruct the trajectories in the environment. After the graph is constructed, the agent can use a combination of high-level policy and classical planning to solve tasks with sparse reward.\\n\\nPersonally, I quite like the idea of decomposing the world into important states -- it is closely related to the concept of empowerment [1] which the authors might want to take a further look into. I believe extracting meaningful abstraction from the environment will be a key component for general purpose RL agent. One concept I really like in the paper is using the reconstruction error as the reward for the RL agent, which has some flavors of adversarial representation learning. Further, I also really like the idea of doing structured exploration in the world graph and I believe doing so can help efficiently solve difficult tasks.\\n\\nHowever, I cannot recommend accepting this paper in its current draft as there might be potential major technical flaw and I also have worries about the generality of the algorithm. My main concerns are the following:\\n 1. The ELBO given in the paper is wrong -- the KL divergence should be negative. I want to give the paper the benefit of doubts since this could be just a typo and some (very rare) researchers use a reverse convention; however, this sign is wrong everywhere in the paper including the appendix yet the KL between Kuma and beta distributions uses the regular convention. I tried to check the source code provided by the authors but the code only contains architecture but not the training objective, training loops or environments. As such, I have to assume that the ELBO was wrongfully implemented, unless the author can provide the full source code, or, if the ELBO is indeed incorrectly implemented, rerun the experiments with the correct implementation.\\n\\n 2. The proposed method for learning the graph does not have to be a VAE at all. The appendix shows that the paper uses a 0.01 coefficient on the KL, which is an extremely small value for VAE (in fact, most VAE\\u2019s have beta larger than 1 for disentanglement). Thus, I suspect the KL term is not actually doing anything; instead, the main reason why the model worked might be due to the sparsity constraints L_0 and L_T. 
In other words, the model is simply behaving like a sequence autoencoder with some sort of hard attention mechanism on the hidden code, which might explain why the model still worked well even with the wrong ELBO. To clarify, I think this is a perfectly acceptable approach for learning the graph and it would still be very novel, but the manuscript should be revised accordingly to reflect this. If the (fixed) VAE is important, then this comparison (0 KL regularization) would be a nice ablation regardless.\\n\\n 3. Algorithm 1 requires navigating the agent to the key points from \\\\mathcal{V}_p. This assumption is quite strong. When the transition dynamics are deterministic and fully reversible like the ones considered in the paper, using the reverse of the replay buffer can indeed take the agent back to s_p, but in settings where the transitions are stochastic, non-linear or non-reversible, how should the algorithm be used?\\n\\n 4. It is not clear how \\\\mathcal{V}_p is maintained. If multiple new nodes are added every iteration, wouldn't there be more nodes than necessary in \\\\mathcal{V}_p? It seems to me some pruning criteria were used, unless the model converged within a small number of iterations? Are the older ones discarded in favor of newer ones?\\n\\n 5. How are the action sequences \\u201cnormalized\\u201d?\\n\\n 6. In what way is the Door-Key environment stochastic? It seems like the other environments also have randomness, so is the only difference the lava pool?\\n\\nI believe the proposed method is sound, so if the revision can address either 1 or 2, I am willing to raise my score to weakly accept. If the revision in addition addresses 3, 4, 5, 6 in a reasonable manner, I am willing to raise my score to accept.\\n\\n=======================================================================\", \"minor_comments_that_did_not_affect_my_decision\": [\"I think mentioning the names of the environments in the abstract might be uninformative since the readers do not know what they are a priori.\"], \"reference\": \"[1] Empowerment -- An Introduction, Salge et al. 2014\"}"
]
} |
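A note on the sign issue raised in Official Blind Review #2 above: the standard evidence lower bound (a textbook identity, stated here for orientation rather than taken from the submission) subtracts the KL term,

```latex
\log p_\theta(x) \;\ge\; \mathrm{ELBO}(x)
  = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - \mathrm{KL}\big(q_\phi(z \mid x)\,\big\|\,p(z)\big),
```

so a positive KL term in a maximized objective is exactly the sign error the reviewer describes.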
H1gCeyHFDS | Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems | [
"Tianle Cai*",
"Ruiqi Gao*",
"Jikai Hou*",
"Siyu Chen",
"Dong Wang",
"Di He",
"Zhihua Zhang",
"Liwei Wang"
] | First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks. Second-order methods, despite their better convergence rate, are rarely used in practice due to the prohibitive computational cost in calculating the second-order information. In this paper, we propose a novel Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss. Our method draws inspiration from the connection between neural network optimization and kernel regression of neural tangent kernel (NTK). Different from typical second-order methods that have heavy computational cost in each iteration, GGN only has minor overhead compared to first-order methods such as SGD. We also give theoretical results to show that for sufficiently wide neural networks, the convergence rate of GGN is quadratic. Furthermore, we provide convergence guarantee for mini-batch GGN algorithm, which is, to our knowledge, the first convergence result for the mini-batch version of a second-order method on overparameterized neural networks. Preliminary experiments on regression tasks demonstrate that for training standard networks, our GGN algorithm converges much faster and achieves better performance than SGD. | [
"Deep learning",
"Optimization",
"Second-order method",
"Neural Tangent Kernel regression"
] | Reject | https://openreview.net/pdf?id=H1gCeyHFDS | https://openreview.net/forum?id=H1gCeyHFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"23KomIHTGN",
"rkgi-irhiB",
"SkxosqS2sB",
"BkgxE9rhoH",
"B1gDZ5B3jS",
"Bkew6KBhsr",
"HJxKytr2oS",
"BJxhNaz2qH",
"BJeshEY55r",
"rJxaGCqN9S",
"rygJfehGqB",
"SkgoyRG6KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725578,
1573833474905,
1573833378709,
1573833256508,
1573833214860,
1573833150977,
1573832928910,
1572773171949,
1572668594716,
1572281877498,
1572155398518,
1571790306790
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1525/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1525/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1525/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1525/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1525/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The article considers Gauss-Newton as a scalable second order alternative to train neural networks, and gives theoretical convergence rates and some experiments. The second order convergence results rely on the NTK and very wide networks. The reviewers pointed out that the method is of course not new, and suggested that comparison not only with SGD but also with methods such as Adam, natural gradients, KFAC, would be important, as well as additional experiments with other types of losses for classification problems and multidimensional outputs. The revision added preliminary experiments comparing with Adam and KFAC. Overall, I think that the article makes an interesting and relevant case that Gauss-Newton can be a competitive alternative for parameter optimization in neural networks. However, the experimental section could still be improved significantly. Therefore, I am recommending that the paper is not accepted at this time but revised to include more extensive experiments.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your valuable comments. We have addressed the most common issues in the general response above. Here we answer the additional questions raised by the reviewer.\\n\\n--Activation functions. We believe it is possible to change our proof to ReLU activations based on the techniques in [1].\\n\\n--Proof techniques of mini-batch GGN. As mentioned in Section 1, though conventional wisdom may suggest that applying mini-batch scheme to second-order methods will introduce a biased estimation of the accelerated gradient direction, we can prove that mini-batch GGN converges on overparametrized networks. Our proof only entails the decrease of the loss after performing a whole cycle of updates. This is significantly different from the former techniques used to prove the convergence of SGD, which uses a small learning rate to force the decrease of expected loss at each step.\\n\\n[1] Gradient descent provably optimizes over-parameterized neural networks, Du et al.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your valuable comments. We have addressed the most common issues in the general response above. Here we answer the additional questions raised by the reviewer.\\n\\n--The RL paper. Thanks for a good reference. The independent work on reinforcement learning aims to precondition the Q-learning update rule with linear approximation, so similar to natural gradient analyzed in [1] , there is still a learning rate term $\\\\alpha$ in the algorithm. However, our method is motivated by solving NTK regression, which does not introduce the step size term (or can be understood as suggesting the learning rate to be 1 as mentioned in the related work section). We added the reference to related work section in the revision.\\n\\n--Convergence result considering a large learning rate. Thanks for pointing out the misleading expression on large learning rate. As a second-order method, GGN does a Newton-type update without learning rate. Thus, unlike the papers mentioned by the reviewer which need to bound the learning rate by a quantity related to the smoothness to ensure a similar behavior as gradient descent, we show that mini-batch GGN can converge without forcing a specific small step size, which is totally different from the convergence of gradient descent. We modified the expression in the revision.\\n\\n[1] Fast convergence of natural gradient descent for overparameterized neural networks, Zhang et al.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your valuable comments. Most of the issues are addressed in the general response above.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your valuable comments. Most of the issues are addressed in the general response above.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your valuable comments. Most of the issues are addressed in the general response above.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank all the reviewers for the valuable comments. We address the major issues here which are mentioned multiple times by the reviewers.\\n\\n--Novelty of the algorithm. The equivalence of natural gradient descent and Gauss-Newton algorithm has been well studied. However, exact solving wasn\\u2019t tractable before and people had to rely on approximation methods like K-FAC. We believe that the novelty of GGN lies in the specific implementation of an exact solution by the Gram matrix over a mini-batch scheme, which can be practically useful. \\n\\n--Limitations of the algorithm. We acknowledge that the current GGN algorithm is still limited to the single-output regression problem. Then again, regression is a fundamental problem in machine learning and already has a large number of application scenarios. As for classification and multivariable regression tasks, as discussed in section 5, the direct application of GGN requires a linear scaling of the size of Jacobian w.r.t. the number of classes. There are possible ways to address this issue, like making some modifications of the network output. This is an important future work, and we are already doing experiments on classification tasks like CIFAR and Imagenet. \\n\\n--Computational complexity and implementation. Though modern frameworks like PyTorch and TensorFlow don't give an easy way to compute per-example derivatives efficiently, we re-implement the backpropagation process for different type of layers, e.g. convolutional layers, linear layers which makes the computation of Jacobian efficient. We note that a concurrent work [https://openreview.net/forum?id=BJlrF24twB ](Sec 2.2) gives some examples of efficient implementation. We\\u2019re still working on making the code cleaner and will release the code if the paper is accepted. \\n\\n--Experimental results. As requested by the reviewers, we have added the comparison with Adam and K-FAC, as well as the generalization result of AFAD-LITE, in Appendix D.\\n\\nIn general, our paper aims to propose an algorithm that makes use of second-order information to accelerate convergence without much computational overhead, and both the theoretical and experimental results demonstrate its effectiveness. We agree with the reviewer that we should do more experiments that scale and generalize to different tasks in order to demonstrate the full potential of GGN, and we are still working hard on it.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"Summary: The authors propose the Gram-Gauss-Newton method for training neural networks. Their method draws inspiration from the connection between the neural network optimization and kernel regression of neural tangent kernel.\\n\\nTheir method is described in Algorithm 1, but to summarize it, they use the Gauss-Newton method to train neural networks, and prove quadratic convergence for the full-batch training. They also have a mini-batch version of GGN, the practical version, and this is proven to have linear convergence.\\n\\nThe authors also provide experiments that includes the usual loss v. epoch, but also loss v. wallclock time (which is nice when proposing second-order-like methods where extra computations are necessary), and a test error v. epoch (which is again nice for second-order methods as explained below).\", \"strengths\": \"The paper has nice proofs of the theorems, and they show a method with quadratic convergence (but full-batch training) without having to invert the full Jacobian matrix whose size depends on the number of parameters, but rather inverting the Gram matrix, whose size depends on the number of training data.\\n\\nDue to the seeming extra computational cost of the method, (the method requires computing the full Jacobian matrix which depends on the number of neural network weights) I am grateful that they provided comparisons with wallclock time to SGD.\\n\\nAnd there is this notion that second-order methods have been shown to not generalize as well as first-order methods, and thus it was nice to see that they had an experiment where they tested generalization.\\n\\nThe background information was also nice to read.\", \"weaknesses\": \"They do not compare it with other methods optimization methods, such as Adam (a first-order method) or natural gradient (a second-order method), and I would have thus liked to have seen comparisons to these.\\n\\nI would have also liked to see a test loss v. time/epoch for the AFAD-LITE task as well (they only have it for the RSNA Bone Age task), at least provided in the appendix if there was not enough space.\\n\\nIn the references, there are numerous citations of the arXiv versions of papers, but I suggest the authors replace them with the conference/journal versions if those papers were accepted in conferences/journals (and I spot some that were).\", \"other_comments\": \"(i) In the first sentence of 3.3., I think one should replace \\u201cGGN has quadratic convergence rate\\u201d with \\u201cfull-batch GGN has quadratic convergence rate,\\u201d as in the subsequent sections you are discussing mini-batch GGN.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": [\"The authors propose a scalable second order method for optimization using a quadratic loss. The method is inspired by the Neural Tangent kernel approach, which also allows them to provide global convergence rates for GD and batch SGD. The algorithm has a computational complexity that is linear in the number of parameters and requires to solve a system of the size of the minibatch. They also show experimentally the advantage of using their proposed methods over SGD.\", \"The paper is generally easy to read except section 3.1 which could be clearer when establishing the connexion between the proposed algorithm and NTK.\", \"The proposed algorithm seems to be literally a regularized Gauss-Newton with Woodbury matrix inversion lemma applied to equation (7). Additional simplifications occur due to the pre-multiplication by the jacobian and give (9). However, this is not clear in the paper, instead section 3.1, is a bit vague about the derivation of (9).\", \"In terms of theory, the proofs of thm 1 and 2 seem sound. They rely essentially on the convergence results established for NTK in [Jacot2018, Chizat2018]. The main novelty is that the authors provide faster rates for the Gauss-Newton pre-conditioner which leads to second-order convergence. The second theoretical contribution is to extend the proof to batched gradient descent. Both are somehow expected, although the second one is more technical.\", \"However, the convergence rates provided for batched gradient descent (thm 2) rely on a rather unrealistic assumption: the size of the network should grow as n^18 where n is the sample size. This makes the result less appealing as in practice this is highly unlikely to be the case.\", \"The convergence analysis for the NTK dynamics, which is essential in the proof, relies on a particular scaling 1/sqrt(M) of the function with the number of parameters. In [Chizat2018], it is discussed that although it leads to convergence in the training loss, generalization can be bad. Is there any reason to think in this case, things would be different?\", \"Experiments: Experiments were done on two datasets to solve a regression task. They show that training loss decreases indeed faster than SGD and finds better solutions. A more fair comparison would be against other second-order optimizers like KFAC.\", \"How was the learning rate chosen for the other methods? Was the same lr used?\", \"The authors say that the algorithm has the same cost of one backward pass, could they be more specific about the implementation?\", \"What are the test results for the second dataset? Could they be reported somewhere (in the appendix?)\", \"Both tasks are univariate regression, can the method be applied successfully in a multivariate setting?\", \"I don't see how the proposed method is different from exactly doing regularized gauss newton, so to me the algorithm is not novel in itself. Besides the method seems to require a quadratic loss function which limits its application.\", \"----------------------------------------------------------------------------------\"], \"revision\": \"I've read the author's response and other reviews. 
I think the paper will be stronger if extended to more general cases (multivariate output + more general losses), thus I encourage the authors to resubmit the paper with stronger experiments.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors propose minimizing neural network using kernel ridge regression. (Formula 9 and Algorithm 1). Main difference of this method is compared to Gauss-Newton, is that it uses JJ' as curvature, which has dimensions b-by-by (batch size b), instead of J'J as curvature, which has dimensions m-by-m (number of parameters m).\\n\\nWhen b is much smaller than m, this matrix is tractable to represent exactly. Related approach is taken by KKT (see Figure 1 of https://arxiv.org/pdf/1806.02958.pdf) which also replaces J'J with more tractable JJ'.\\n\\nThere is a long history of authors trying to extend second order methods to deep learning and and finding that curvature estimated on a small batch is extremely noisy, requiring large batches (see papers by Nocedal's group). Authors propose a method that estimates curvature from small batches. Given the history of failures in small-batch curvature estimation, the bar is high to show that small-batch curvature estimation works.\\n\\nBulk of the paper is dedicated to theoretical convergence and connections between concepts. Since the focus of the paper is on a new optimization method for deep learnning, I feel like convergence proofs can be moved to Appendix, and more of the paper should focus on practical aspects of the method. Also the connections to other concepts (ie, tangent kernel) are not essential to the paper and could be better left over for a tutorial paper.\\n\\nI'm not convinced that their method works well enough to have practical impact.\\n\\n- Their method seems to be limited to neural network with one output (ie, univariate regression task). This is a serious limitation and paper should highlight this more on this, given that vast majority of applications and benchmarks involve more than output variable.\\n\\n- Practical implementation details are skimmed over. Section 3.3 brings up that to compute Jacobian, one needs to keep track of the output derivative on per-example basis. How is this accomplished? Modern frameworks like PyTorch and TensorFlow don't give an easy way to compute per-example derivatives efficiently.\\n\\n- Experiments are performed on two tasks that are not well known in the literature. The choice is somewhat understandable given that their method performs for univariate regression, but also this makes it hard to evaluate whether the method works. SGD vs Gram-Gauss evaluation use parameter settings which are not comparable, so it's impossible to tell whether the improvement are due to better choice of hyper-parameters.\\n\\n\\nThe changes needed to make this paper acceptable are extensive, and I would recommed a reject.\", \"i_would_recommend_authors_attempt_the_following_changes_for_future_submission\": \"1. Make it work for multivariate regression. There's a conversion technique to represent multivariate regression in the same form as univariate regression (see Section 2.4 of \\\"Rao, Toutenberg\\\" Linear Models). Essentially it comes down concatenating o output Jacobians (o output classes) along the batch dimension.\\n\\n2. Use this to evaluate the method on standard benchmarks like MNIST and CIFAR and show that it doesn't cause a significant worsening in quality. 
Given that a similar approach (the KKT paper) found a bigger improvement on an RNN task, an RNN task may be useful.\\n\\n3. Give more details on the implementation. How was the Jacobian calculation implemented? Which framework? How was the per-example computation made tractable? Making the small-scale experiments reproducible through an anonymous GitHub submission would also help\"}",
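On Review #1's question about per-example derivatives: one generic way to materialize the b-by-m Jacobian of a scalar-output network in PyTorch is a loop of per-example backward passes. This is a straightforward sketch using only standard autograd calls, not the authors' (reportedly more optimized) re-implementation of backpropagation.

```python
import torch

def per_example_jacobian(model, x):
    """Naive b x m Jacobian of a scalar-output network: one backward pass
    per example (b passes total). Slower than a fused implementation,
    but it makes the quantity under discussion concrete."""
    params = list(model.parameters())
    outputs = model(x).squeeze(-1)  # shape (b,) for univariate regression
    rows = []
    for i in range(outputs.shape[0]):
        grads = torch.autograd.grad(outputs[i], params, retain_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    return torch.stack(rows)  # (b, m)
```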
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a second order optimization algorithm, along with convergence proof of the algorithm in both batch and minibatch setting. The effectiveness of the method is demonstrated on two regression tasks. My overall assessment is that the method is still quite limited and the method itself is not novel, but I am willing to change my score to accept if my concerns have been addressed.\\n\\n(1) The method is not novel. The same algorithm was proposed and applied to the RL setting [1]. \\n(2) The method is still quite limited to 1-output function scenario, where the NTK matrix is easy to compute. This limitation though is not mentioned in the paper. I hope the author should have a discussion on this and admit this limitation.\\n(3) Also due to (2), the experiments shown in the paper are on toy data and hence lack of strong empirical support.\\n(4) The method doesn't scale up to large batch size.\\n(5) In the theoretical section, the paper states \\n\\\"However, to our knowledge, no convergence result considering large learning rate (e.g. has the same scale with the\\nupdate of GGN) has been proposed.\\\"\\nThis is not true. Here are some papers: [2,3,4]\\n(6) Lack of some second order optimization baselines, e.g., KFAC.\", \"misc\": \"(1) For section 3.3, first of all, (B) costs at least half of (A) as it requires a backward pass. \\n(2) For section 3.3, the authors write:\\n\\\" What is different is that GGN also, for every input data, keeps track of the output\\u2019s derivative for the parameters; while in\\nSGD the derivatives for the parameters are averaged over a batch of data.\\\"\\nIs there a simple way of implementing/computing the gradient for *every* input data on GPU? How is that compared to computing the average? I wish to see more evidence of showing they're the same as authors claimed.\\n\\n[1] Towards Characterizing Divergence in Deep Q-Learning.\\n[2] The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning.\\n[3] Fast and Faster Convergence of SGD for Over-Parameterized Models (and an Accelerated Perceptron).\\n[4] Fast Convergence of Stochastic Gradient Descent under a Strong Growth Condition\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Post-rebuttal: I've read author's response and other reviews. As pointed out by other reviewers, the proposed algorithm is restricted to single-output regression and the claim \\\"accelerate convergence without much computational overhead\\\" might not be true in general multi-output regression tasks. I believe the lack of multi-output regression experiments makes the paper a bit weak, therefore I changed my score to 3 and vote for rejection.\\n\\nThat being said, I do find the algorithm interesting and the theoretical results impressive. I encourage the authors to include experiments on multi-output regression tasks (or tone down the claim about computational overhead) and resubmit the paper.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nBased on recent progress on the connection between neural network training and kernel regression of neural tangent kernel, this paper proposes a Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss. For overparameterized shallow networks, the authors proved global convergence of the proposed algorithm in both full-batch and mini-batch setting. To my knowledge, the proof of global convergence in the mini-batch setting is novel and might be of independent interest for other work.\\n\\nOverall, this paper is well-written and easy to follow. It's interesting to see that the proposed algorithm can achieve quadratic convergence while most previous papers only get linear convergence. \\nGiven that, I'd like to give a score of 6 and I'm willing to increase my score if the authors can resolve my concerns below.\", \"concerns\": [\"For the algorithm, if I understand correctly, it's actually same as natural gradient descent with generalized inverse. I think the authors should make the connection clear. I would like to see more discussions with natural gradient descent or Newton methods in the next revision.\", \"The authors claim that the proposed GGN algorithm only has minor computational overhead compared to first-order methods. I doubt if it's true in general. In section 3.3, the authors argue that computing individual Jacobian matrices for every example in the minibatch has roughly the same computation as the backpropagation in SGD. As far as I know, it's not true in practice. In addition, the inverse of the Gram matrix can also be expensive when the output dimension (the dimension of y) is large.\"], \"minor_comments\": [\"In the paper, the theoretical results are based on the assumption of smooth activation function. I wonder if it is possible to include the case of ReLU activation as it's the most popular activation function in deep learning.\", \"I don't have a good understanding about why mini-batch version would converge after reading the paper. To me, second-order methods with mini-batch estimation of the preconditioner would lead to biased gradient estimation. Could you comment on that?\"]}"
]
} |
H1laeJrKDB | Controlling generative models with continuous factors of variations | [
"Antoine Plumerault",
"Hervé Le Borgne",
"Céline Hudelot"
] | Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing. Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation. To overcome these major issues, very recent works have shown the interest of studying the semantics of the latent space of generative models. In this paper, we propose to advance on the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model along which we can move to control precisely specific properties of the generated image like position or scale of the object in the image. Our method is weakly supervised and particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders. | [
"Generative models",
"factor of variation",
"GAN",
"beta-VAE",
"interpretable representation",
"interpretability"
] | Accept (Poster) | https://openreview.net/pdf?id=H1laeJrKDB | https://openreview.net/forum?id=H1laeJrKDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"RhO3c7PXhF",
"BJeZ7_WXiH",
"B1gfhLb7oH",
"SJeyHrZXiB",
"S1gDj4-7jB",
"r1lnq4gCFB",
"HkxRzJWpFS",
"SJgA8ZLKYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725549,
1573226520529,
1573226154287,
1573225782735,
1573225630518,
1571845268434,
1571782421634,
1571541334176
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1524/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1524/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1524/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Following the revision and the discussion, all three reviewers agree that the paper provides an interesting contribution to the area of generative image modeling. Accept.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Relpy to reviewer 3\", \"comment\": \"Thank you for your time and expertise in your review, we've addressed the key points below:\\n \\n> The paper proposes a new reconstruction error metric that is optimized to embed images into the latent space of the generative models. While this new metric is compared qualitatively to existing methods, quantitative evaluation is lacking. It would be useful to also include a quantitative comparison of methods measuring the perceptual distance between the original image and the embedded image, perhaps by using Learned Perceptual Image Patch Similarity (LPIPS) \\n\\n We added this quantitative comparison in Appendix C using the LPIPS. We agree that it strengthens the article and mention it in Section. Current results are reported with 40 images (for fast feedback) but we plan to compute it on $1,000$ images and update the PDF in a couple of days. We used unbiased mean and standard deviation estimators. (Edit: We updated the results for 1000 images in the latest revision.)\\n \\n> I am not fully convinced of the argument that using a saliency detector makes the method more general-purpose than a dedicated object detector. The majority of high-quality generative models are class conditional, hence requiring a labeled dataset, and therefore an object detector can easily be trained on the same dataset.\\n \\n We understand the argument, but a dedicated object detector requires labeled bounding-boxes coordinates while class conditional generative models only need a categorical label. In any case, our approach remains more generic and less computationally demanding. It is nevertheless worth noting that saliency detection is only useful for the quantitative evaluation of the method: it does not change any \\\"level of generality\\\" of the method itself.\\n \\n> Additionally, Section 3.2 mentions that \\\"We performed quantitative analysis on ten chosen categories for which the object can be easily segmented by using saliency detection approach\\\", which seems to indicate that the saliency detector struggles with some objects. How does the saliency detector perform on more complicated objects?\\n \\n In fact, some categories of ILSVRC are not actual \\\"objects\\\". It is, for example, the case for \\\"beach\\\" or \\\"cliff\\\". Hence, we preferred to choose categories that are actual objects, such as dog, flower or ball. As a consequence, we expected saliency detection to work on it, and it was indeed the case. We selected categories for their \\\"objectness\\\" and then we used the saliency detection but we did not select categories knowing the performance of the saliency detection which turned out to be robust for the categories we experimented on.\", \"we_took_your_remark_into_consideration_and_replaced_the_sentence_to_be_more_explicit\": \"\\\"We performed quantitative analysis on ten chosen categories of objects of ILSVRC, avoiding non-actual objects such as \\\"beach\\\" or \\\"cliff\\\".\\n\\n We also thank you for your other \\\"Minor things to improve the paper\\\", that we took into consideration for the updated version of the article.\"}",
"{\"title\": \"Reply to reviewer 1\", \"comment\": \"Thank you for your review, we have addressed your remarks below:\\n\\n> -\\u201cSampling Generative Models\\u201d (White, 2016, https://arxiv.org/abs/1609.04468) should be cited and discussed, and ideally so should \\u201cLatent constraints: Learning to generate conditionally from unconditional generative models\\u201d (Engel et al, 2017, https://arxiv.org/abs/1711.05772)--both are quite relevant IMO.\\n \\n We thank you for these references that are indeed relevant. They were added to the updated manuscript. Both are presented and discussed in the related work.\", \"minor\": \"> The first sentence ends with an ellipsis. Is this intentional or a draft holdover? Either way, I think it should at least be replaced with an \\u2018etc\\u2019 or ideally an oxford comma and an \\u2018and\\u2019. \\n \\n It has been changed in the updated version of the article.\"}",
"{\"title\": \"Reply to reviewer 2 on concerns raised in 2) and on the minor comments\", \"comment\": \"> I could not follow the reasoning in Section 2.2 and the clarity should be improved, as it is one of the main contributions of the work. In particular, I would like to see the intuition behind the model $t = g(<u, z>)$ better described.\\n\\n Thank you for pointing this out we reformulated the explanation in the updated manuscript. The synthetic explanation is the following:\\n \\n A core hypothesis is that we can modify $t$ by moving along a direction $u$ in the latent space thus the model that predict $t$ from $z$ should be a function of $<z, u>$. However, despite the popularity of a model of the form $t = <z, u>$, it is only adapted if $t$ follows a normal distribution (in the common case where $z$ is sampled from a Gaussian distribution). Indeed, if $z$ follows a normal distribution, the prediction of $t$ will also follow such distribution. It is thus problematic if $t$ does not follow this type of distribution for the images actually generated by the model. Thus we propose to use a more general (parametrized) model of the form $t = g_{\\\\theta}(<z, u>)$. It is coherent with the initial hypothesis while allowing to have a good fit even when $t$ does not follow a normal distribution. \\n \\n> Why does the projection of z follow a normal distribution? Is it because the latent distribution in the GAN is chosen as a normal distribution? \\n \\n Yes, the latent distribution in the GAN usually follows a normal distribution in the literature, thus its projection on a linear space follow a Gaussian too. We reformulated this in the updated manuscript. Combined with the change due to your preceding remark, it indeed clarifies the explanation.\\n \\n> What is the loss for training $f_{\\\\theta,u}$ ? How is the dataset $D$ used here? \\n \\n It is a regression problem, we used the MSE and trained it from the dataset with the tuples $(z_0, z_{\\\\delta t}, \\\\delta t)$. It has been mentioned in the updated version of the article.\\n \\nMinor comments / typos / suggestions (no influence on my rating):\\n \\n> InfoGAN (Chen et al., 2016) does not require a labeled dataset, the corresponding sentence in related work should be reformulated a bit. \\n \\n Indeed, it has been changed in the updated version of the article. We also extended the discussion w.r.t to it to clearly differentiate our work.\\n \\n> Please use operatorname or text in math mode for operators such as Var or text. \\n \\n It has been changed in the updated version of the article.\\n \\n> 'Encodes a the parameter $t$ $-->$ 'Encodes the parameter $t$; Many other typos, please run a spell checker.\\n \\n We proofread the manuscript.\\n \\n> For image translation, what boundary conditions are used? A sensible way would be to impose the reconstruction loss not on the full image but only on the smaller part. \\n \\n In Section 2.1.2 we mentioned:\\n \\\"A transformation on an image usually leads to undefined regions in the new image (for instance, for a translation to the right, the left hand side is undefined). This is why we designed $\\\\mathcal{L}$ to ignore the value of the undefined regions of the image\\\"\", \"we_simplified_the_last_sentence_to_be_more_explicit\": \"\\\"Hence, we ignore the value of the undefined regions of the image to compute $\\\\mathcal{L}$.\\\"\"}",
"{\"title\": \"Reply to reviewer 2 on concerns raised in 1)\", \"comment\": \"We thank you for the fruitful comments and suggestions. In addition to the lightly revised manuscript, we respond directly to the comments below.\\n \\n> There seem to be a lot of errors and typos in the manuscript, which made the paper unfortunately a bit frustrating to review. In particular, I had trouble following and understanding the details of the main procedure used to obtain the linear latent trajectories.\\n \\n We indeed fixed a couple of typos thanks to the three reviewers' feedback. We answer to those you highlighted below the detailed comments you provided.\\n \\n> Considering the recent works (Goetschalckx et al, 2019) and (Jahanian et al, 2019), I also don't see too much novelty in this approach. Therefore, I cannot recommend acceptance of this paper at this point. \\n \\n These two works have been released on arXiv as non-peered-reviewed report. Hence, we thought that they could not be considered as actual articles that are part of the scientific literature yet at the time of the ICLR 2020 deadline. This is implicit in the \\\"dual submission policy\\\" of ICLR and explicit in the reviewer guidelines of conferences such as CVPR. Actually, (Goetschalckx et al, 2019) has been published at ICCV last week and (Jahanian et al, 2019) has been submitted to ICLR 2020 as well (available on openreview).\\n\\n Since we have heard of these two arXiv reports a couple of weeks before the ICLR deadline, we obviously mentioned them and compared our work to the idea of (Jahanian et al, 2019) that is the closest to our work. It is discussed in the related works (Section 4) and some differences are highlighted. It appears that although both works have been developed independently and concurrently, they exhibit some similarities. But since both are submitted to ICLR 2020, we think it rather enforces that the general idea is novel and relevant.\\n \\n> In algorithm 1, there seem to be some typos which makes it difficult understand the method in detail. $z_{\\\\delta t}$ is initialized as $z_0$ and then never changed but always appended into the data set. Should it maybe be $z_{\\\\delta t} <- argmin $... instead of $z_t <- argmin ...$ ? \\n \\n Indeed, the $\\\\delta$ is missing: it is not $z_t$ but $z_{\\\\delta t}$. It has been changed in the updated version of the article.\\n \\n> But then why initialize $z_{\\\\delta t}$ at all? \\n \\n Concerning the initialization of $z_{\\\\delta t}$ we initialize it at $z_0$ as $z_0$ is expected to be close to the solution of the first optimization problem: $argmin_{z}\\\\mathcal{L}(G(z), \\\\mathcal{T}_{\\\\delta t_1}(I_0))$. It is thus the initialization of the recursive procedure presented in Section 2.1.2 (and Equation 5).\\n \\n> Why store tuples of three values in $D$, \\n \\n During the manuscript redaction, we hesitated on that point. Indeed, it would not have been necessary to store the three values to estimate the trajectory only. However later, our method uses all these three values to train the model in Section 2.2. Thus, we chose to present Algorithm 1 as a method to create the full required dataset. We nevertheless admit that one can use the method of Section 2.1.2 to estimate a trajectory only, and thus retain only $z_{\\\\delta t}$ in D only. We added a mention to this in the caption of Algorithm 1.\\n \\n> especially store $z_0$ multiple times? 
\\n \\n $z_0$ is different for each trajectory, and we need to store it along with $z_{\\\\delta t}$ and $\\\\delta t$ to be able to train our model later. \\n \\n> While it is clear, formally the method always discards D and one might add a $D_i \\\\leftarrow D$ at the end.\\n \\n Indeed, there should be a $D_i$ for each trajectory. Since we only need a dataset of trajectories, we propose to initialize $D$ before the for loop. It has been changed in the updated version of the article.\"}",
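For concreteness, a sketch of the recursive procedure as clarified in the two replies above: each $z_{\delta t}$ is obtained by minimizing the reconstruction loss against the transformed image, warm-started from the previous solution, and the tuple $(z_0, z_{\delta t}, \delta t)$ is appended to $D$. The optimizer, step count and learning rate below are assumptions made for the sketch, not values taken from the paper.

```python
import torch

def trajectory_dataset(G, loss_fn, transform, z0, deltas, steps=200, lr=0.05):
    """Build D = {(z_0, z_dt, dt)} by recursively solving
    z_dt = argmin_z L(G(z), T_dt(G(z_0))), warm-starting each solve
    at the previous optimum. Hyperparameters are illustrative only."""
    I0 = G(z0).detach()
    z = z0.clone().detach().requires_grad_(True)
    D = []
    for dt in deltas:  # increasing transformation magnitudes
        target = transform(I0, dt).detach()
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss_fn(G(z), target).backward()
            opt.step()
        D.append((z0.detach(), z.detach().clone(), dt))
    return D
```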
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an algorithm to find linear trajectories in the latent space of a generative model that correspond to a user-specified transformation T in image space. Roughly, the latent trajectory is obtained by inverting the generator at the transformed image and a clever recursive estimation strategy is proposed to overcome difficulties in this nonconvex optimization. Qualitative results of the method, applied to a (pretrained) BigGAN model are shown, where the transformations are chosen as translation, zoom or brightness. A quantitative evaluation is performed on the dSprites and ILSVRC dataset.\", \"my_take\": \"There seem to be a lot of errors and typos in the manuscript, which made the paper unfortunately a bit frustrating to review. In particular, I had trouble following and understanding the details of the main procedure used to obtain the linear latent trajectories. Considering the recent works (Goetschalckx et al, 2019) and (Jahanian et al, 2019), I also don't see too much novelty in this approach. Therefore, I cannot recommend acceptance of this paper at this point.\", \"details\": \"1) In algorithm 1, there seem to be some typos which makes it difficult understand the method in detail. z_{\\\\delta t} is initialized as z_0 and then never changed but always appended into the data set. Should it maybe be z_{\\\\delta t} <- argmin ... instead of z_t <- argmin ... ? But then why initialize z_{\\\\delta t} at all? Why store tuples of three values in D, especially store z_0 multiple times?\\n\\nWhile it is clear, formally the method always discards D and one might add a D_i <- D at the end.\\n\\n2) I could not follow the reasoning in Section 2.2 and the clarity should be improved, as it is one of the main contributions of the work. In particular, I would like to see the intuition behind the model t = g(<u, z>) better described. \\n\\nWhy does the projection of z follow a normal distribution? Is it because the latent distribution in the GAN is chosen as a normal distribution? \\n\\nWhat is the loss for training f_{\\\\theta,u}? How is the dataset D used here? \\n\\nMinor comments / typos / suggestions (no influence on my rating):\\n- InfoGAN (Chen et al., 2016) does not require a labeled dataset, the corresponding sentence in related work should be reformulated a bit. \\n- Please use \\\\operatorname or \\\\text in math mode for operators such as Var or text. \\n- 'Encodes a the parameter t' --> 'Encodes the parameter t'; Many other typos, please run a spell checker.\\n- For image translation, what boundary conditions are used? A sensible way would be to impose the reconstruction loss not on the full image but only on the smaller part.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThis paper proposes methods to find interpretable vectors in the latent space of generative models (similar to finding Smile Vectors [White, 2016]) which control simple object transformations like zoom or translation. The basic idea is that so long as one can apply the desired transformation to an image, one can solve for the latent which minimizes the reconstruction between G(z) and the transformed image; doing this for various parameters of the transformation (i.e. different levels of zoom, varying amounts of brightening or translation) allows one to learn a parametric mapping specifying how to vary the latent to achieve the desired output change. The authors make several changes to the na\\u00efve optimization procedure of vanilla SGD, most notably using reconstruction error on Gaussian-blurred images to encourage matching of low-frequency features rather than high-frequency features. The resulting framework is applied to an ImageNet GAN for a variety of transformation, producing results which qualitatively and quantitatively indicate that the method works for the shown transformations, along with some analysis of the behavior of the model.\", \"my_take\": \"This is a well-reasoned and well-presented paper following in the spirit of Smile Vector type investigations, with compelling results. The core idea is simple, and I like that it doesn\\u2019t require human labeling: one merely needs to be able to simulate some approximation of the desired transform, and one can find the latent space trajectory that corresponds to the model\\u2019s approximation of that transform. I think this is promising next step in this area (there have been a few papers very recently on it, so I think improving constraints and control of generative models is getting a decent amount of intention) and is worthy of acceptance at ICLR2020 (7/10; reasonably clear accept).\", \"notes\": \"-\\u201cSampling Generative Models\\u201d (White, 2016, https://arxiv.org/abs/1609.04468) should be cited and discussed, and ideally so should \\u201cLatent constraints: Learning to generate conditionally from unconditional generative models\\u201d (Engel et al, 2017, https://arxiv.org/abs/1711.05772)--both are quite relevant IMO.\", \"minor\": \"The first sentence ends with an ellipsis. Is this intentional or a draft holdover? Either way I think it should at least be replaced with an \\u2018etc\\u2019 or ideally an oxford comma and an \\u2018and\\u2019.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to learn and control continuous factors of variations within generative models by finding meaningful directions in the latent space which correspond to specified properties. A new method is proposed for inverting generative models and embedding images in the latent space when an encoder is not available. Specifically, reconstruction error is defined in the Fourier domain such that the weighting on high frequency image components can be reduced. Results are evaluated with qualitative comparison to previous embedding methods. Using this image embedding technique, a dataset of latent space trajectories is created by manipulating a desired property in images (such as position or scale) via affine transformations and recording the latent space vectors of the original and new images. The dataset is then used to learn a simple model of the latent space transformation corresponding to changes in the desired image property, which in turn can be used to manipulate images accordingly. To evaluate the effectiveness of this image manipulation approach, a saliency detector is used to measure the change in position or scale of objects in generated images as the latent codes are changed.\\n\\nOverall, I would tend towards accepting this work. The goal of being able to manipulate continuous factors of variation within generative models is useful for controllable image synthesis, and the proposed method clearly achieves the desired result.\", \"things_to_improve_the_paper\": \"1) The paper proposes a new reconstruction error metric which is optimized to embed images into the latent space of the generative models. While this new metric is compared qualitatively to existing methods, quantitative evaluation is lacking. It would be useful to also include quantitative comparison of methods measuring the perceptual distance between the original image and the embedded image, perhaps by using Learned Perceptual Image Patch Similarity (LPIPS) [1].\", \"minor_things_to_improve_the_paper_that_did_not_impact_the_score\": \"2) In the abstract: \\\"Our method is weakly supervised...\\\". I am not sure if this method would be considered weakly supervised. I might tend more towards calling it self-supervised, since we have exact labels that are derived from transformations applied to the images themselves.\\n\\n3) In the first paragraph of the introduction: \\\"an increasing number of applications are emerging such as image in-painting, dataset-synthesis, deep-fakes... \\\". I find the use of the ellipses here to be a bit strange, since it seems like the sentence is trailing off mid-thought. I would recommend the use of \\\"etc.\\\" over \\\"...\\\".\\n\\n4) In Section 2.2, second paragraph, the dSprite dataset is mentioned but not cited. The reference is not given until Section 3. Should the citation be paired with the first mention of the dataset? Or even just in both places.\\n\\n5) In Section 3, Implementation details: \\\"The first part is injected at the bottom layer while next parts are used to modify the style of the generated image thanks to AdaIN layers (Huang & Belongie, 2017)\\\". BigGAN uses conditional BatchNorm instead of AdaIN, although they are both very similar. 
I think the proper citation here is [2], which first introduced conditional BatchNorm.\", \"questions\": \"6) I am not fully convinced of the argument that using a saliency detector makes the method more general purpose than a dedicated object detector. The majority of high quality generative models are class conditional, hence requiring a labelled dataset, and therefore an object detector can easily be trained on the same dataset. Additionally, Section 3.2 mentions that \\\"We performed quantitative analysis on ten chosen categories for which the object can be easily segmented by using saliency detection approach\\\", which seems to indicate that the saliency detector struggles with some objects. How does the saliency detector perform on more complicated objects?\", \"references\": \"[1] Zhang, Richard, et al. \\\"The unreasonable effectiveness of deep features as a perceptual metric.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n\\n[2] De Vries, Harm, Florian Strub, J\\u00e9r\\u00e9mie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C. Courville. \\\"Modulating early visual processing by language.\\\" In Advances in Neural Information Processing Systems, pp. 6594-6604. 2017.\\n\\n\\n### Post-Rebuttal Comments ###\\nThanks you for addressing my concerns and for adding the quantitative reconstructions measures. Appendix C looks much more complete now. My overall opinion of the paper remains about the same, so I will leave my score unchanged.\"}"
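The Fourier-domain reconstruction error mentioned in this review (with down-weighted high frequencies) can be illustrated as follows. This is our own sketch under assumed shapes; the Gaussian radial weight is one concrete choice of low-pass weighting, not necessarily the paper's exact formulation.

```python
# Sketch of a Fourier-domain reconstruction loss that down-weights high
# spatial frequencies. x, y: (N, C, H, W) real image batches.
import torch

def fourier_recon_loss(x, y, sigma=0.1):
    X = torch.fft.fft2(x)
    Y = torch.fft.fft2(y)
    h, w = x.shape[-2:]
    fy = torch.fft.fftfreq(h).view(-1, 1)
    fx = torch.fft.fftfreq(w).view(1, -1)
    radius2 = (fx ** 2 + fy ** 2).to(x.device)
    weight = torch.exp(-radius2 / (2 * sigma ** 2))  # ~1 near DC, small at high freq.
    return (weight * (X - Y).abs() ** 2).mean()
```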
]
} |
SkxpxJBKwS | Emergent Tool Use From Multi-Agent Autocurricula | [
"Bowen Baker",
"Ingmar Kanitscheider",
"Todor Markov",
"Yi Wu",
"Glenn Powell",
"Bob McGrew",
"Igor Mordatch"
] | Through multi-agent competition, the simple objective of hide-and-seek, and standard reinforcement learning algorithms at scale, we find that agents create a self-supervised autocurriculum inducing multiple distinct rounds of emergent strategy, many of which require sophisticated tool use and coordination. We find clear evidence of six emergent phases in agent strategy in our environment, each of which creates a new pressure for the opposing team to adapt; for instance, agents learn to build multi-object shelters using moveable boxes which in turn leads to agents discovering that they can overcome obstacles using ramps. We further provide evidence that multi-agent competition may scale better with increasing environment complexity and leads to behavior that centers around far more human-relevant skills than other self-supervised reinforcement learning methods such as intrinsic motivation. Finally, we propose transfer and fine-tuning as a way to quantitatively evaluate targeted capabilities, and we compare hide-and-seek agents to both intrinsic motivation and random initialization baselines in a suite of domain-specific intelligence tests. | [
"agents",
"competition",
"intrinsic motivation",
"emergent tool use",
"autocurricula",
"simple objective",
"standard reinforcement",
"algorithms",
"scale"
] | Accept (Spotlight) | https://openreview.net/pdf?id=SkxpxJBKwS | https://openreview.net/forum?id=SkxpxJBKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"qUy70mxnWz",
"H1lbBl1SjH",
"HylGoCREir",
"BJxWUARNsr",
"B1l0zA0NsH",
"S1eB3pAVsS",
"BJlKi8Lkor",
"SkxqkHzRYH",
"BJlxTZh6Yr",
"rkxcmygpFr",
"Skev0gOVYS",
"ryeCRG40Or",
"SJx87rUm_r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798725520,
1573347385034,
1573346970420,
1573346888695,
1573346838247,
1573346732975,
1572984480693,
1571853538109,
1571828152255,
1571778338392,
1571221710946,
1570812630320,
1570100510333
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1523/Authors"
],
[
"~Hassam_Sheikh1"
],
[
"ICLR.cc/2020/Conference/Paper1523/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1523/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1523/AnonReviewer2"
],
[
"~Murray_Shanahan1"
],
[
"ICLR.cc/2020/Conference/Paper1523/Authors"
],
[
"~Murray_Shanahan1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper describes how multi-agent reinforcement learning at scale leads to the evolution of complex behaviors. Actually, \\\"at scale\\\" may be an understatement - a lot of computing power was used here. But the amount of compute used is not the point, rather the point is that complex and fascinating behavior can emerge from a long co-evolutionary process (though gradient-based RL is used here, the principle is the same) where the arms race forms an implicit curriculum. This is the existence proof that people in artificial life and adaptive behavior have been looking for for so long.\\n\\nTwo reviewers were positive about the paper, with a third being negative because the paper does not give any new insights about how to do RL at scale. But that was not the stated aim of the paper, as the authors clarify in a response.\\n\\nThis paper will draw quite some attention and deserves an oral presentation.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper Revisions\", \"comment\": \"In response to reviews, we\\u2019ve made the following updates to the paper:\\n\\n1. Slightly modified language of contribution statement\\n2. Changed figures 1 and 3 to plot the mean across 3 seeds and show the seeds\\n3. Add sentence describing out of bounds condition and reward to Section 3\\n4. Add more description of policy architecture to caption of Figure 2.\\n5. Add more description of how/why box surfing is possible to Section 5\\n6. Remove acknowledgements section (will be added back for camera ready paper)\\n7. Remove Appendix section on multiple seeds as now figures 1 and 3 include this information\\n8. Add footnote describing non-shared weights experiments to Section 4.\"}",
"{\"title\": \"Response to Official Review #3\", \"comment\": \"Thank you for the review and questions!\\n\\n\\u2014 \\u201cHide&seek rules and safety issues: is it not supposed that hiders and the seekers could not get together (i.e., hiders cannot push seekers or as we can see in some videos)? Furthermore, it is surprising (one would say worrying) that hiders identified the barriers as an impediment to the seeker (not only as a way to hide). I wouldn\\u2019t say that this is a \\u201c human-relevant strategies and skills \\u201c as the authors claim. Hider agents even double walled seekers!\\u201d\\n\\nIn the environment as is, the hiders can push the seekers during the preparation phase. It\\u2019s unclear that this is bad, but we agree that we could easily make it not the case, though it likely would not change the skill progression in the main hide-and-seek environment. However, as you note, this can definitely change the resulting skill progression in other game variants (Figure A.8). We also believe finding methods that can make agents converge on safe outcomes is an important direction for future research!\\n\\n\\u201cHave the authors thought about joining the Animal-AI Olympics (http://animalaiolympics.com/) competition?\\u201d\\n\\n\\u2014 We thought this would be outside the scope of our current work, but we agree this challenge is very interesting. However, from the description it feels slightly different than the transfer tasks we propose. They say \\u201cThe goal will always be to retrieve the same food items by interacting with previously seen objects,\\u201d where in our transfer tests agents are given very different objectives from the original objective of hide-and-seek.\"}",
"{\"title\": \"Response to Official Review #2 (Part 2)\", \"comment\": \"\\u2014 \\u201cIs the agent's embedding concatenated with the other embeddings? If so, why and how (concat, sum, multiply, conditional batch norm, etc.)?\\u201d\\n\\nWe concatenate all the entities together, such that the tensor has shape (number entities, entity dimension), run residual self attention, and then average pool getting a fixed sized vector of size (entity dimension). We\\u2019ll add this clarification to the text.\\n\\n\\u2014 \\u201cIn the center and on the right you use a blue block to indicate the agent's embedding and then at the bottom right you seem to use it as a network component or something (between the \\\"LSTM\\\")? If you're trying to signal that this is the agent's perception at different stages in the network, I'd use a different color to separate it from the agent's lidar and pos/vel.\\u201d\\n\\nAll the colored blocks represent activations in this diagram, but we agree re-using the blue coloring could be confusing. We\\u2019ll change the color of the final two blue blocks for the camera ready version of the paper.\\n\\n\\u2014 \\u201cYou don't mention that \\\"x,v\\\" stands for \\\"position, velocity\\\".\\u201d\\n\\nGood catch! Thank you, we\\u2019ll add a clarification to the text.\\n\\n\\u2014 \\u201cFigure 3: \\\"environment-specific\\\" (add dash). Draw skill development boundaries like in Fig.1.\\u201d\\n\\nGood suggestions. Thank you!\\n\\n\\u2014 \\u201cHow exactly does the \\\"surfing\\\" work? The seekers step (not jump, right, since there is no jumping?)\\u201d\\n\\nWe do classify this as an exploit of the rules we designed in the last paragraph of Section 7, but we will add more clarification on how it works and that it is an exploit of our intended game rules in Section 5. You are right in that they more \\u201cstep\\u201d or \\u201claunch\\u201d themselves from a ramp to the box. Once on top of the box, they can still \\u201cgrab\\u201d the box, which keeps the relative orientation and position between agent and box fixed. The agents\\u2019 movement action puts a force on the agent regardless of whether the agent is on the ground or not. So if the agent does this while grabbing the box, they will both move together since they have a fixed relative orientation and position.\\n\\n\\u2014 \\u201cYou mention in footnote 3 that the developmental stage and changes in reward aren't necessarily correlated. The same seems to be true for the metrics in Fig.3, which raises the question how did you come up with those boundaries for the different developmental stages in Fig.1? Did someone look at rollouts from the trained policy every couple of million steps?\\u201d\\n\\nWe used a combination of looking at the reward, behavioral statistics (Figure 3), and watching trajectories. It is a very interesting line of future research to automatically detect large shifts in agent strategy!\\n\\n\\u201cAnd do all agents learn new skills at the same time or is there a delay? From my understanding, they are all using the same policy and critic networks but maybe dependent on the proximity of an agent to an object/obstacle, it's easier or harder to execute.\\u201d\\n\\n\\u2014 All agents have the same weights so they would learn the skill at the same time. However, we\\u2019ve run some experiments where each agent has a different policy and we did not notice any significant differences to the shared weight case. 
Using shared weights is simpler to implement and cheaper to train, which is why we use them for all of our experiments. Anecdotally, agents\\u2019 seems to learn the skills for easier cases in the environment first; for instance we notice they often learn to construct a 1 block barricade using existing walls before they learn to construct a 3 block fort in the center of the room.\"}",
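The concatenate / residual self-attention / masked average-pool pipeline described in this response can be sketched compactly. The code below is our own minimal illustration under assumed shapes, not the authors' implementation; it assumes at least one entity (the agent's own observation) is always visible.

```python
# Sketch of an entity-centric encoder: per-entity embeddings (B, N, d) pass
# through residual self-attention with a visibility mask, then a masked
# average pool yields a fixed-size (B, d) vector regardless of entity count.
import torch
import torch.nn as nn

class EntityEncoder(nn.Module):
    def __init__(self, d=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, entities, visible):
        # entities: (B, N, d); visible: (B, N) bool, True = perceivable.
        pad = ~visible  # key_padding_mask: True entries are ignored.
        a, _ = self.attn(entities, entities, entities, key_padding_mask=pad)
        h = entities + a                  # residual self-attention
        h = h + self.ff(h)                # residual feed-forward
        h = h * visible.unsqueeze(-1)     # zero out non-visible entities
        denom = visible.sum(1, keepdim=True).clamp(min=1)
        return h.sum(1) / denom           # masked average pool -> (B, d)
```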
"{\"title\": \"Response to Official Review #2 (Part 1)\", \"comment\": \"Thank you for the very detailed review and constructive criticisms!\\n\\n\\u2014 \\u201cThe majority of the paper presents essentially a case study of what happened during a single seed of policy training...\\u201d\\n\\nGreat point, we will update Figures 1 and 3 to be the average across the 3 seeds we show in the appendix. We found very little seed dependence throughout the project, which is likely why we made this oversight.\\n\\n\\u2014 \\u201cThe contributions section is overselling the work: (1) states that autocurricula lead to changes in agent strategy - Maybe I'm mistaken here but that sounds like a tautology. In other words, \\\"a self-generated sequence of challenges\\\" (\\\"Autocurricula\\\", according to [Leibo et al., 2019][1]) lead to changes in strategy.\\u201d\\n\\nIn this sentence we were trying to place emphasis on \\u201cdistinct and compounding phase shifts\\u201d \\u2014 we agree with you that autocurricula by definition are causing changes in strategy, but there is no guarantee that they are distinct shifts (they could just as easily be small changes in strategy). Distinct shifts make it easier to see the effects of an autocurriculum, as small shifts can be hard to detect or analyse. We\\u2019ve changed this clause to \\u201cclear evidence that multi-agent self-play can lead to emergent autocurricula with many distinct and compounding phase shifts in agent strategy\\u201d.\\n\\n\\u2014 \\u201cAnd (3) advertises \\\"a proposed framework for evaluating agents in open-ended environments\\\" and also \\\"a suite of targeted intelligence tests for our domain\\\". The former of those two is either not in the paper or you mean your section \\\"6.2 Transfer and Fine-Tuning as Evaluation\\\", which isn't novel (see e.g. [Alain & Bengio, 2016][2])\\u201d\\n\\nAfter re-reviewing this sentence we agree with you in that it was misleading; we did not intend claim transfer as our idea but rather that we would like to use transfer to evaluate skill progression in open-ended environments. The reason we think it is a contribution is that in most MARL settings, progress is evaluated through play against humans or through metrics like ELO against past versions or other populations. We will modify it to \\u201ca proposal to use transfer as a framework for evaluating agents in open-ended environments...\\u201d.\\n\\n\\u2014 \\u201cYour acknowledgments should be anonymized until publication. Otherwise, reviewers might draw conclusions which group published this work, thus violating the double-blind review procedure.\\u201d\\n\\nThank you for pointing this out!\\n\\n\\u2014 \\u201c\\\"evidence that ... competition may scale better with increasing environment complexity\\\" - that's only shown in the appendix\\u201d\\n\\nWe believe Figure 5 is also evidence for this, as you increase the observation space complexity, meaningful interaction with objects goes down when you use intrinsic motivation methods.\\n\\n\\u2014 \\u201cYou mention TD-Gammon as a game, but I think it's an algorithm for the game Backgammon, similarly to how \\\"Go\\\" is the game and \\\"AlphaGo\\\" is an algorithm for playing.\\u201d\\n\\nThank you for catching this!\\n\\n\\u2014 \\u201cArena boundaries: What's the penalty and what's \\\" too far outside the play area\\\"?\\u201d\\n\\nGreat point. We will update the paper with more clear language around this. 
We give a -10 reward if the agents go outside an 18 meter square (which is 9 times the area of the quadrant game shown in the appendix).\\n\\n\\u2014 \\u201cPolicy network and fusion are underspecified: How do you deal with the varying number of agents, boxes, obstacles? Do you just set the x/v of the missing pieces to zero or is the observation actually of a different shape in case there are more/fewer objects or agents? How's the embedding done that is depicted in Figure 2? Also, I didn't see the embedding being mentioned in the text - any reason for that?\\u201d\\n\\nWe have more detail in appendix section B.7, which answers some of your questions, but we will move some of these details to the main text/caption of Figure 2 and add more clarification. The architecture is an attention and pooling based architecture so it naturally deals with varying numbers of objects. We mask out anything not visible to the agent in the attention and pooling operations so that they do not receive privileged information. The embedding weights are shared within object type, e.g. all box entities pass through the same shared embedding function. We currently show in figure 2 that the observations pass through fully connected layers to create these embeddings, but we\\u2019ll add the comment about shared weights to the caption.\\n\\n\\u2014 \\u201cWhy does the agent's embedding say \\\"1\\\"? Why do the other agents' embeddings have a \\\"-1\\\" at the end of the orange box and the others don't?\\u201d\\n\\nThe agents have an ego-centric architecture, so that \\u201c1\\u201d shows that that entry is the agent\\u2019s observation of itself (the agent itself is just 1 entity as opposed to boxes or other agents which are many entities). There are (# agents - 1) other agents (from the view of any single agent). We\\u2019ll add this clarification to the figure caption.\"}",
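The per-type shared embeddings described in this response (all boxes through one embedding function, all ramps through another, and so on) can be sketched as below. This is our own illustration with assumed feature dimensions and names, not the paper's code; it is what lets the network handle a varying number of objects of each type.

```python
# Sketch of per-type shared embeddings: each object type has one embedding
# layer applied to all of its instances, so the entity count can vary freely.
import torch
import torch.nn as nn

class TypedEmbedder(nn.Module):
    def __init__(self, feat_dims, d=64):
        super().__init__()
        # feat_dims, e.g. {"box": 9, "ramp": 9, "agent": 6} (hypothetical sizes)
        self.emb = nn.ModuleDict({k: nn.Linear(n, d) for k, n in feat_dims.items()})

    def forward(self, obs):
        # obs: dict mapping type -> (B, N_type, feat) tensor; N_type may vary.
        parts = [self.emb[k](v) for k, v in obs.items()]
        return torch.cat(parts, dim=1)  # (B, total_entities, d) entity tensor
```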
"{\"title\": \"Response to Official Review #1\", \"comment\": \"Thank you for your review and constructive criticisms! We\\u2019ll try to address each piece of criticism in turn.\\n\\n\\u2014 \\u201cThe main point of the paper is empirical RL at scale\\u201d\\n\\nThe main point we hope to convey is that large-scale multi-agent reinforcement learning (MARL) can lead to self-supervised autocurricula in which agents learn successively more complex human-relevant skills such as construction and tool use. We absolutely agree that there have been many amazing previous results from MARL at scale, and we acknowledge many of the works you mention and more in our introduction and related work sections. We believe our work differs from these in that our environment is built from very simple components in a physically grounded simulator, making it extremely extensible. It is much more clear how one could add to or modify the hide-and-seek environment to include more human-relevant components than it is how one could modify games like Go, Dota, or Starcraft. \\n\\n\\u2014 \\u201cThere has also been work on object-level RL \\u2026 the observation that RL agents learn human-interpretable uses of objects does not seem surprising.\\u201d\\n\\nWe agree that there has been much work on object-level RL. We didn\\u2019t advertise this as a novel portion of our work, and we\\u2019ve already included many citations that use object-level architectures and attention at the end of Section 4. We also acknowledge that there have been prior works where RL learns human-interpretable uses of objects, which is why we include a paragraph in Section 2 on prior work in tool-use; however, our work can be distinguished from these and the work you cite in that we provide no explicit signal for interacting with objects; the pressure to interact with the objects is solely a result of multi-agent competition. \\n\\n\\u2014 \\u201cThe paper also does not give new insights in how to make large-scale RL work\\u201d\\n\\nThis paper was not on how to make large-scale RL work, but rather on showing the power of current large-scale RL algorithms in a new setting that is more physically grounded and human-relevant than previous settings like DotA, Starcraft, and Go. The main argument of the paper is that multi-agent autocurricula can lead to agents learning many human-relevant skills like tool-use and construction; the fact that we required no new significant algorithmic modifications actually strengthens this point in our opinion, as the results can\\u2019t be confused as a pathology of a new specific algorithm. That being said, we agree that it is a great direction for future research to incorporate methods that can learn faster or better in this environment.\\n\\n\\u2014 \\u201cThe paper also does not introduce new concrete evaluation metrics that can apply to other tasks / RL problems...\\u201d\\n\\nIt\\u2019s very hard to create transfer tasks that are valid across domains. However, we hope that the tasks we proposed can be used as transfer metrics for any future research within our domain (both of which we will open source).\\n\\n\\u2014 \\u201cThere is one actor model, all agents share weights\\u201d\\n\\nUsing shared weights, or at least some portion of training data coming from self-play, is very common (AlphaGo, DotA, Alphastar, Capture-the-Flag, NeuralMMO, etc.), and it doesn\\u2019t alter the multi-agent optimization objective. 
Each agent still takes a greedy gradient and has its own observations and memory state so that at execution they use no privileged information. Shared weights does not mean uni-brain (one brain many actions), which would indeed reduce this to a single agent problem. That being said, we\\u2019ve run the hide-and-seek experiment with separate weights for each agent and as expected have seen no difference in learned strategy.\\n\\n\\u2014 \\u201call agents use a central value function that can see the entire state. This makes the setting basically a single-agent problem and is far simpler in the multi-agent assumptions from other decentralized multi-agent work\\u201d\\n\\nThis is a commonly used method to reduce policy gradient variance in partially observed settings without letting agents cheat at execution time both for MARL (MADDPG, Counterfactual RL, AlphaStar) and also single agent RL (Dactyl, Asymmetric Actor Critic). We ablate this choice in the appendix and find that it is important at the given scale of compute but agents still learn without it.\\n\\nAs for other MARL algorithms, we cite both of the works you mention in our paper already, and it is an excellent line of future research to incorporate methods like these into setups such as hide-and-seek to see if they bring benefit to learning. However, we don\\u2019t think algorithmic simplicity is a fault of our work but rather a strength. We show that with only standard simple algorithms, multi-agent autocurricula can lead to human-relevant skills like construction and tool-use in physically grounded environments, which we believe provides a good baseline for future algorithmic research.\"}",
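The centralised-critic arrangement debated above (decentralised execution, centralised training) has a standard schematic form. The sketch below is our own generic illustration, not the authors' training code: one shared policy sees only each agent's own observation, while a value network used purely during training sees the full state.

```python
# Schematic CTDE sketch: shared decentralised policy + privileged critic.
import torch
import torch.nn as nn

class SharedPolicy(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, own_obs):  # (B, obs_dim): each agent's own observation
        return torch.distributions.Categorical(logits=self.net(own_obs))

class CentralCritic(nn.Module):
    def __init__(self, state_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, full_state):  # privileged full state, training only
        return self.net(full_state).squeeze(-1)
```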
"{\"title\": \"Main point of the paper is not RL at scale\", \"comment\": \"Disclaimer: I am neither an author nor in anyway related to OpenAI.\\n\\nI believe that the main point of this paper is NOT to demonstrate RL at scale, though, as everyone has noticed that work done by OpenAI mostly requires stupendous amount of compute power (RAPID framework, 128000 cpus) which they have also used here. After reading this paper several times and being a researcher in MARL myself, I believe that judging this paper just on the basis of scale is entirely unfair. The main idea of this paper is the evolution of complex strategies and emergence of auto-curriculum when agents face evolving competition. \\n\\nThe reviewer has mentioned \\n\\\"There is one actor model, all agents share weights. Hence this is self-play: hiders and seekers use the same agent model. Also, all agents use a central value function that can see the entire state (decentralized execution, centralized learning). This makes the setting basically a single-agent problem, with the only decentralized aspect being each actor model only receiving its own observation.\\\"\\nThis argument might have carried a lot of weight (pun intended) when the goal of this work is to propose a new SOTA MARL algorithm but does the architecture used here matters? Probably not, I reckon that this architecture probably wont even work for any other standard MARL task.\\n\\nSecondly \\\"Note that a large body of multi-agent RL work in fact uses agents that do not share weights, etc.\\\" Isn't the citation (Foerster 2018) mentioned at end actually share parameters? The paper mentions \\\" However, we still assume agents have access to opponents\\u2019 policy parameters in policy gradient-based LOLA. \\\"\\n\\nI would not go on explaining what are the intentions of this paper but I can safely say that reviewer has completely missed the point of the paper and just evaluated it on the basis the technical aspects of the paper which are irrelevant for this work.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"1. Summary\\n\\nThe authors report on an empirical study of emergent behavior of multiple RL agents learning to play hide-and-seek (a sparse reward task). The main point of this paper is that RL agents learning at scale (large number of samples, batch-size 64000). can learn to solve tasks with strategies that are human-interpretable (e.g., using ramps, boxes). Scale also requires various simplifications (e.g., keeping the learning setup as close as possible to a single-agent problem as possible).\\n\\nAgents are grouped in 2 teams (seekers, hiders). Each agent receives a team reward, e.g., it can be punished for events that it did not participate in, e.g., if a team-mate is seen by an opponent. If hiders are hidden, seekers also automatically see reward. The first 40% of the episode there is no reward to let hiders hide.\\n\\nThere is one actor model, all agents share weights. Hence this is self-play: hiders and seekers use the same agent model. Also, all agents use a central value function that can see the entire state (decentralized execution, centralized learning). This makes the setting basically a single-agent problem, with the only decentralized aspect being each actor model only receiving its own observation. Note that a large body of multi-agent RL work in fact uses agents that do not share weights, etc.\", \"other_features_described\": \"- Auto-curricula: e.g. agents find new strategies (using ramps, boxes) that other agents have to counteract.\\n- Human-relevant skills: They report that the agent model learns multiple ways to interact with (objects in) the environment that are semantically interesting (resembles something humans might do).\\n- Authors compare with policies learning via intrinsic motivation.\\n- Evaluation through transfer learning shows some benefit of transfer of hide-seek agents to auxiliary tasks. However, it is not so clear how this evaluation informs future work on transfer learning (e.g., how would you pick evaluation tasks for a given train-task?) \\n\\n1. Decision (accept or reject) with one or two key reasons for this choice.\\n\\nReject.\\n\\nThe main point of the paper is empirical RL at scale. Although the learned behaviors are human-interpretable, this does not seem surprising given the fact that in many (large-scale) RL applications (Atari games, Go, DotA 2, Starcraft), it has been observed that RL agents can learn to manipulate and use their environment (which includes other agents!) in unexpected ways / find creative ways to exploit the reward function (see e.g. demos in https://www.alexirpan.com/2018/02/14/rl-hard.html). There has also been work on object-level RL [Agnew, Domingos 2018], which involves agents interacting with objects in the environment. Compared to this, the observation that RL agents learn human-interpretable uses of objects does not seem surprising.\\n\\nThe paper also does not give new insights in how to make large-scale RL ``'work'. 
For instance, there are no significant differences in algorithm / model structure from DotA / Starcraft agents that can inform future large-scale experiments.\\n\\nThe paper also does not introduce new concrete evaluation metrics that can apply to other tasks / RL problems, skill detection / segmentation methods to learn the structure of auto-curricula. Furthermore, the setup is very close to a single-agent problem (see above), and is far simpler in the multi-agent assumptions from other decentralized multi-agent work (Foerster 2018, Jacques 2019, etc).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Authors in introduce a new competitive/cooperative physics-based environment in which different teams of agents compete in a visual concealment and search task with visibility-based team-based rewards (although There are no explicit incentives for agents to interact with objects in the environment). They show that, complex behaviour emerge as the episode progresses and agents are able to learn 6 emergent skills/(counter-)strategies (including tool use), where agents intentionally change their environment to suit their needs. Agents trained using self-play\\n\\nIn my opinion, this is an excellent paper which main contribution is to provide experimental evidence that relevant and complex skills and strategies can emerge from multi-agent RL competing scenarios.\", \"minor_comments\": [\"Hide&seek rules and safety issues: is it not supposed that hiders and the seekers could not get together (i.e., hiders cannot push seekers or as we can see in some videos)? Furthermore, it is surprising (one would say worrying) that hiders identified the barriers as an impediment to the seeker (not only as a way to hide). I wouldn\\u2019t say that this is a \\u201c human-relevant strategies and skills \\u201c as the authors claim. Hider agents even double walled seekers!\", \"Have the authors thought about joining the Animal-AI Olympics (http://animalaiolympics.com/) competition? It would be a great opportunity to to test the skills of your agents in a further general testing scenario. They provide an arena (test-bed) which contains 300 different intelligent tests for testing the cognitive abilities of RL agents (https://www.mdcrosby.com/blog/animalaiprizes1.html) which have to interact with the environment.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"# Review ICLR20, Emergent Tool Use...\\n\\nThis review is for the originally uploaded version of this article. Comments from other reviewers and revisions have deliberately not been taken into account. After publishing this review, this reviewer will participate in the forum discussion and help the authors improve the paper.\\n\\nI apologize in advance for being reviewer 2.\\n\\n## Overall\\n\\n**Summary**\\n\\nThe article introduces a new multi-agent physics environment called \\\"hide-and-seek\\\". The authors trained agents in this environment and studied the emergence of and changes in strategies. The authors also study the performance of these same agents in new \\\"targeted intelligence tests\\\" compared to training from scratch and compared to agents trained with curiosity.\\n\\n**Overall Opinion**\\n\\nI think the environment is very appealing and the paper is overall well-structured and demonstrates novel work. Therefore I'd recommend this paper to be accepted. That being said, there are glaring issues with some of the writing that need to be addressed before I think this work conforms to the standards of ICLR. However, if these issues are addressed, I have no issue increasing my review score.\", \"main_problems\": [\"The majority of the paper presents essentially a case study of what happened during a single seed of policy training. For RL literature that's very uncommon and I think it's consensus that DRL is very sensitive to random seeds. I know that you do have additional seeds in the appendix, but why didn't you mention those in the main body of the paper? You seem to have found some robustness against multiple seeds, so why not show it? And also the fact that Figure 1 & 3 only apply to 1 seed is not mentioned. I think this is easy enough to fix - I suggest since you're already at 10 pages, to just bring in the additional seeds from the appendix and average over their performance in Fig.1&3.\", \"The contributions section is overselling the work: (1) states that autocurricula lead to changes in agent strategy - Maybe I'm mistaken here but that sounds like a tautology. In other words, \\\"a self-generated sequence of challenges\\\" (\\\"Autocurricula\\\", according to [Leibo et al., 2019][1]) lead to changes in strategy. And (3) advertises \\\"a proposed framework for evaluating agents in open-ended environments\\\" and also \\\"a suite of targeted intelligence tests for our domain\\\". The former of those two is either not in the paper or you mean your section \\\"6.2 Transfer and Fine-Tuning as Evaluation\\\", which isn't novel (see e.g. [Alain & Bengio, 2016][2])\", \"Your acknowledgments should be anonymized until publication. Otherwise, reviewers might draw conclusions which group published this work, thus violating the double-blind review procedure.\", \"[1]: https://arxiv.org/pdf/1903.00742.pdf\", \"[2]: https://arxiv.org/pdf/1610.01644.pdf\", \"Like I mentioned above, I think these are all easy to address, which should allow acceptance of this work. Here are some additional questions, comments, and nitpicks:\", \"## Specific comments and questions\", \"### Abstract\", \"\\\"evidence that ... 
competition may scale better with increasing environment complexity\\\" - that's only shown in the appendix\", \"### Intro\", \"You mention TD-Gammon as a game, but I think it's an algorithm for the game Backgammon, similarly to how \\\"Go\\\" is the game and \\\"AlphaGo\\\" is an algorithm for playing.\", \"### Rel. Work\", \"all good\", \"### Hide And Seek\", \"Arena boundaries: What's the penalty and what's \\\" too far outside the play area\\\"? And in all depictions, it looks like the geometry of the arena is elevated around the edges and the agents don't have a jump action, so how would they ever go out of borders? After watching the videos: Apparently, the jagged-looking arena boundary in the videos is purely cosmetic and agents can still access that space. This is unclear from just the paper and the renderings in Figure 1.\", \"### Policy Optimization\", \"Policy network and fusion are underspecified: How do you deal with the varying number of agents, boxes, obstacles? Do you just set the x/v of the missing pieces to zero or is the observation actually of a different shape in case there are more/fewer objects or agents? How's the embedding done that is depicted in Figure 2? Also, I didn't see the embedding being mentioned in the text - any reason for that?\", \"Figure 2 - This diagram is visually appealing but confusing and needs to be improved. Why does the agent's embedding say \\\"1\\\"? Why do the other agents' embeddings have a \\\"-1\\\" at the end of the orange box and the others don't? Is the agent's embedding concatenated with the other embeddings? If so, why and how (concat, sum, multiply, conditional batch norm, etc.)? In the center and on the right you use a blue block to indicate the agent's embedding and then at the bottom right you seem to use it as a network component or something (between the \\\"LSTM\\\")? If you're trying to signal that this is the agent's perception at different stages in the network, I'd use a different color to separate it from the agent's lidar and pos/vel. You don't mention that \\\"x,v\\\" stands for \\\"position, velocity\\\".\", \"### Auto-curriculum and Emergent Behavior\", \"Figure 3: \\\"environment-specific\\\" (add dash). Draw skill development boundaries like in Fig.1.\", \"How exactly does the \\\"surfing\\\" work? The seekers step (not jump, right, since there is no jumping?) onto the boxes and then what? The momentum propels the box forward? Do other seekers push the box? Their movement on top of the box somehow moves the box (this seems to be the case judging by the videos but this is the least physically plausible)? This is a super interesting adaptation but I'd suspect the physics simulation to have a bug/glitch that's being exploited here.\", \"You mention in footnote 3 that the developmental stage and changes in reward aren't necessarily correlated. The same seems to be true for the metrics in Fig.3, which raises the question how did you come up with those boundaries for the different developmental stages in Fig.1? Did someone look at rollouts from the trained policy every couple of million steps? And do all agents learn new skills at the same time or is there a delay? From my understanding, they are all using the same policy and critic networks but maybe dependent on the proximity of an agent to an object/obstacle, it's easier or harder to execute.\", \"### Evaluation\", \"clear and well-written, slightly too much content in the appendix and not enough in the main paper. 
Weird appendix numbering - A.6 appears in the main paper pages after A.7\", \"### Discussion and Future Work\", \"all good\", \"### Appendix\", \"I appreciate the TOC. I did not look into Appendix B-D because it's another 10 pages on top of the 10 pages of the article.\", \"All in all an interesting work. Good luck with the rebuttal/discussion.\"]}",
"{\"comment\": \"Many thanks for the reply\", \"title\": \"Re: Great task and fascinating results, but a question about object permanence claims\"}",
"{\"comment\": \"Thank you for the praise and questions!\\n\\nAccording to our understanding, object permanence is typically defined as: the understanding that objects continue to exist even when they cannot be perceived (Piaget J., The construction of reality in the child. Basic Books; New York: 1954.). In the proposed task, the objects are only visible for about 20% of the episode, meaning that for the remaining 80% the agent cannot sense the objects in any way and must make the prediction based only on its memory (in our case the memory unit is an LSTM) of where it saw objects going. The agent must remember how many objects went to one side or the other, so in a sense it is required to understand that objects continue to exist in that area after they have been obscured. When training on this task, we keep all of the original policy weights, e.g. embedding weights and LSTM weights, fixed such that we are only evaluating the existing representation the agent has after training in hide-and-seek or with intrinsic motivation (the task would be trivial without keeping these weights fixed).\\n\\nIn our policy architecture, all \\u201centities\\u201d (meaning objects and other agents) are pooled together before going into the LSTM. You are correct that before this, there are an equal number of embedding vectors as there are entities in the environment, but this information is lost after masked pooling. For instance, if there are no visible entities at a given timestep, then the output of that masked pooling operation is a vector of 0\\u2019s with dimension independent of the number of entities in the game, meaning there is no way for the agent to know how many entities exist past what\\u2019s in its memory. We hope this clarifies any confusions on the policy architecture from the current text, and we will try to give more clarification in the next version of the paper.\", \"title\": \"Re: Great task and fascinating results, but a question about object permanence claims\"}",
"{\"comment\": \"Very nice paper. I think the hide-and-seek task is excellent, as it gets at some fundamental common sense concepts (object persistence, obstruction, etc). (It would be especially compelling if the task were solved from pixels.) The sequence of emergent strategies is fascinating. And obviously this is the main result, so the following should be taken in that context.\\n\\nI have a question re the claim that the object-counting transfer task you propose in Section 6.2 really provides evidence that \\\"the agents have a sense of object permanence\\\". Couldn't the classifier you add on to the pre-trained agent simply count the number of leftward (as opposed to rightward) movements of the boxes? What different does it make, in this task, that the boxes eventually become obscured?\\n\\nMore generally, is it not the case that the agents have an architectural prior that builds in exactly how many objects exist? If I understand Figure 2 correctly, the embedding vector that encodes the objects has a dimension whose length is precisely the number of objects that exist, even when parts of it are masked out. So the permanence of all the objects is, in a sense, built in .\", \"title\": \"Great task and fascinating results, but a question about object permanence claims\"}"
]
} |
S1e3g1rtwB | The fairness-accuracy landscape of neural classifiers | [
"Susan Wei",
"Marc Niethammer"
] | That machine learning algorithms can demonstrate bias is well-documented by now. This work confronts the challenge of bias mitigation in feedforward fully-connected neural nets from the lens of causal inference and multiobjective optimisation. Regarding the former, a new causal notion of fairness is introduced that is particularly suited to giving a nuanced treatment of datasets collected under unfair practices. In particular, special attention is paid to subjects whose covariates could appear with substantial probability in either value of the sensitive attribute. Next, recognising that fairness and accuracy are competing objectives, the proposed methodology uses techniques from multiobjective optimisation to ascertain the fairness-accuracy landscape of a neural net classifier. Experimental results suggest that the proposed method produces neural net classifiers that distribute evenly across the Pareto front of the fairness-accuracy space and is more efficient at finding non-dominated points than an adversarial approach. | [
"landscape",
"neural classifiers",
"multiobjective optimisation",
"fairness",
"machine",
"algorithms",
"bias",
"work",
"challenge",
"bias mitigation"
] | Reject | https://openreview.net/pdf?id=S1e3g1rtwB | https://openreview.net/forum?id=S1e3g1rtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Rp7TWiM25",
"Byeg222isr",
"rkxAr3niiS",
"rJxlRjhsoH",
"rkg1g92ssr",
"rJl6sF3osr",
"BJexY-Q4cr",
"rJgwE8_xcS",
"r1xy_56sFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725492,
1573797032408,
1573796933626,
1573796808256,
1573796326730,
1573796261413,
1572249975986,
1572009518732,
1571703398893
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1522/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1522/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1522/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This manuscript investigates and characterizes the tradeoff between fairness and accuracy in neural network models. The primary empirical contribution is to investigate this tradeoff for a variety of datasets.\\n\\nThe reviewers and AC agree that the problem studied is timely and interesting. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty of the results. IN particular, it is not clear that the idea of a fairness/performance tradeoff is a new one. In reviews and discussion, the reviewers also noted issues with clarity of the presentation. In the opinion of the AC, the manuscript is not appropriate for publication in its current state.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3 (Part 3 of 3)\", \"comment\": \"In response to \\\"It would be important to explain the choice of the particular causal estimand, the choice of the hidden layer to put the estimand in, to explore the choice of the objective, and so on. Currently, none of these choices/design aspects are being investigated.\\\"\\n\\nIn the revision we have carefully discussed each of these issues. Regarding the choice of the particular causal estimand, the ATO has the nice interpretation of focusing on subjects with the most overlap in observed covariates. There is also an important practical reason to adopt it as our causal estimand of choice. The overlap weights smoothly down-weigh subjects in the tails of the propensity score distribution, thereby mitigating the common problem of extreme propensity scores. \\n\\nAs for the choice of the hidden layer to put the causal constraint in, we experimented with penalising just one of the internal layers versus penalising all internal layers. The experimental results for the latter are placed in the Appendix. We see that although penalising all layers has the benefit of allowing downstream transfer learning tasks to be fair, the training process encounters more convergence issues as can be seen from Figure 6 in the appendix. We are investigating an approach where we penalise layer by layer so that the training has a better chance of converging. \\n\\nFinally, regarding the choice of the objective, we suppose the reviewer means the choices of the vector objective function in Equation 1? In that case, we think it is important to look at the vector objective because both accuracy and fairness are desirable in the learning algorithm. Our particular choice of the expected cross-entropy loss for measuring fairness is common in classification settings. Our choice of using the ATO for fairness is again because we think a causal estimand could reveal insights that measures like conditional parity cannot. Furthermore, we choose ATO to be the causal estimand because it has a nice interpretation and does not suffer from extreme propensity scores.\"}",
"{\"title\": \"Response to Review #3 (Part 2 of 3)\", \"comment\": \"Next, we respond to the rest of the comments in Review #3, point-by-point.\\n\\n- What is the reason for focusing on 'neural classifiers'?...\\n\\nIndeed, the fairness-accuracy Pareto front can also be estimated for other classifiers. We chose to focus on neural networks because they represent the state-of-the-art in classification approaches these days. Also, at the outset, it wasn\\u2019t immediately obvious that we could use multiobjective optimisation techniques to efficiently find non-dominanted points of a neural network. While our approach can also be useful for non-neural network classifiers we show here that the proposed approach easily integrates into a neural network setup and in particular allows removing the influence of sensitive attributes on all layers of a neural network. \\n\\n- In the Introduction, the authors could cite the works of Amartya Sea, etc., on fairness...\\n\\nWe regret not being more thorough in our references. We now cite the work of Sea on fairness in our revision.\\n\\n- What exactly is a \\\"sensitive attribute\\\"? If we don't want to bias our predictions, then why include it in the analysis?\\n\\nThe sensitive attribute is the attribute we want the algorithm to be unbiased with regards to, as much as possible. We need access to it during training so that we can achieve debiasing. However, importantly, at deployment time, we do not need access to the sensitive attribute to make a fair classification decision. Note that it is well understood that simply removing the sensitive attribute from the entire training process does not promote a fair classifier because there may be other variables highly correlated with the sensitive attribute that the algorithm can still leverage. \\n\\n- It is unclear what is new and what is related work in page 3.\\n\\nThe top of page 3 describes the Pareto front and scalarisation schemes for estimating it which is based on well established concepts in multiobjective optimisation. The bottom of page 3 describes the estimation of the Pareto front specific to our supervised learning setup which is new. \\n\\n- Sec. 4: The claim that \\\"causal inference is about situations where manipulation is impossible\\\"...\\n\\nWe apologise for the confusing way in which we stated this. We have removed the sentence. We were trying to say that in an ideal world, we could intervene on the sensitive attribute by manipulating their values in an experiment and recording the outcomes. However, we usually only have access to observational data. Fortunately, causal inference tools can be used to glean causal effects from observational data.\\n\\n- As mentioned above, why this particular estimand leads to more \\\"fairness\\\" is never explained.\\n\\nBecause we have defined fairness to mean the sensitive attribute has no causal effect on the classification, this means we want the ATO causal estimand to be low. \\n\\n- Do we need a square or abs value in Eq (5)?\\n\\nYes, indeed. Thank you for pointing out this typo which we\\u2019ve fixed in the revision.\\n\\n- The experimental section is weak and does not illustrate the claims well. \\n\\nWe acknowledge the limitations of our current experimental section. For the final version of the paper, we will apply our proposed methodology on five other benchmarking datasets provided in the AI Fairness 360 toolkit. 
\\n\\nIn the meantime, we added some additional visualisation in the experimental section which shows the visual effect of dialling $\\\\lambda$ between 0 and 1. Namely, for several values of the penalty parameter $\\\\lambda$, we plot the distribution of the final prediction broken down by true class membership $Y$ and sensitive attribute $A$. In addition to reporting the ATO measure of fairness, we also indicate other non-causal fairness metrics including Equalised Odds, Equal Opportunity, and Demographic Parity.\"}",
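The λ-dialling scheme referenced in this response corresponds to a standard linear scalarisation sweep. The following is our own schematic sketch (the names `make_model`, `accuracy_loss`, `fairness_loss`, and `loader` are assumed placeholders, not the authors' API): training one network per λ on the combined objective traces out candidate points on the fairness-accuracy front.

```python
# Schematic linear-scalarisation sweep over the trade-off parameter lambda.
import numpy as np
import torch

def train_for_lambda(make_model, accuracy_loss, fairness_loss, loader,
                     lam, epochs=10, lr=1e-3):
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y, a in loader:  # features, labels, sensitive attribute
            opt.zero_grad()
            loss = (1 - lam) * accuracy_loss(model(x), y) \
                   + lam * fairness_loss(model, x, a)
            loss.backward()
            opt.step()
    return model

# One model per lambda value on a grid, e.g.:
# front = [train_for_lambda(..., lam=v) for v in np.linspace(0.0, 1.0, 11)]
```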
"{\"title\": \"Response to Review #3 (Part 1 of 3)\", \"comment\": \"The reviewer is correct that we failed to make the assumptions regarding the causal estimand explicit. These necessary assumptions are now clearly stated in the revision:\\n- In adopting the potential outcome framework of Imbens and Rubin 2015, we assume the Stable Unit Treatment Value Assumption\\n- Under unconfoundedness, i.e.\\\\ $A$ is independent of $\\\\{h(0),h(1)\\\\}$ conditional on $X$, WATE is a class of causal estimands that includes the ATO as a special case\\n- In order for the ATO estimate to be consistent, we refer the reader to the set of regularity assumptions (called Assumption 1 to 5) in Hirano et. al 2003. A few of these conditions are regulated to the distribution of $X$ and distribution of $h(0)$ and $h(1)$. There is also a condition on the smoothness of the propensity score $e(x)$ which is even stricter than positivity.\\n\\nThe reviewer is correct that we should\\u2019ve done a better job discussing the subtlety involved when the treatment is actually an immutable characteristic. We have added a brief discussion in the revision that echos similar concerns raised in Kilbertus et. al 2017. Namely, an explicit distinction should be drawn between the sensitive attribute (for which interventions are often impossible in practice) and its proxies. For instance the immutable characteristic of race has proxies such as name, visual features, languages spoken at home that can be conceivably manipulated. \\n\\nNext, the positivity assumption can be checked in the sample, i.e. by checking whether there are observational units that are \\u201ctreated\\u201d ($A=1$) and \\u201cuntreated\\u201d ($A=0$) in each stratum of $X$. If we observe a stratum of $X$ in which there are only treated or only untreated, we need to ask ourselves if this is happening by pure chance due to sampling variability or this is happening because of some structural reason (units with covariates in this stratum are deterministically always \\u201ctreated\\u201d or always \\u201cuntreated\\u201d). The latter is very hard to deal with whereas the former is not, strictly speaking, a violation of the positivity assumption. But nonetheless scarcity of data in certain strata of $X$ does pose a practical issue in identifying the causal effect. This has been studied in the causal inference literature and we may implement some of the suggestions in [2] such as restriction of the data to those observational units who do not violate the positivity assumption or excluding certain covariates responsible for positivity violations. \\n\\nRegarding the specification of the propensity score model, we echo the viewpoint in Li. et al 2018 that for the purposes of estimating the ATO, a good propensity score model is one that leads to covariate balance in the sample, not one that allows us to make inferences about treatment assignment probabilities in the population. Thus it would seem that we can perhaps get away with a misspecified propensity score model as long as it achieves covariate balance in our sample. \\n\\nTo answer the reviewer's question about $U$, we want $U$, the causal effect of the sensitive attribute $A$, to be small because we don't want a sensitive attribute to have a causal effect on the outcome. The possibility of confounding by unobserved variables is of course a real concern; it is part of what makes causal inference such a challenging task. 
To really deal with this type of problem, domain experts and stakeholders have to be involved to think about how the data was gathered.\\n\\n[2] \\u201cDiagnosing and responding to violations in the positivity assumption\\u201d (Petersen et. al 2012)\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for their careful reading and feedback. We have combed through our original submission to fix imprecision in writing and notation, including the specific points raised above. (Actually, regarding the loss, this is not a typo but really what we mean\\u2026).\\n\\nWe have also added better explanation of why we penalise the average treatment effect for the overlap population (ATO) in the internal layers. Basically, we believe that the best safeguard against unfairness in a neural net classifier is to constrain the network to learn fair intermediate representations. This is because internal representations of neural networks are commonly assumed to contain useful information and may be subsequently employed in transfer learning. Therefore it would be important to constrain internal layers of the neural network to be fair as well. Our experimental results include a setup where all intermediate layers are penalised and a setup where only the next-to-last layer is penalised. The former makes the training more difficult although the estimated Pareto front is still reasonable. We will investigate in future work how to train this setup in a better way. Nonetheless in both setups it is interesting that only constraining intermediate representations to be fair is sufficient to obtain fairness on the final prediction. \\n\\nWe acknowledge the limitations of our current experimental section. We recently became aware of the AI Fairness 360 Tool, a Python package that includes a convenient interface to seven popular datasets in the fairness literature. In the original submission we analysed two of the datasets contained therein \\u2014 the Adult Census Income and the ProPublica Recidivism dataset. Unfortunately there is not enough time during this discussion phase to run our proposed methodology on the other five datasets provided in AI Fairness 360, but we will do this for the final version of the paper. \\n\\nIn the meantime, we were able to add some additional visualisation (Figures 2-4) in the experimental section which shows the visual effect of dialling $\\\\lambda$ between 0 and 1. Namely, for several values of the penalty parameter $\\\\lambda$, we plot the distribution of the final prediction broken down by true class membership $Y$ and sensitive attribute $A$. In addition to reporting the ATO measure of fairness, we also indicate other non-causal fairness metrics including Equalised Odds, Equal Opportunity, and Demographic Parity.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for the opportunity to clarify the paper\\u2019s contributions:\\n- This work is among the first in algorithmic fairness to focus on the fairness-accuracy tradeoff curve. Formulating the trade-off curve as a Pareto front estimation problem, we demonstrate that it is indeed possible to find a significant set of non-dominated points for a neural network, which is not immediately obvious given how difficult it is to train even a scalar objective.\\n- The generality of the proposed methodology framework allows end-users to supply their own fairness and accuracy measures.\\n- This work also investigates a new causal measure for the purpose of assessing algorithmic fairness based on the average treatment effect for the overlap population (ATO) proposed in Li et. al (2018) which can achieve covariate balance and does not suffer from extreme propensity scores.\\n- The proposed methodology can achieve fairness on the final prediction even though it only constrains intermediate representations of the neural network to be fair. This approach may have benefits for downstream transfer learning tasks. \\n\\nIn the original submission, we had discussed, arguably, the two most fundamental fairness concepts in the existing literature \\u2014 demographic parity and conditional parity. The latter envelops several existing fairness metrics, e.g. the concept of equalised odds introduced by Hardt et al. (2016) is an instance of conditional parity. Both concepts are based on the joint distribution of the classifier, the sensitive attribute $A$, the covariate $X$, and the outcome $Y$. This opens the door for using a wide variety of statistical tools to estimate these quantities. Unfortunately as documented by works such as Kilbertus et. al 2017, these approaches are purely observational in nature and cannot distinguish subtle scenarios in which the joint distributions are the same but there is clear unfairness.\\n\\nFor these reasons, we believe causal notion of fairness might provide fresh insights. Our idea is that when the dataset is itself collected under unfair practices, we must correct for the covariate imbalance before assessing fairness. We chose to employ the ATO proposed in Li et. al 2018 because it avoids the instability of weights resulting from extreme propensity scores. \\n\\nRegarding the reviewer's concern about run time, indeed an attempt at identifying the Pareto front can certainly be made by running fewer experiments, but because convergence issues are commonly encountered during the training of a neural network, the quality of the estimated front will likely suffer. \\n\\nWe understand the reviewer\\u2019s concern that the proposed method will be cumbersome to implement if a single iteration takes very long to train. Fortunately, there are more sophisticated methods for selecting the trade-off parameter ($\\\\lambda$) in the multi-objective optimisation literature such as the Normal Boundary Interactive method in Das and Dennis 1997. We have indicated in the paper that we plan to explore such techniques in future work so that a Pareto front can be accurately identified in a more efficient manner.\\u201d\\n\\nFinally, regarding the figure font size, we have fixed this issue in the revision.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"General:\\nThe paper proposed to use a causal fairness metric, then tries to identify the Pareto optimal front for the vectorized output, [accuracy, fairness]. While the proposed method makes sense, I am not sure what exactly their contribution is. It is kind of clear that Pareto optimal exists, and what they did is to run the experiments multiple times with multiple \\\\lambda values for the Chebyshev method and plot the Pareto optimal front.\", \"pro\": \"Ran multiple experiments and drew the Pareto optimal front for the considered dataset. \\n\\nCon & Question:\\nThe so-called causal fairness metric does not seem to be any more fundametal than the other proposed metrics. It seems like they worked with another metric. \\nAfter defining the fairness metric, everything else seems straightforward. Just use test (validation) set to estimate the accuracy & fairness, then plot the results on the 2d plane. \\nCan we identify the Pareto optimal front without running all 1500 experiments? What happens when running a model takes long to train? Then, the proposed method cannot be practical. \\nFigure fonts are very small and hard to see.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a novel joint optimisation framework that attempts to optimally trade-off between accuracy and fairness objectives, since in its general formal counterfactual fairness is at odds with classical accuracy objective. To this end, the authors propose optimising a Pareto front of the fairness-accuracy trade-off space and show that their resulting method outperforms an adversarial approach in finding non-dominated pairs.\\n\\nAs main contributions, the paper provides:\\n* A Pareto objective formulation of the accuracy fairness trade-off\\n* A new causal fairness objective based on the existing Weighted Average Treatment Effect (WATE) and Average Treatment Effect for the Overlap Population (ATO)\\n\\nOverall, I think the paper makes an interesting contribution to the field of fairness and that the resulting method seems quite attractive for a real-world practitioners. However, I found the writing / notation imprecise at times and the experimental section too small (lacking an extensive set of baselines, and only on two datasets). For these reasons, I give it a Weak Accept.\\n\\nSome feedback on notation / writing:\\n* Typo on page 2, the loss L should be defined on X x Y and not Y x Y\\n* In page 5, h is being used without being introduced first \\n* the justification for using ATO in the internal layers of the network is a bit insufficient\\n\\nIn terms of suggestions, I think the experimental section needs to be extended and that the various modelling choices need to be explored and/or be further justified.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to approximate the \\\"fairness-accuracy landscape\\\" of classifiers based on neural networks.\\nThe key idea is to set up a multi-dimensional objective, where one dimension is about prediction accuracy and another \\nabout fairness. The fairness component relies on a definition of fairness based on causal inference, relying on the \\nidea that a sensitive attribute should not causally affect model predictions.\\n\\nI found the causal idea intriguing, since it makes sense that we don't want a sensitive attribute to have a causal effect.\\nHowever, there may be several problems with this approach:\\n\\n1) For a causal estimate to be valid we need several assumptions. For example, we need A (the sensitive attribute) \\nto be independent of potential outcomes conditional on X --- the so-called \\\"unconfoundedness assumption\\\" in causal inference.\\nWe also need \\\"positivity\\\", i.e., that 0< P(A=1|X) <1. \\nThese assumptions are not discussed in the paper. Furthermore, the particular context of the paper, where the treatment is actually an immutable characteristic, makes such discussion much more subtle. \\nWhat will we do, for instance, if there are no A=1 in the sample when X = ...?\\n\\n\\n2) The authors seem to assume that the propensity score model is well specified. This can be tested, e.g., using [1].\\nWhat do we do when this fails? \\n\\n\\n3) Why do we want U to be small, i.e., why do we want the causal effect of A to be small, is never justified.\\nIn particular, its relation to \\\"fairness\\\" is never fleshed out, but just assumed to be so.\\nThis can be problematic when, say, we are missing certain important X that are important for A. \\nThen, there will be a measurable causal effect of A on h().\", \"some_other_problems\": [\"What is the reason for focusing on 'neural classifiers'? There is nothing specific in the method or analysis\", \"that relates to neural networks, except for the use of the causal estimand in a 'hidden layer'.\", \"In the Introduction, the authors could cite the works of Amartya Sea, etc., on fairness.\", \"Certainly the study of fairness problems did not start in 2016.\", \"What exactly is a \\\"sensitive attribute\\\"? If we don't want to bias our predictions, then why include it in the analysis?\", \"It is unclear what is new and what is related work in page 3.\", \"Sec. 4: The claim that \\\"causal inference is about situations where manipulation is impossible\\\" discards\", \"voluminous work in causal inference through randomized experiments. In fact, many scientists would\", \"agree that causal inference is impossible without manipulation.\", \"As mentioned above, why this particular estimand leads to more \\\"fairness\\\" is never explained.\", \"Do we need a square or abs value in Eq (5)?\", \"The experimental section is weak and does not illustrate the claims well.\", \"It would be important to explain the choice of the particular causal estimand, the choice of the hidden layer to put the estimand in, to explore the choice of the objective, and so on. 
Currently, none of these choices/design aspects are being investigated.\", \"[1] \\\"A specification test for the propensity score using its distribution conditional\", \"on participation\\\" (Shaikh et al, 2009)\"]}"
]
} |
Hke3gyHYwH | Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee | [
"Wei Hu",
"Zhiyuan Li",
"Dingli Yu"
] | Over-parameterized deep neural networks trained by simple first-order methods are known to be able to fit any labeling of data. Such over-fitting ability hinders generalization when mislabeled training examples are present. On the other hand, simple regularization methods like early-stopping can often achieve highly nontrivial performance on clean test data in these scenarios, a phenomenon not theoretically understood. This paper proposes and analyzes two simple and intuitive regularization methods: (i) regularization by the distance between the network parameters to initialization, and (ii) adding a trainable auxiliary variable to the network output for each training example. Theoretically, we prove that gradient descent training with either of these two methods leads to a generalization guarantee on the clean data distribution despite being trained using noisy labels. Our generalization analysis relies on the connection between wide neural network and neural tangent kernel (NTK). The generalization bound is independent of the network size, and is comparable to the bound one can get when there is no label noise. Experimental results verify the effectiveness of these methods on noisily labeled datasets. | [
"deep learning theory",
"regularization",
"noisy labels"
] | Accept (Poster) | https://openreview.net/pdf?id=Hke3gyHYwH | https://openreview.net/forum?id=Hke3gyHYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"aX7CipBXv7",
"BJerRXTjor",
"BkxSSXTsir",
"rJxYvGpojr",
"S1gq-N_pYH",
"S1xDkcCjtB",
"HJengQ5IYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725463,
1573798860691,
1573798716694,
1573798497172,
1571812354136,
1571707359431,
1571361524207
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1521/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1521/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1521/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies the effect of various regularization techniques for dealing with noisy labels. In particular the authors study various regularization techniques such as distance from initialization to mitigate this effect. The authors also provide theory in the NTK regime. All reviewers have positive assessment about the paper and think is clearly written with nice contributions but do raise some questions about novelty given that it mostly follows the normal NTK regime. I agree that the paper is nicely written and well-motivated. I do not think the theory developed here fully captures all the nuances of practical observations in this problem. In particular, with label noise this theory suggests that test performance is not dramatically affected by label noise when using regularization or early stopping where as in practice what has been observed (and even proven in some cases) is that the performance is completely unaffected with small label noise. I think this paper is a good addition to ICLR and therefore recommend acceptance but recommend the authors to more clearly articulate the above nuances and limitations of their theory in the final manuscript.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thank you for your valuable comments and for appreciating our work! Please see our response to each specific question below.\\n \\n--- Empirical comparison with other methods for noisy labels ---\\nWe compared our results with [Zhang and Sabuncu (2018)] which used the same ResNet-34 architecture, and we found that we can achieve better accuracy on noisy CIFAR-10. See Table 1 in the paper.\\n\\nIn general we found it a bit difficult to have a fair comparison between different methods because different papers may use different network architectures. For example, the reported numbers in [Han et al. (2018)] are worse than ours on CIFAR-10, but they were using a simple 9-layer CNN instead of ResNet-34 that we used. For this reason we only wanted to make comparison with papers using similar architectures. Also as you mentioned the primary advantages of our methods are simplicity and generalization guarantee.\\n\\n[Zhang and Sabuncu (2018)] Generalized cross entropy loss for training deep neural networks with noisy labels. In NeurIPS 2018.\\n[Han et al. (2018)] Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS 2018.\\n\\n--- Weight change v.s. kernel change ---\\nIn the NTK theory, the reason why the kernel doesn\\u2019t change much during training is because the weights in the network don\\u2019t change much. For a large width $m$, the relative change of weights $\\\\frac{\\\\|\\\\theta(t)-\\\\theta(0)\\\\|}{\\\\|\\\\theta(0)\\\\|}$ scales like $O(1/\\\\sqrt{m})$. In Figure 3, we see that the relative change of weights in a particular layer is roughly $2/577=0.003$ which is tiny, an indication that the network is very likely in the NTK regime. We agree that comparing the kernels before and after training would also be useful in verifying that the network is in the NTK regime, and we plan to add this experiment to the final version of the paper. Thanks for the suggestion!\\n \\n--- Experiment directly on infinitely wide networks ---\\nIt would indeed be interesting to perform experiments on the exact NTK for infinitely wide networks, and this is exactly the setting our theory applies to. On the empirical side this approach may not achieve impressive results because: (i) the exact NTK computation on the entire CIFAR-10 dataset for CNN with pooling is very expensive, and is much more expensive than the standard training of a finite network; (ii) the performance of these kernels has reasonable test accuracy but is still not quite as good as trained finite networks - the best figure for clean CIFAR-10 from [Arora et al. (2019)] is only around 77%. Nevertheless, we think this experiment definitely has theoretical values, and we will consider including it in the final version if we obtain sufficient computation resources.\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Thank you for your valuable comments and for appreciating our work! Please see our response to each specific question below.\\n\\n--- What if the network is not sufficiently wide, and the loss function is not L2? ---\\nOur theoretical results don\\u2019t apply to networks that are not in the NTK regime or to other loss functions. Nevertheless, our experiments show that the proposed regularization methods are still very effective in these scenarios. We believe that a very interesting direction of future work is to understand this theoretically.\\n\\n--- Auxiliary variables with data augmentation? ---\\nAuxiliary variables can indeed be used together with data augmentation. In our experiment on CIFAR-10, we used data augmentation and just used one auxiliary variable for each sample and its augmented samples. We will mention this clearly in the paper.\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thank you for your valuable comments and for appreciating our work! Please see our response to each specific question below.\\n\\n--- Why does the regularization parameter increase with the sample size? ---\\nThis is only due to a difference in normalization. In our definition the regularization is added to the sum of losses on all examples \\u2013 see Eqn. (3). If we average the losses over all examples, the regularization parameter in Eqn. (3) becomes $\\\\lambda^2 / n$ which decreases with the sample size $n$.\\n\\n--- Does Thm 5.2 apply to the noiseless case ($p=\\\\lambda=0$)? ---\\nThank you for pointing out this issue! We have updated Thm 5.2 so that it can cover the case $p=0$ and $\\\\lambda \\\\to 0$. Basically there should be an additional factor of $\\\\sqrt{p}$ in two terms in the bound. (Previously we simply upper bounded $p$ by $1$.) The last term still contains $\\\\log \\\\frac{n}{\\\\delta \\\\lambda}$ which is bad for small or zero $\\\\lambda$, but the $\\\\lambda$ there can actually be replaced with the minimum eigenvalue of the kernel matrix $k(X, X)$ so it is fine. Thm 5.3 can also be modified similarly.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes two regularization methods for learning on noisily labeled data: the first penalizes the distance w.r.t. Euclidean norm from an initial point and the second uses an additional auxiliary variable for each example to learn a noise. In the theoretical part, the paper shows that an original clean dataset can be learned from a noisily labeled dataset based on NTK-theory. Finally, the effectiveness of proposed regularization methods is well verified empirically for 2-layer NN, CNN, and ResNet on image classification tasks (MNIST, CIFAR-10).\", \"contributions\": [\"Propose two simple regularization techniques for learning from a noisily labeled dataset.\", \"Give generalization guarantees for these methods\"], \"clarity\": \"The paper is well organized and easy to read.\", \"quality\": \"The work is of good quality and is technically sound.\", \"significance\": \"Since proposed methods are in some sense related to the early-stopping for the (stochastic) gradient descent, the developed theory is useful in understanding the generalization ability of over-parameterized neural networks falling into NTK-regime. Although an additive noise setting for the regression problem is rather common in statistical learning theory, an artificially flipped label setting for the classification problem may be new except for a few recent studies [Li+(2019)]. A result (Theorem 5.1) for the regression problem with the squared loss is not so surprising because the generalization error of gradient descent in a high-dimensional space (e.g., over-parameterized NNs) or an RKHS (i.e., infinite-dimensional model) has been well studied and the generalization error is composed of the (constant) variance and the distance between the model output and the true label. Thus, existing results of generalization error analyses for the regression are directly applied to bound the prediction error for clean labels. However, I basically like this paper and I think this paper makes a certain contribution to understanding the effect of over-parameterization.\", \"a_few_questions\": [\"Usually, the regularization parameter goes to zero as the number of examples increases. Conversely, the regularization parameter in the proposed methods also increases. Could you explain why this difference happens?\", \"In classification tasks, the original problem setting is recovered by setting $p=lamba=0$. However, the generalization bound by Theorem 5.3 is still affected by $\\\\lambda$. Is this theorem tight?\", \"-----\"], \"update\": \"I thank the authors for the response. I have read the revised version of the paper and have confirmed an improvement of Theorem 5.2. I will keep my score. This paper is of good quality.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, based on the effectiveness of early stopping in the training of noisily labeled samples, the authors proposed two intuitive (and novel) regularization methods: (1) regularizing using distance to initialization (2) adding an auxiliary variable b_i for every input x_i during training. In terms of theory, the authors showed that in the NTK regime, both regularization methods trained with gradient descent are equivalent to kernel ridge regression. Moreover, the authors also provided a generalization bound of the solution on the clean data distribution when trained with noisy label, which was not addressed in previous research.\\n\\nOverall, the paper is very well-organized and well-written. The contribution of the paper is significant, and numerical results also vindicate the theory developed in the manuscript. I recommend accepting the paper.\", \"two_minor_questions_that_are_not_going_to_affect_my_rating\": \"1. The theory developed in the paper depends highly on NTK. What if the network is not sufficiently wide (which is usually the case in practice), and the loss function is not L2?\\n2. In the second method using the auxiliary variable, it seems that every training sample x_i needs a variable b_i. Is this going to cause any problem in practice if data augmentation is used?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the topic of learning with noisy labels, in particular, classification problem where the labels are randomly flipped with some probability. The main technical contributions of this paper are two folds: 1) proof of generalization bounds for kernel ridge regression solutions that depends on the clean labels only. 2) two regularization techniques that are shown to be equivalent to the kernel ridge regression when the neural networks approach the neural tangent kernel regime.\\n\\nNumerical experiments are performed to verify that the proposed methods indeed helps over the baseline at fighting noisy labels. One weak point of the paper is that there is no comparison to any other methods designed to learn under noisy labels. Even though the paper states that the primary advantage of the proposed methods is simplicity, it would still be good to have some empirical comparison for reference.\\n\\nI like that the paper has a section to explicit check whether the neural networks used in the experiments are in neural tangent kernel regime. However, I'm not very sure how to interpret the scale of the Frobenius norm in the change of the weights. Maybe one alternative approach is to compare the difference between the two neural tagent kernel --- one defined by theta(0), and one defined by theta(t) after training. Alternatively, maybe the experiments can be carried out with recent techniques to perform learning directly on infinite width networks (e.g. https://arxiv.org/abs/1904.11955 ).\"}"
]
} |
rJlnxkSYPS | Unsupervised Clustering using Pseudo-semi-supervised Learning | [
"Divam Gupta",
"Ramachandran Ramjee",
"Nipun Kwatra",
"Muthian Sivathanu"
] | In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance. To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels. We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy. Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudo-labels. The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement. We show that our approach outperforms state of the art clustering results for multiple image and text datasets. For example, we achieve 54.6% accuracy for CIFAR-10 and 43.9% for 20news, outperforming state of the art by 8-12% in absolute terms. | [
"Unsupervised Learning",
"Unsupervised Clustering",
"Deep Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rJlnxkSYPS | https://openreview.net/forum?id=rJlnxkSYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CZAGa3NSC",
"SJl0hUVhoH",
"S1eyIhP_jH",
"Byg-iO4dsS",
"Bye0MLNusS",
"SkeXcSE_sB",
"SyeYFWMb9r",
"B1etG5Xntr",
"HJgjIIMDtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725434,
1573828277840,
1573579846992,
1573566617329,
1573565973904,
1573565835469,
1572049280904,
1571727888552,
1571395154936
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1520/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1520/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1520/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1520/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors addressed the issues raised by the reviewers, so I suggest the acceptance of this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"1) Regarding your comment about using self-supervised networks as a baseline rather than starting from scratch or using pre-trained features. Yes, we agree that it would be an interesting middle point to evaluate. We would like to run this experiment but unfortunately we won\\u2019t have enough time to do it before the author response deadline. \\u00a0Also, for apples to apples comparison, we will need to evaluate prior work under this setting as well.\\n\\n2) We have added DeepCluster as another baseline in Table 2 and added some analysis of DeepCluster in the appendix.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"I thank the authors for their detailed answer.\\n\\n1) Regarding \\\"We did try some experiments on not using any pre-trained models for features and training convnets from scratch.\\\", between training from scratch and using a fully pretrained model, there is a middle point. For example, you could use for a network pretrained with self-supervision as done for semi-supervised learning in \\\"Semi-Supervised Learning with Scarce Annotations\\\" by Rebuffi et al. or \\\"S4L: Self-Supervised Semi-Supervised Learning\\\" by Zhai et al. That could make a stronger case than using a fully pretrained net and better results than from scratch.\\n\\n2) The explanation for the choice of Ladder networks satisfies me as well as the details for Algo 1.\\n\\n3) Thanks for the saturation analysis on MNIST.\\n\\n4) I would still be interested by a baseline using \\\"Deep Clustering for Unsupervised Learning of Visual Features\\\".\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"Thank you for your comments and the positive review of the paper.\\n\\n* We have updated the paper to clarify these experiments better. At a high level, the goal of the experiments in page 2 is to see the effect of generating pseudo labels using existing approaches on the final clustering accuracy using our iteration-based approach. These experiments establish two things: 1) We need good quality of initial pseudo labels to get good final clustering accuracy. 2) None of the existing methods provide high accuracy pseudo labels. \\n\\n* Yes, there are several recent methods for semi-supervised learning that have higher accuracy than ladder networks. For some of these approaches [1,2], data-augmentation is a core component which assumes some domain knowledge of the dataset. Further, many of the data-augmentation techniques are specific to image datasets. There are other methods [3,4] which uses adversarial training to learn latent features. However, we found that these methods do not work well if we jointly train them with unsupervised losses. Ladder networks does not require any domain-dependent augmentation, works for both image and text datasets, and can be easily jointly trained with supervised and unsupervised losses. Thus, we chose to work with Ladder networks though our approach is general enough to work with any semi-supervised method that accommodates training with unsupervised loss terms. \\n\\n* Traditional clustering algorithms focus mainly on clustering the entire data set, not on finding high accuracy clusters of subsets of the data, and thus do not achieve high enough accuracy required for improving final clustering accuracy. One principled algorithm is Girvan\\u2013Newman algorithm [5] that was proposed for community detection but we found that it was computationally impractical given the size of our datasets.\\nRegarding the intuition that most of the neighbours of that node will be connected with each other, we found this to be empirically true in our experiments. For example, on Cifar10, for the threshold of 90% models agreeing on the label, about 81% of the nodes in a cluster were connected to each other. If the threshold is at 100%, all nodes in a cluster are connected with each other due to transitivity. We have updated the paper with these numbers.\\n\\n* We have updated the related work section with discussion of several other related papers.\\n\\n* We found that running K means starting with a random initialization to assign pseudo-labels as described in the paper resulted in poor pseudo-label accuracy. Further, if we iterate based on these low accuracy pseudo-labels, the model degenerates to assigning most of the samples to the same cluster. Thus, we felt that it was unfair to the authors to add these results as a baseline, especially since the authors themselves did not report clustering performance. Note that, for the results in section 2, we did not start with a random initialization (we used a ladder network trained with an unsupervised loss to generate the initial pseudo-labels).\\n\\n* We did try some experiments on not using any pre-trained models for features and training convnets from scratch. On the cifar10 dataset, using Resnet34 as CNN initialized randomly, our method was able to achieve clustering accuracy of 35.17 ( achieving about 2% improvement over the same model without our framework) . 
In the literature, there are a couple of papers [6 , 7 ] that performs clustering on cifar-10 datasets from scratch, but they use a variety of domain-based data augmentation-based techniques to improve performance and we were not able to reproduce their results. Furthermore, they are applicable to only image datasets and do not help with text-based datasets that we also evaluate on. \\n\\n* We ran additional experiments with 15 models in the ensemble and the accuracy remained at 98.5% accuracy on the MNIST dataset. This suggests that accuracy saturates after 10 models. We have updated the paper with this result. \\n\\n* Thanks for pointing it out, we have fixed it in the revised version of the paper.\\n\\n[1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning\\n\\n[2] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation\\n\\n[3] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning\\n\\n[4] Saki Shinoda, Daniel E Worrall, and Gabriel J Brostow. Virtual adversarial ladder networks for semi-supervised learning\\n\\n[5] Girvan M. and Newman M. E. J., Community structure in social and biological networks\\n\\n[6] Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep adaptive image clustering\\n\\n[7] Xu Ji, Jo\\u00e3o F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for your comments and the positive review of the paper.\\n\\n1. While we did not find identifying clusters to be an issue for the five datasets in our evaluation, identifying k clusters when k is large is indeed challenging (as discussed in Appendix E, applying Algorithm 1 on the Cifar-100 dataset results in fewer than 100 clusters).\\n \\nGiven that the number of nodes in the graph is large (50K-70K), finding cliques, even using approximate algorithms, is prohibitively time consuming. For example, Girvan\\u2013Newman algorithm [5] is O( |E|^2 * |V| ).\\n\\n2. We performed experiments with two different initial clustering methods (using mutual information loss and using dot product loss, respectively, as unsupervised loss terms, and as described in the paper) . The initial graphs constructed using the two methods were indeed different. Still, we observed improvement in accuracy over iterations using ensemble clustering and the scheme converged empirically. The key requirement for the iteration to work is the presence of some diversity in the graphs extracted from the various models of the ensemble. \\n\\n3. Thanks for the suggestion. We have fixed the citation and format issues. For now, we have left section 2 and section 5 as separate since we feel section 2 serves as motivation for some of the decisions we make in the design of our algorithm.\\n \\nPlease let us know if you have any further questions.\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Thank you for your comments and the positive review of the paper.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a method where they 1) use an ensemble of networks to cluster unlabeled data and assign pairs of data points a cluster label only if all networks agree that the pair belongs to a cluster 2) use the labeled pairs to create a similarity matrix and find a \\\"tight\\\" cluster or set of points that are all very similar to each other. The paper then uses the \\\"labelled\\\" points for semi-supervised learning with a proposed ensemble of models.\\n\\nThe paper's method of creating high precision labels using their multi-step clustering algorithm with information measures is quite interesting. The experiment results look promising.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed an unsupervised learning method of clustering using semi-supervised clustering as a bridge. The method first trains an ensemble of clustering models and use the edge-level majority vote to determine a graph, and then applies rule to get partial clustering signals to feed the final semi-supervised clustering. The scheme is in an iterative fashion to further enhance the quality. I find this paper interesting and somewhat novel, with the following comments.\\n\\n1. In algorithm 1, is it possible that too many nodes are removed so one cannot get k clusters in the end? Though finding cliques are time consuming, have the authors conducted experiments to see the difference between the real clique finding algorithm and the greedy one proposed?\\n\\n2. Does the ensemble clustering step have stability issue regarding the method used? If a different clustering method is used, will the graph constructed later change drastically?\\n\\n3. The writing. First line of section 3, figure 4 seems to point to figure 1. Section 2 seems to have format issue at the beginning. Section 5 could be merged with section 2.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes a method for unsupervised clustering. Similarly to others unsupervised learning (UL) papers like \\\"Deep Clustering for Unsupervised Learning of Visual Features\\\" by Caron et al., they propose an algorithm alternating between a labelling phase and a training phase. Though, it has interesting differences. For example, unlike the Caron et al. paper, not all the samples get assigned a labels but only the most confident ones. These samples are determined by the pruning of a graph whose edges are determined by the votes of an ensemble of clustering models. Then, these pseudo labels are used within a supervised loss which act as a regularizer for the retraining of the clustering models.\", \"Novelties /contributions/good points:\", \"Votes from the clustering models to create a graph\", \"Using a graph to identify the most important samples for pseudo labelling\", \"Modification of the ladder network to be used as clustering algorithm\", \"Good amount of experiments and good results\"], \"weaknesses\": [\"The whole experiment leading to Table 1 in page 2 is unclear for me. I have trouble understanding the experiment settings. Could you please rephrase it. About initial/ final clustering for example and the rest as well. The whole thing puzzles me whereas the experiments section at the end is much more clear.\", \"Lack of motivation about why using the Ladder method rather than another one. Other recent methods have better results in semi-supervised learning.\", \"Algorithm 1 seems quite ad-hoc. Do more principled algos exist to solve this problem ? You could write about it and at least explain why it would not be feasible here. The sentence \\\"The intuition is that most of the neighbours of that node will also be connected with each other\\\" is unmotivated: no empirical proof for this ?\", \"Related work section is too light. It is an important section and should really not be hidden or neglected.\", \"In the experiments, you could add the \\\"Deep Clustering for Unsupervised Learning of Visual Features\\\" as baseline as well even if they use it for unsupervised learning as they do clustering as well.\", \"In the experiments, you use the features extracted from ResNet-50 but what about finetuning this network rather than adding something on top or even better starting from scratch. Because here CIFAR-10 benefits greatly from the ImageNet features. I know that you should reproduce the settings from other papers but it might be good to go a bit beyond. Especially, if the settings of previous papers are a bit faulty.\", \"Regarding, the impact of number of models in section D of the appendix, there is no saturation at 10 models. So how many models are necessary for saturation of the performance ?\", \"Minor point: several times, you write \\\"psuedo\\\".\"], \"conclusion\": \"the algorithm is novel and represents a nice contribution. Though, there are a lot of weaknesses that could be solved. So, I am putting \\\"Weak accept\\\" for the moment but it could change towards a negative rating depending on the rebuttal.\"}"
]
} |
rygixkHKDH | Geometric Analysis of Nonconvex Optimization Landscapes for Overcomplete Learning | [
"Qing Qu",
"Yuexiang Zhai",
"Xiao Li",
"Yuqian Zhang",
"Zhihui Zhu"
] | Learning overcomplete representations finds many applications in machine learning and data analytics. In the past decade, despite the empirical success of heuristic methods, theoretical understandings and explanations of these algorithms are still far from satisfactory. In this work, we provide new theoretical insights for several important representation learning problems: learning (i) sparsely used overcomplete dictionaries and (ii) convolutional dictionaries. We formulate these problems as $\ell^4$-norm optimization problems over the sphere and study the geometric properties of their nonconvex optimization landscapes. For both problems, we show the nonconvex objective has benign (global) geometric structures, which enable the development of efficient optimization methods finding the target solutions. Finally, our theoretical results are justified by numerical simulations. | [
"dictionary learning",
"sparse representations",
"nonconvex optimization"
] | Accept (Talk) | https://openreview.net/pdf?id=rygixkHKDH | https://openreview.net/forum?id=rygixkHKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"B_JqXFdQ8H",
"H1eJ2sF2or",
"ryxV5ne2jB",
"SkeexDRusH",
"rJxa2N0djH",
"S1xicX0uiS",
"ByeYfqGlcH",
"BygCupVAYH",
"rJe375X3KB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725397,
1573850022868,
1573813387565,
1573607143542,
1573606580555,
1573606290813,
1571985936544,
1571863926231,
1571727908326
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1519/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1519/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1519/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1519/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper investigates the use non-convex optimization for two dictionary learning problems, i.e., over-complete dictionary learning and convolutional dictionary learning. The paper provides theoretical results, associated with empirical experiments, about the fact that, that when formulating the problem as an l4 optimization, gives rise to a landscape with strict saddle points and as such, they can be escaped with negative curvature. As a result, descent methods can be used for learning with provable guarantees. All reviews found the work extremely interesting, highlighting the importance of the results that constitute \\\"a solid improvement over the prior understandings on over-complete DL\\\" and \\\"extends our understanding of provable methods for dictionary learning\\\". This is an interesting submission on non-convex optimization, and as such of interest to the ML community of ICLR . I'm recommending this work for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"We thank the reviewer for the appreciations of our efforts in the revision.\"}",
"{\"title\": \"Reply to Authors\", \"comment\": \"The authors' addressed my remarks and I changed my recommendation from weak accept to accept.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"We thank the reviewer for the accurate interpretations of our results and appreciation of our work. In the revision, to make our paper more accessible to the readers, we have carefully revised the main draft, correcting typos and inaccurate statements.\\n\\nWe agree with the reviewer that the absolute constant for the overcompleteness of ODL is disturbing, and we also believe this benign geometry should hold for a larger overcompleteness than we proved here. The major bottleneck of showing this is due to our very loose analysis for negative curvature in Region $\\\\mathcal R_{\\\\mathrm N}$. The authors have tried many ways to improve this bound, but have not yet managed to succeed so far. \\n\\nOne idea might be to consider i.i.d. Gaussian dictionary instead of the deterministic incoherent dictionary, and use probabilistic analysis instead of the worst-case deterministic analysis. However, our preliminary analysis suggests that elementary concentration tools for Gaussian empirical processes are not sufficient to achieve this goal. More advanced probabilistic tools might be needed here.\\n\\nAnother idea that might be promising is to leverage more advanced tools such as the sum of squares (SoS) techniques. Previous results (e.g., Barak et al., 2015; Ma et al., 2016; Schramm & Steurer, 2017) used SoS as a computational tool for solving this type of problems, while the computational complexity is often quasi-polynomial and hence cannot handle problems of large-scale. In contrast, our idea here is to use SoS to verify the geometric structure of the optimizing landscape instead of computation, to have a uniform control of the negative curvature in $\\\\mathcal R_{\\\\mathrm N}$. If we succeeded, this might lead to a tighter bound on the overcompleteness. Moreover, this can also serve as a more general method for verifying the benign optimization landscapes of other nonconvex problems. We will include a discussion section for elaborating these ideas in the future if the reviewer thinks it would be beneficial for the audience.\"}",
"{\"title\": \"Reply to Reviewer #1\", \"comment\": \"We thank the reviewer for the appreciation of our work. In the revision, we have carefully corrected all the minor issues raised by the reviewers. In addition, we have carefully revised the main draft, correcting other typos and inaccurate statements. We plan to release the code in the very near future as well for reproducible purposes.\"}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"We thank the reviewer for the comprehensive summary of our results and invaluable suggestions. We have double checked our paper and revised accordingly.\\n\\nOn Page 4 of the revised draft, we have introduced a notion of the spikiness and used simulations in Figure 2 to provide a better explanation of why maximizing $\\\\ell^4$ promotes spikiness (we are not aware of any formal definition of spikiness in the literature). Intuitively, we characterize the spikiness of a vector by the ratio between the largest and the second largest entries in magnitude. In Figure 2, we added simulations to demonstrate that the $\\\\ell^4$-norm tends to be larger when the spikiness of the vector increases. If $\\\\mathbf q$ is close to one of the columns of $\\\\mathbf A$ (e.g., $\\\\mathbf q = \\\\mathbf a_1$), around Equation (2.5) we explained why the vector $\\\\mathbf A^\\\\top \\\\mathbf q$ should be spiky, given the small incoherence $\\\\mu$ of $\\\\mathbf A$ (i.e., $\\\\mu \\\\ll 1$). In theory, we rigorously proved that $\\\\mathbf q = \\\\mathbf a_1$ is close to one of the global optimizers due to the spikiness of $\\\\mathbf \\\\zeta =\\\\mathbf A^\\\\top \\\\mathbf q$.\\n\\nWe agree with the reviewer that the inclusion of CDL might overexpose the readers. Indeed, the authors had several discussions over this before the submission. That being said, we had included CDL because we believe the inclusion of CDL is very beneficial to the audience in the ICLR community. The CDL problem can be reviewed as a more structured ODL problem such that it can be analyzed in a similar fashion with a few new ingredients (e.g., initialization, preconditioning, new concentration ideas). Building on the intuition and theory for ODL, it could make our introduction of CDL more accessible to the audience and save us the effort for another repetitive work. Moreover, the CDL can be reviewed as a very simple one-layer convolutional neural network (CNN) (Papyan et al., 2017a; 2018). The theory developed here has the potential to serve as a building block for developing more interpretable deep CNN, which closely relates to the core interest of the ICLR community. If the reviewer thinks it would be beneficial to address this issue, we will release a much-extended version of this work on arxiv in the future and provide a link in the final version of this paper.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors consider two problems: Overcomplete dictionary learning (ODL)\\nand convolution dictionary learning (CDL).\\nDictionary learning learns a matrix factorization of the data\\nY = A X\\nwhere A is the dictionary and X is the (known to be sparse) code.\\nY consists of n rows (sample size) and p columns (dimension).\\nIn the overcomplete version A is n x m where m > n, i.e. the number of learned\\nfeatures is larger than the sample size.\\nThe CDL problem is a special case of the ODL problem where the dictionary\\nmatrix is known to consist of convolution filters instead of being unstructured.\\n\\nThe authors show that under a given set of assumptions local nonconvex\\noptimization can be used to find globally relevant solutions.\", \"the_basic_assumptions_are\": \"(i) unit norm tight frame\\n(ii) mu-incoherence\\n\\t(relates the angles of the columns of a, e.g.\\\\ if columns are orthogonal,\\n\\tthey are incoherent / have small mu)\\n(ii) stochastic model of the code X that says entries are Gaussian and sparse\\n\\taccording to a Bernoulli random variable\\nThe authors present the idea of maximizing the l^4 norm of A^T q in order to\\nfind q as rows of A.\\nApparently l^4 norm maximization leads to \\\"spikiness\\\" which is exactly\\ndesirable under mu-incoherence.\\n\\nThe authors show (assuming p \\\\to \\\\infty) that the optimization nonconvex\\nlandscape (constrained to the sphere) does not contain any stationary points\\nwithout negative curvature.\\nA saddle avoiding optimizer therefore converges to local minimizers from\\nrandom initialization.\\n\\nThe authors also show that the analysis extends to CDL via a preconditioned\\ninitializer.\\nFinally, they go on to briefly show some experiments that further validate\\nthe theory presented in the paper.\\n\\nOverall, the authors present a rigorous technical analysis using powerful\\nmathematical tools for nonconvex optimization (which is relevant to many\\nmachine learning problems).\\n\\nI am recommending to accept based on the high quality of the work.\\nBut I am not confident as to the accessibility of the paper to the wide\\naudience of ICLR as it is rather technical.\\nPerhaps, the complete contribution would be better suited as a journal article.\", \"notes\": \"It would have been useful to give some more intuition about what \\\"spikiness\\\"\\nof A^T q is, why spikiness exists under mu-incoherence and why l^4 norm\\nmaximization improves spikiness.\\n\\nI am not sure that the inclusion of the CDL problem is beneficial for a\\nconverence paper and would rather have more space allocated to the intuition on\\nwhy the method works for ODL.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the dictionary learning problem for two popular settings involving sparsely used over-complete dictionaries and convolutional dictionaries.\\n\\nFor the over-complete dictionary setting, given the measurements of the form $Y = A X$, where $A$ and $X$ denote the over-complete dictionary and the sparse coefficients, respectively, the paper explores an $\\\\ell^4$-norm maximization approach to recover the dictionary $A$. This corresponds to maximizing $\\\\|q^TY\\\\|^4_4$ over $q \\\\in \\\\mathbb{S}^{n-1}$. Interestingly, the paper shows that when $A$ is unit norm tight frame and incoherent the optimization landscape of the aforementioned non-convex objective has strict saddle points that can be escaped by along negative curvature. Furthermore, all local minimizers are globally optimal which are close to one of the columns of $A$. This shows that any descent method that can escape the saddle points will (approximately) recover one of the columns of $A$. \\n\\nFor convolution dictionaries, the paper shows that when the underlying filters are incoherent a suitably modified $\\\\ell^4$-norms based objective has only strict saddles over a sub-level set. Furthermore, all local optimizers within this sub-level set are close to one of the convolution filters. \\n\\nThe reviewer believes that this paper presents many interesting and novel results that extend our understanding of provable methods for dictionary learning. As claimed in the paper, this the first global characterization for the non-convex optimization landscape for over-complete dictionary learning. Besides, the paper provides the first provable guarantees for convolution dictionary learning. Overall, the paper is very well written and the key ideas used in the paper are nicely explained in the main body of the paper. The experimental results in the paper also corroborate the theoretical findings of the paper.\", \"minor_comments\": \"1. In page 2, \\\"....can be simply summarized by the following slogan.\\\" ---> \\\"....can be simply summarized by the following statement.\\\"?\\n\\n2. In page 7, replace \\\"cook up\\\" with \\\"propose\\\"?\\n\\n------------------------------\\nAfter rebuttal\\n\\nThank you for the response. Releasing the code for reproducibility purposes is certainly a great idea.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"[Summary]\\nThis paper studies the problem of non-convex optimization for Dictionary Learning (DL) in the situation when the underlying dictionary is over-complete (more basis vectors m than the dimension n). The paper proves that the L4 maximization formulation has a nice global landscape and can be efficiently minimized by (Riemannian) gradient descent, when the over-complete ratio m/n is less than an absolute constant. A similar result is proved for convolutional dictionary learning.\\n\\n[Pros]\\nThe theoretical results in this paper provides a solid improvement over the prior understandings on overcomplete DL, a setting that is practically important yet theoretically more challenging than standard orthogonal/complete DL. \\n\\nSpecifically, the prior work of (Ge & Ma 2017) shows only a nice local optimization landscape when m > n^{1+\\\\eps} and hypothesizes that the global landscape is bad in the same setting (there exists bad local minima out of a certain sub-level set). In comparison, this work proves that at least for m/n <= 3 (roughly), the landscape is globally benign (has the strict saddle property), therefore providing a new understanding that the benign landscape is still preserved from \\u201cthe other side\\u201d where m/n grows mildly above 1. The analysis contains novel technicalities and can be of general interest for understanding the landscape of non-convex problems.\\n\\nThe paper also provides experimental evidence that gradient descent converges globally up until m = O(n^2), a broader regime than suggested by the theory (m <= 3n). (Though when m >= n^{1+\\\\eps}, the reason of global convergence from random init may be far from the present theory, in that there can be potentially exponentially many bad local min yet gradient descent won\\u2019t get trapped.)\\n\\n[Cons]\\nIt is still a bit disturbing to see that m/n needs to be bounded by a fixed absolute constant, rather than *any* constant, for the theory to work. From the proofs it seems like this constant (3) may have the potential to be improved, but it is not quite easy to completely get rid of it?\"}"
]
} |
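The ℓ^4-maximization scheme described in the two reviews above is compact enough to sketch in a few lines of numpy. The sketch below uses plain projected gradient ascent with a retraction onto the sphere, not the saddle-escaping method the paper actually analyzes, and the synthetic Bernoulli-Gaussian data model, step size, and sparsity level are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

def l4_ascent(Y, steps=1000, lr=0.05, seed=0):
    """Projected gradient ascent on f(q) = ||q^T Y||_4^4 over the unit sphere.

    Y is an n x p data matrix with columns y_i = A x_i; under the
    incoherence / tight-frame assumptions discussed above, a maximizer q
    should align with one column of the dictionary A (up to sign).
    """
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(Y.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(steps):
        z = q @ Y                           # inner products q^T y_i, shape (p,)
        grad = 4.0 * Y @ z**3 / Y.shape[1]  # gradient of the averaged l^4 objective
        q = q + lr * grad
        q /= np.linalg.norm(q)              # retract back onto the sphere
    return q

# Synthetic check: A with unit-norm columns, Bernoulli-Gaussian sparse code X.
rng = np.random.default_rng(1)
n, m, p = 20, 40, 50000
A = rng.standard_normal((n, m)); A /= np.linalg.norm(A, axis=0)
X = rng.standard_normal((m, p)) * (rng.random((m, p)) < 0.1)
q = l4_ascent(A @ X)
print(np.abs(A.T @ q).max())  # close to 1 if q recovered some column of A
```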
rkecl1rtwB | PairNorm: Tackling Oversmoothing in GNNs | [
"Lingxiao Zhao",
"Leman Akoglu"
] | The performance of graph neural nets (GNNs) is known to gradually decrease with increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PairNorm, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PairNorm is fast, easy to implement without any change to the network architecture or any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PairNorm makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at https://github.com/LingxiaoShawn/PairNorm. | [
"Graph Neural Network",
"oversmoothing",
"normalization"
] | Accept (Poster) | https://openreview.net/pdf?id=rkecl1rtwB | https://openreview.net/forum?id=rkecl1rtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4-vfWjTydh",
"BygxL-UnjS",
"HyeWZDB2oH",
"rJg-wCfLqr",
"BkxkcqhTtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798725369,
1573835079645,
1573832441125,
1572380249340,
1571830406889
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1517/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1517/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a way to tackle oversmoothing in Graph Neural Networks. The authors do a good job of motivating their approach, which is straightforward and works well. The paper is well written and the experiments are informative and well carried out. Therefore, I recommend acceptance. Please make suree thee final version reflects the discussion during the rebuttal.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for liking our work\", \"comment\": \"Thank you very much for reading our paper thoroughly and giving constructive feedback, and we are glad that you found our paper interesting and contributing to a deeper understanding of the field. We respond to your questions one-by-one in the following.\\n\\n>> Can we interpret PairNorm (or the optimization problem (6)) from the viewpoint of graph spectra?\\n \\nThat is a great question that we have not thought about before. We are doing new work towards understanding stacking GraphConv operations in the spectral domain, but currently we do not have a complete answer for your question. To give some initial thought: The operation is working on features directly. Since it does not change the graph structure, it does not affect the eigenvectors or spectrum of the graph. However, it will affect the alignment/interaction between structure and features. Understanding the fusion between graph structure and features in spectral domain should be investigated more carefully. \\n\\n\\n>> Although the motivation of Centering (10) is to ease the computation of TPD, I am curious how this operation contributes to performance. \\n\\nWe have tested adding the mean back after Scale operation, and for SGC the performance remained the same. However for GCN and GAT, because of the activation function there will be a big difference. Empirically, they have similar performance but sometimes one is better and the other is worse. One does not seem to dominate the other. \\n\\n\\n>> Therefore, I think there is another hypothesis that simply the choice $s$ was misspecified. If this is the case, I am interested in the effect of $s$ on predictive performance. \\n\\n\\nWe did several tests using different $s$ for the SSNC problem, and we found that $s$ does not affect performance much for GCN and GAT. We think this is because the parameter learning has some connection with the scale, so setting different $s$ is not that important. We do not have enough time for doing a thorough testing for all settings, as GCN and GAT are much slower to train than SGC. To sum up, we think it is not surprising that SGC works very well for these settings, which is also demonstrated in the original SGC paper (Wu et al., 2019).\"}",
"{\"title\": \"We have added new section in the paper to address your questions\", \"comment\": \"Thank you for giving us feedback and raising questions that help us clarify our work further. We have done additional measurements and experiments to address your concerns, and these results are included in the last section of the Appendix (please see A.6 in the updated paper).\\n\\nWe address the reviewer's questions one-by-one in the following.\\n\\n>> \\\"The benefits of approach 1 are not entirely clear as it basically just scales the whole embedding population. Such a scaling doesn't affect the relative distances between points and thus should not have major effect on the performance.\\\"\\n\\nThe second step (Eq. 11) of PairNorm scales the length of each node representation by multiplying by a scalar calculated from all node representations, the L2-distance between any two node representations is also scaled accordingly. If we understand correctly, the \\\"relative distances among points\\\" is referring to the ratio between any two distances. We claim that although PairNorm itself does not change the ratio, GraphConv + PairNorm does change it: the ratios are not the same across all layers\\u2019 representations. In Appendix A.6 we also empirically plot the distributions of pairwise distances, where one could see how all pairwise distances change with increasing number of layers. \\n \\n>> \\\"Approach 2 is completely different due to the projection on the sphere of each embedding independently. However, the reasons why it is a good idea or not are not discussed in the paper.\\\"\\n\\nPairNorm-SI (approach 2) achieves a similar goal of making total pairwise distances stable, by adding more restrictions: instead of normalizing by the sum of all squared lengths, it normalizes any squared length directly. It is true that total pairwise squared distances is not exactly constant mathematically in that case, however approach 2 nevertheless does keep it stable empirically. We have put a lot effort to analyzing PairNorm-SI in the new section A.6 of the Appendix \\u2014 please have a look and read it through; we believe it provides the the answer you are looking for. \\n\\n>> \\\"The authors report that the proposed normalization schemes do not improve the quality of classification in the standard semi-supervised learning setting.\\\" \\n \\n\\nYes, this is correct. We find that solving oversmoothing problem would not improve the performance of SSL on the standard benchmark datasets. The reasons are two-fold. First, the best performance of SSL on standard datasets is achieved within less than 3 layers, at which oversmoothing does not happen. Rather, we obtain a smoothing effect, which is in fact beneficial. In such cases, clearly PairNorm is not needed as it is designed to solve the oversmoothing issue for deep GNNs. Second, smoothing is the key effect of Graph Convolution to achieve good performance for SSL, as it improves the generalization ability of the model by reducing the gap between training loss and validation loss. This is clearly shown in Figure 1, where the gap between training loss and validation/test loss shrinks with increasing number of layers. Generalization ability is the most important factor for improving performance, and this is true particularly for SSL where we only have a very small training set, which makes the empirical risk not reliable for estimating true risk. Empirically, we often see the training loss for SSL goes to 0 easily while validation loss is still large. 
\\n\\n\\n\\n>> \\\"They additionally consider artificially created missing features and observe increasing quality in such a scenario.\\\"\\n\\n\\nWe should state that although SSNC-MF is created by randomly removing features, this scenario is generally existing in the real-world. We have given example scenarios in our paper, another example would be privacy-related problems: for training ML algorithms, many companies can only release/use small fraction of users' data based on the privacy agreement. PairNorm is designed to solve oversmoothing, and SSNC-MF is such a problem where oversmoothing does happen as this scenarios necessitates training deep GNNs. While for SSL on standard datasets oversmoothing has no relationship with the best performance, in order to show the power and ability of PairNorm at solving oversmoothing, we needed to showcase a scenario where oversmoothing hurts the best possible performance.\\n\\n_____\\nWe hope our answers sufficiently addresses your concerns. To wrap up, we would like to to re-emphasize the contributions of PairNorm: \\n\\n1. Solving oversmoothing problem and making training deep GNNs possible for the node-classification scenario, having solid theoretical analysis over SGC.\\n2. PairNorm is a general \\\"patch\\\" -- applicable to any GNN. It can also be applied in any layer, even if say we change the graph structure at each layer.\\n3. PairNorm is the first normalization layer specifically designed for graph neural networks. We hope that more researchers can delve into this area. \\n4. We are also the first to investigate a new scenario, such as the SSNC-MF problem.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The article \\\"PairNorm: Tackling Oversmoothing in GNNs\\\" considers the interesting phenomenon of performance degradation of graph neural network when the depth of the network increases beyond the values of 2-4. The authors argue that one of the reasons for such behavior is so-called \\\"oversmoothing\\\", when intermediate representations become similar for all the nodes in the graph. The authors propose the special NN layer \\\"PairNorm\\\", which aims to battle with this issue.\\n\\nThe proposed PairNorm approach boils down to the recentering and normalization of all the representations after each graph-convolutional layer of the network. The authors consider 2 variants of choosing normalization constant:\\n1. The one which multiplies all the embeddings for the layer by the same number. This operation allows to keep the average squared pairwise distance between node representations constant. \\n2. The one which makes the norms of all the representations equal to pre-specified constant, i.e. just projection of all the embeddings on the sphere.\\n\\nI should note that the two proposed approaches are very different in nature, though the latter one is introduced without much additional discussion. The benefits of approach 1 are not entirely clear as it basically just scales the whole embedding population. Such a scaling doesn't affect the relative distances between points and thus should not have major effect on the performance. Approach 2 is completely different due to the projection on the sphere of each embedding independently. However, the reasons why it is a good idea or not are not discussed in the paper. \\n\\nThe experimental part of the paper considers several standard graph data sets. The authors report that the proposed normalization schemes do not improve the quality of classification in the standard semi-supervised learning setting. They additionally consider artificially created missing features and observe increasing quality in such a scenario.\\n\\nTo sum up, I think that while the motivation behind the paper is very natural, it doesn't look like the paper finds the solution both theoretically and experimentally.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n\\nIt is known that GNNs are vulnerable to the oversmoothing problem, in which feature vectors on nodes get closer as we increase the number of (message passing type graph convolution layers). This paper proposed PairNorm, which is a normalization layer for GNNs to tackle this problem. The idea is to pull apart feature vectors on a pair of non-adjacent nodes (based on the interpretation of Laplace-type smoothing by NT and Maehara (2019)). To achieve this approximately with low computational complexity, PairNorm keeps the sum of distances of feature vectors on all node pairs approximately the same throughout layers. The paper conducted empirical studies to evaluate the effectiveness of the method. PairNorm improved the prediction performance and enabled make GNNs deep, especially when feature vectors are missing in the large portion of nodes (the SSNC-MV problem).\\n\\n\\nDecision\\n\\nI want to recommend to accept the paper because, in my opinion, this paper contributes to deepening our understanding of graph NNs by giving new insights into what causes the oversmoothing problem and which types problem (deep) graph NNs can solve.\\nThe common myth about graph NNs is that they cannot make themselves deep due to the oversmoothing. Therefore, oversmoothing is one of the big problems in the graph NN field and has been paid attention from both theoretical and empirical sides. This paper found that the deep structures do help to improve (or at least worsen) the predictive performance when the significant portion of nodes in a graph does not have input signals. To the best of our knowledge, this is the first paper that showed the effectiveness of deep structures in citation network datasets (Deep GCNs [Li et al., 2019] successfully improved the prediction performance of (residual) graph NNs using as many as 56 layers for point cloud datasets). The proposed method is theoretically backboned, easy to implement, and applicable to (theoretically) any graph NNs. Taking these things into account, I would like to judge the contribution of this paper is sufficiently significant to accept.\\n\\n\\nMinor Comments\\n\\n\\t- Table 3. Remove s in the entry for GAT-t2 Citeseer 0%.\\n\\n\\nQuestions\\n\\n\\t- Can we interpret PairNorm (or the optimization problem (6)) from the viewpoint of graph spectra?\\n\\t- Although the motivation of Centering (10) is to ease the computation of TPD, I am curious how this operation contributes to performance. Since the constant signal does not have information for distinguishing nodes, eliminating it by Centering might result in emphasizing the signal component for nodes classification tasks. From a spectral point of view, Centering corresponds to eliminating the lowest frequency of a signal.\\n\\t- Figures 3 and 7 have shown that GCN and GAT did not perform well compared to SGC when the layer size increases. The authors discussed that this is because GCN and GAT are easier to overfit. However, SGC chose the hyperparameter $s$ from $\\\\{0.1,1,10,50,100\\\\}$, whereas the authors examined a single $s$ for GCN and GAT. Therefore, I think there is another hypothesis that simply the choice $s$ was misspecified. 
If this is the case, I am interested in the effect of $s$ on predictive performance.\\n\\n[Li et al., 2018] Li, Qimai, Zhichao Han, and Xiao-Ming Wu. \\\"Deeper insights into graph convolutional networks for semi-supervised learning.\\\" Thirty-Second AAAI Conference on Artificial Intelligence. 2018.\"}"
]
} |
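The centering-then-rescaling operation debated throughout this record (Eqs. 10-11 in the paper), together with the scale-individually variant (PairNorm-SI) that Reviewer #1 contrasts with it, can be sketched in a few lines of PyTorch. The tensor layout, the eps constant, and the default s are illustrative assumptions; the authors' released code is the definitive implementation.

```python
import torch

def pairnorm(x, s=1.0, scale_individually=False, eps=1e-6):
    """x: (num_nodes, dim) node embeddings produced by a GraphConv layer.

    Step 1 (centering, Eq. 10): subtract the mean embedding; after this,
    the total pairwise squared distance equals 2n * sum_i ||x_i||^2.
    Step 2 (scaling, Eq. 11): rescale so the mean squared norm -- and
    hence the total pairwise squared distance -- is pinned at a constant
    level set by s. The SI variant instead projects each embedding onto
    a sphere of radius s, which keeps that total only approximately
    constant, matching the discussion in the author responses above.
    """
    x = x - x.mean(dim=0, keepdim=True)
    if scale_individually:
        return s * x / (x.norm(dim=1, keepdim=True) + eps)
    return s * x / torch.sqrt(x.pow(2).sum(dim=1).mean() + eps)
```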
S1ltg1rFDS | Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning | [
"Ali Mousavi",
"Lihong Li",
"Qiang Liu",
"Denny Zhou"
] | Off-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible. Recently, \citet{liu18breaking} proposed an approach that avoids the curse of horizon suffered by typical importance-sampling-based methods. While showing promising results, this approach is limited in practice as it requires data being collected by a known behavior policy. In this work, we propose a novel approach that eliminates such limitations. In particular, we formulate the problem as solving for the fixed point of a "backward flow" operator and show that the fixed point solution gives the desired importance ratios of stationary distributions between the target and behavior policies. We analyze its asymptotic consistency and finite-sample
generalization. Experiments on benchmarks verify the effectiveness of our proposed approach.
| [
"reinforcement learning",
"off-policy estimation",
"importance sampling",
"propensity score"
] | Accept (Poster) | https://openreview.net/pdf?id=S1ltg1rFDS | https://openreview.net/forum?id=S1ltg1rFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"RDDEBlKs3P",
"SJlNlFo3oS",
"rJlNHqc3oB",
"B1gVuKknoH",
"SkliKP1hsH",
"r1gnmI1hjB",
"B1xXv4knjr",
"rkxhdccTFB",
"rklAXTxaKr",
"H1xjTu5iYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725339,
1573857515527,
1573853756186,
1573808491922,
1573808002785,
1573807652227,
1573807194924,
1571822196377,
1571781926334,
1571690690685
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1515/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1515/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1515/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1515/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper addresses an important and relevant problem in reinforcement learning: learning from off-policy data, taking into account the offsets in the visitation distribution of states. This has the promise of lowering variance even with long horizon roll-outs. Existing methods have required access to the behavior policy (or have required data from the stationary distribution). The novel proposed approach instead uses an alternative method, based on the fixed point of the \\\"backward flow\\\" operator, to calculate the importance ratios required for policy evaluation in discrete and continuous environments.\\n\\nIn the initial version of the submission, several concerns were expressed regarding both the quality of the paper and clarity. The authors have updated the paper to address these concerns to the satisfaction of the reviewers, who are now unanimously in favor of acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Our Response\", \"comment\": \"Thanks for the quick reply. We are pleased that our earlier response was helpful.\\n\\n\\nRe \\\"Minor comment: why use $\\\\rho$ instead of the more standard $G$ or $R$ for the return of the trajectory?\\\"\\n\\nThanks for the suggestion. We are open to changing notation to more common choices in the literature.\\n\\n\\n\\nRe \\\"For the empirical results, my comments on variance were actually less concerned with whether the proposed method decreases the variance that is common in off-policy work, and more focused on whether the presented results were statistically significant. The high variance shown by the proposed method makes me question the statistical significance. I am okay with the fact that the proposed method appears to have higher estimator variance over independent runs than the competitors, but I'm curious how that plays a role in the significance of the results.\\\":\\n\\nOne way to address this question is to run the experiments for many times to reduce the standard error. We can do this if it becomes a point of concern. However, please note that our results\\u2019 significance is strengthened by observing that our method compares favorably to IPS in most cases, and comparably in the last one.\\n\\n\\n\\nRe \\\"I'm actually particularly concerned, now, about the new results on the fourth problem. How did new meta-parameter tuning allow for better performance of the proposed method? Was this new meta-parameter tuning done for all competitors as well, and was it fair to those competitors? By re-tuning the proposed method, this could inject a strong bias in the results.\\\":\\n\\nRegarding the hyper-parameter re-tuning, please note that not all methods share the same set of hyper-parameters. Therefore, simultaneously changing a hyper-parameter for all the methods might not be feasible because it may or may not appear in different estimators. For the 4th problem, we particularly changed the network architecture while keeping it fixed to a 3-layer network. Please note that both IPS and our method uses a neural network in their estimators. We re-tuned the network architecture for both during the rebuttal. We did not see any improvement for IPS but our method improved by re-tuning the network architecture. We should emphasize here that we tested every architecture that we used for our method on IPS as well. Therefore, in that sense, we believe that this re-tuning has been fair to our competitors.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for including an explanation of how $x$ and $\\\\bar x$ are sampled. My concern was indeed about the double sampling issue present in RG methods, but now that the paper explains that two independent trajectories are used this concern is relieved.\\n\\n\\n\\\"Minor point: our estimator aims to approximate $w(s, a)$, not $d_\\\\pi(s, a)$.\\\"\\n\\nI see, I believe I have found the underlying source of my confusion. I had thought that the estimation of the state distribution was then being used as the importance re-weighting term in another estimator for the sum of rewards. In this case my concern about the consistency due to model class is ill-founded because there is not a compounding effect of model bias, only the omnipresent model bias common to most machine learning domains.\", \"minor_comment\": \"why use $\\\\rho$ instead of the more standard $G$ or $R$ for the return of the trajectory? I find that $\\\\rho$ is overloaded as it is in the off-policy literature, and confused me greatly on rereading this work today after not having looked at the paper for a couple of weeks.\\n\\n\\nFor the empirical results, my comments on variance were actually less concerned with whether the proposed method decreases the variance that is common in off-policy work, and more focused on whether the presented results were statistically significant. The high variance shown by the proposed method makes me question the statistical significance. I am okay with the fact that the proposed method appears to have higher estimator variance over independent runs than the competitors, but I'm curious how that plays a role in the significance of the results.\\n\\n\\nI'm actually particularly concerned, now, about the new results on the fourth problem. How did new meta-parameter tuning allow for better performance of the proposed method? Was this new meta-parameter tuning done for all competitors as well, and was it fair to those competitors? By re-tuning the proposed method, this could inject a strong bias in the results.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Please find our detailed response in the following.\\n\\n\\nRe \\\"I am concerned with the consistency argument made in section 4.3. It appears from equation (10) and the final statement of theorem 4.1 that it is necessary to have two independent samples of x' given a single state x. In the general case, without access to the environment model, it is not possible to obtain two samples of x'. If the environment is continuing, then the probability of returning to state x to obtain a second sample is 0. Am I misunderstanding the requirements specified by the objective function in equation (10)?\\\":\\n\\nThe reviewer might have confused our independence condition with the double-sample condition in the RL literature (e.g., residual gradient). The latter requires double samples of the form, $(s,a,s_1\\u2019)$ and $(s,a,s_2\\u2019)$, where $s_1\\u2019$ and $s_2\\u2019$ are independent next-states of $(s,a)$. In contrast, our independence condition refers to the pair of samples, $(s_1,a_1,s_1\\u2019)$ and $(s_2,a_2,s_2\\u2019)$, where $(s_1,a_1)$ and $(s_2,a_2)$ are independent.\\nOur independence condition can be met in several ways. For example, if $(s_1,a_1)$ and $(s_2,a_2)$ are from two trajectories, they are independent automatically. As another example, if they are from steps in the same trajectory that are far away from each other, then they are nearly independent under certain mixing assumptions (e.g., Assumption 2 in https://doi.org/10.1007/s10994-007-5038-2).\\nWe modified Theorem 4.1 slightly to make it clearer.\\nWe will add a discussion to clarify this condition in the final version of our paper.\\n\\n\\n\\nRe \\\"An additional concern with the consistency argument is that it appears to assume that the approximator for d_w can achieve 0 error according to the MMD. If d_pi is not representable by the approximator, which could reasonably be the case in more challenging domains, it is unclear from this analysis or empirical results what the behavior of the system will be. It is difficult to assess how large of an assumption this is for consistency claim; for difficult problems will d_pi never be representable, or is this a fairly low concern?\\\":\\n\\nThe consistency result is a necessary (and important) verification that our objective function is fundamentally sound. It implies that our approach will find the correct weights \\\\emph{in the limit} --- with enough samples and using sufficiently rich function approximators.\\nIn practice, the reviewer is right that the performance of our approach depends on the representation power of the approximation function class. The impact of such \\\"approximation error\\\" also exists in other RL algorithms (such as using deep Q-network when solving the Bellman equation) and in machine learning in general, and is not specific to this work.\", \"minor_point\": \"our estimator aims to approximate $w(s,a)$, not $d_\\\\pi(s,a)$.\\n\\n\\n\\nRe \\\"The empirical results look to have high enough variance in the final outcomes that it is difficult to consistently assess the performance of each algorithm (looking specifically at figure 3). However, the primary competitor algorithm is IPS which the proposed algorithm handily beats in three of the four problems. 
In the fourth problem, it is unclear that the competitor algorithm is winning, and could in fact be better only due to chance given the size of the respective error bars.\\\":\\n\\nIn the revised version, we obtain slightly better results for our method with a re-tuning of hyper-parameters. As shown in Figs 2&3, it now outperforms other methods on 3 out of 4 tasks and performs comparably in the last one.\", \"regarding_variance\": \"this work is inspired by Liu et al. 2018 to reduce variance, compared to standard methods that importance-reweight the entire trajectory. But in general off-policy estimation is a very challenging problem, and state-of-the-art methods in the published literature still have relatively high variance. It is an important direction for future research.\\n\\n\\n\\nRe \\\" It is worth noting that, because the parameter settings were tuned only for 50 trajectories, it is important to primarily assess performance based only on that point. It is likely that, given more trajectories to learn from, each algorithm would have chosen a smaller stepsize and effectively performed similarly.\\\":\\n\\nThank you for the suggestion, which is a fair comment. In the experiments, we set up the hyper-parameters this way in order to make sure our method (and other competitors) are not sensitive to hyper-parameters. This is important when applying such algorithms in practice.\\n\\n\\n\\nRe \\\"I would be interested in seeing the scalability of this approach empirically.\\\":\\n\\nWe agree with the reviewer that scalability is important but outside the scope of the present work. Here, our primary focus is to develop a new estimator, and to evaluate its usefulness in benchmarks. We are indeed interested in investigating scalability as one of the future directions.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Please find our detailed response in the following.\\n\\nRe \\\"Section 4.2 appears a bit confusing to me\\\":\\n\\nWe edited Section 4.2 to make it more self-contained. In particular, we added a brief discussion on RKHS and how it is connected to the MMD approach. We also added an intuitive explanation on how MMD could be understood in terms of GANs and added few references for interested readers.\\nWe added more details to Appendix C to spell out several steps in the proof explicitly, and will be happy to add further details if the reviewer suggests so.\\n\\n\\nRe \\\"The proposed black box estimator seems quite useful\\\":\\n\\nWe are currently running new experiments on more problems and will report them in the final version of our paper. In Figs 2 & 3 of the initial submission, our method outperformed other methods in 3 out of 4 problems. We re-tuned the hyperparameters for the 4th problem and now our method is comparable to IPS according to the updated Fig 3.\\n\\n\\nRe \\\"For experiments, it would also be useful to demonstrate the significance of not knowing the behaviour policy\\\":\\n\\nIt is important to note that we do *not* claim the usefulness of not knowing the behavior policy, as the reviewer seems to suggest. Instead, the claim is that our approach has the benefit of not requiring that knowledge, thus making the new estimator more broadly applicable than previous work like Liu et al. 2018.\\nFurthermore, a greater assumption our approach removes from Liu et al. 2018 is that transitions be drawn from the stationary distribution $d_{\\\\pi_0}$ of the behavior policy (please see Introduction). In the revised version, we also included a simple yet informative numerical example in Appendix F, which demonstrates that this assumption is critical to the Liu et al. 2018 approach. In particular, when the behavior trajectories are short (so are far from being mixed), the Liu et al. 2018 estimator is biased and converges to an incorrect estimate.\\n\\n\\nRe \\\" I am curious to know more about the bias-variance trade-off\\\":\\n\\nWe included in Appendix F a comparison with DualDICE and IPS on the simple but informative ModelWin problem. The task is to estimate the infinite-horizon average reward, using transitions from short trajectories.\\nOur method has lower bias. DualDICE\\u2019s bias is introduced by using $\\\\gamma<1$ to approximate infinite-horizon estimation target. IPS\\u2019s bias is due to the violation of the assumption that transitions are drawn from the stationary distribution of the behavior policy. Overall, please note that because of the consistency of our estimator, the bias of our approach converges to 0 asymptotically.\\nOur method has slightly higher variance. DualDICE has lower variance thanks to $\\\\gamma<1$, which controls the effective horizon (on the order of $1/(1-\\\\gamma)$). \\nWe are also running more experiments on larger-scale benchmarks and will include them in the final version of our paper.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Please find our detailed response in the following.\\n\\n1) Thank you for the great suggestion. We have now included a formal definition of $\\\\hat{\\\\mathcal{B}}_\\\\pi$ in Section 4.1, as well as pseudocode in Algorithm 1. Also, regarding \\\"equations on top of page 5\\\", thanks for pointing out this issue. In the initial submission, we were using the empirical and expected operator interchangeably in these equations. We have fixed the notation in the updated version.\\n\\n2) While we have one sampled transition for each state, there are many states along multiple trajectories in the behavior dataset. Furthermore, we use neural nets to represent the ratios $w(s,a)$, which allows \\\\emph{generalization} across different states, even though a state may appear only once in the dataset (say, in continuous state problems). We did not have any difficulty with this issue in the experiments.\\nThe reviewer is correct that, in order to have good approximation, $(s,a,r,s\\u2019)$ tuples in the dataset $\\\\mathcal{D}$ should be widespread across the state-action space. This is a common condition for batch RL to work well in general and is not specific to our algorithm.\\nMore generally, one-sample approximation of integrals is used in many successful algorithms in RL, such as fitted Q-iteration and deep Q-learning, etc.\\n\\n3) Based on our observations and experiments, for a wide range of hyper-parameters we did not see numerical issues with the output of the function approximator, i.e., the output is not close to $0$ or $\\\\infty$. However, to be safe, we clipped the output values of the function approximator. This could be done either by numpy.clip or clamp in PyTorch or clip_by_value function in TensorFlow.\\n\\n4) We use similar optimization techniques as in Liu et al. 2018, but should emphasize that our main contributions are in the new objective function, not how to optimize it. In particular, Our work aims to remove two significant assumptions of Liu et al., namely (1) knowledge of behavior policy; (2) more importantly, samples are drawn from the invariant distribution $d_{\\\\pi_0}$.\\nIn order to do so, we had to find a different fixed-point equation that led to the general objective function (9), which enables consistent estimation of the ratios $w(s,a)$. It is also in contrast to the learning target of Liu et al., which is a function of states only: $w(s)$.\\n\\nRe \\\"Minor point\\\":\\nWe have updated that part with more details. Please see the revised version.\\n\\nRe \\\"Suggestion\\\":\\nAs described in the paper, DualDICE only works for the discounted reward criterion ($\\\\gamma<1$). In contrast, our work considers the more general undiscounted criterion (Appendix A).\\nFor the purpose of comparison, we set $\\\\gamma$ to be close to $1$ in DualDICE, as an approximation of the undiscounted case. Therefore, DualDICE estimates could be biased, but using $\\\\gamma<1$ can potentially help reduce variance.\\nAppendix F now has a comparison with DualDICE on ModelWin, with larger scale experiments being run at the moment. Two $\\\\gamma$ values are used: $0.9$ and $0.9999$. As expected, DualDICE tends to have a higher bias but lower variance than our approach. The effect gets intensified with a smaller $gamma$ value.\"}",
"{\"title\": \"Summary of Changes\", \"comment\": [\"We thank the reviewers for carefully reading our paper and providing constructive comments. Here we give a summary of changes we have made in the manuscript, where the more important changes are highlighted in blue.\", \"Adding the pseudo-code of our algorithm (in Algorithm 1 Appendix E).\", \"Stating the definition of the empirical version of our backward-flow operator and fixing the equations corresponding to it (Section 4.1).\", \"Adding more background information about RKHS and MMD in Section 4.2 to make it more self-contained.\", \"Adding comparison with the DualDICE method from bias-variance point of view (Appendix F).\", \"Retuning our experiment for the Acrobot problem and updating the corresponding plot.\", \"Clarifying the consistency argument of our method.\", \"Moving the plots corresponding to the robustness of our approach to changing the behavior policy (Fig. 4 in the initial submission and Fig. 5 in the revised version) to Appendix G.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new algorithm to the off-policy evaluation problem in reinforcement learning, based on stationary state visitation. The proposed algorithm does not need to know the behavior policy probabilities, which provide a broader application scenario compared with previous work. It is based on directly solving the fixed point problem of the resulting target stationary distribution, instead of solving a fixed point problem about the stationary ratios.\\n\\nIn general, this paper is looking at an interesting problem with a lot of increasing focus these days. I think this paper contributes great idea and good theoretical analysis. I like the idea of matching the resulting target distribution instead of minimizing a loss over the ratio. However, several unclear places in the current version hurt the presentation of results. I would like to see them get improved and will increase my score accordingly if so.\", \"detailed_comments\": \"1) The algorithm part could be presented more clearly. So far I did not see where the empirical operator \\\\hat{B} is formally defined. The word *empirical* is also confusing to me in \\\"B is approximated by empirical data\\\" because B is not an expectation, but an *integral* which has no *empirical* opposite of it. For the equations on top of page 5, shouldn't they be k[, ] about empirical operator instead of the expected operator since the RHS is already in sample-based form?\\n\\n2) Related with the last one, B has an integral. To approximate the integral, we only have one sample from the transition probability actually, and the sample state is not uniformly distributed. It needs some explanation of why that would not cause a problem to approximate the integral.\\n\\n3) The current loss function is invariant to the scale of w at all. Since the w is normalized, this is not a problem for the resulting estimator, ideally. However, that can be a numerical issue for float numbers. It's possible that the output from function approximator w goes to 0 or \\\\infty. Both cases can lead to NaN of the output/function approximator update eventually. I'd like to hear if the author has met this problem in the experiment or not, and how can that be fixed. \\n\\n4) I have to point out, as just a slight con of this paper, the technique used in this paper is not that much different from Liu et al 2018. Since it minimizes a loss function which is a supremum over an RKHS, and the resulting empirical loss also has a similar form. It's nice to see the author provide some details of making it work with mini-batch. These details are important for function approximator as NN.\", \"minor_point\": [\"On page 12 the equations after \\\"We have by the definition of D_k\\\", I did not follow the second step of the equations.\"], \"suggestion\": \"This paper study the similar settings (behavior-agnostic OPE), using similar method (on the stationary distribution) came out several months ago: https://arxiv.org/pdf/1906.04733.pdf. I knew it's unfair to ask the author to compare it with a very recent prior/parallel work. However since they are in such a similar case, and they have code available, is it possible to directly compare with the result from their code? 
https://github.com/google-research/google-research/tree/master/dual_dice\\n\\n\\n======= After rebuttal =======\\nThe author's feedback clarified some of my concerns in the initial review. After reading the author's feedback and other reviews, I think this paper shows enough contribution relative to the related work. Some of my previous concerns (points 2 and 3) seem to hold for many related works in this area in general. I partly agree that it is not very fair to ask this paper to fix them. The updated version also presents the theory section in a clearer way. So I'd like to raise my score. I've no problem with acceptance, but I won't whole-heartedly argue for acceptance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary :\\n\\nThis paper proposes an approach for long horizon off-policy evaluation, for the case where the behaviour policy may not be known. The main idea of the paper is to propose an estimator for off-policy evaluation, that computes the ratio of the stationary state distributions (similar to recent previous work Liu et al.,) but when the behaviour policy may be unknown. This is an important contribution since in most practical off-policy settings, the behaviour policy used to collect the data may not necessarily be known.\", \"comments_and_questions\": [\"This is an important and novel contribution where the authors propose an approach for OPE that comes down to solving a fixed point operator. This is simlar to solving for fixed point Bellman operators, but where their problem can be formulated as a backward recursive problem.\", \"The operator defined as in equation 3 is in terms of the backward view of time - this allows the operator to capture the visitation from the previous s,a to the current s. This is the backward flow operator with which a fixed point equation can be described. Although the authors note that similar operators have appeared in the literature before - their main contribution is in terms of using such operators for the OPE problem, which seems novel to me and is an interesting approach with potential benefits as demonstrated in this paper.\", \"The core idea comes from equation 9 which tries to minimize the discrepancy between the empirical distribution and the stationary distribution. This can be formulated as an optimization problem, and the paper uses blackbox estimators, as described in section 4.2 for solving this problem.\", \"The next interesting part of the paper comes from solving the optimization problem in equation 9 with Maximum Mean Discrepancy (MMD) - this is a popular approach that has recently become well-known, and the authors make use of it minimize the differences between the empirical and the stationary state distribution.\", \"Section 4.2 appears a bit confusiing to me with some details missing - it would perhaps be useful if the authors could include more details for 4.2, especially with some explanations of how they arrived at the expression in equation 10. This would also make the paper more self-contained, for readers in the RL community perhaps not so well-read with literature on MMD. Appendix C contains the detailed derivation, but more intuitive explanations might be useful for the context of the paper.\", \"The proposed black box estimator seems quite useful as demonstrated in figures 2 and 3. Although the authors evaluate their approach of few simple domains - it would have been useful if there were more extensive experiments performed for OPE problems. This would be useful since from fig 2, it appears that the proposed method only outperforms in 3 out of 4 evaluated problems.\", \"For experiments, it would also be useful to demonstrate the significance of not knowing the behaviour policy and what are the usefulness of it. 
The paper is motivated in terms of unknown behaviour policies that generated the data - so a few experiments that explicitly show its benefit would perhaps strengthen the paper more.\", \"I am curious to know more about the bias-variance trade-off of the proposed OPE estimator as well. I.e., does the proposed method introduce any bias, or does it have significance in terms of lower variance for the long horizon problem? Experimentally, would it be possible to demonstrate whether the approach has lower variance compared to existing baselines?\"], \"score\": [\"Overall, I think the paper has useful contributions. It is a well written paper, but some additional details in section 4.2 might be useful, especially on the approach with MMD. Experimentally, I think there are some experiments missing and doing those could significantly strengthen the paper as well. The proposed method seems simple and elegant, and I would recommend a weak acceptance of the paper.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n\\nThis paper proposes using a black-box estimation method from Liu and Lee 2017 to estimate the propensity score for off-policy reinforcement learning. They suggest that, even when the full proposal distribution is available (as is the case in much of the off-policy RL literature), that estimating the propensity score can allow for a much lower variance off-policy correction term. This reduced variance allows the proposed method to \\\"break the curse of horizon\\\" as termed by Liu 2018.\\n\\nReview\\n\\nThe proposed fixed point formulation for learning a parameterized off-policy state distribution allows for lower variance off-policy corrections decreasing the impact of the curse of horizon. This approach appears both theoretically sound and empirically well-supported.\\n\\nI am concerned with the consistency argument made in section 4.3. It appears from equation (10) and the final statement of theorem 4.1 that it is necessary to have two independent samples of x' given a single state x. In the general case, without access to the environment model, it is not possible to obtain two samples of x'. If the environment is continuing, then the probability of returning to state x to obtain a second sample is 0. Am I misunderstanding the requirements specified by the objective function in equation (10)? An additional concern with the consistency argument is that it appears to assume that the approximator for d_w can achieve 0 error according to the MMD. If d_pi is not representable by the approximator, which could reasonably be the case in more challenging domains, it is unclear from this analysis or empirical results what the behavior of the system will be. It is difficult to assess how large of an assumption this is for consistency claim; for difficult problems will d_pi never be representable, or is this a fairly low concern?\\n\\nThe empirical results look to have high enough variance in the final outcomes that it is difficult to consistently assess the performance of each algorithm (looking specifically at figure 3). However, the primary competitor algorithm is IPS which the proposed algorithm handily beats in three of the four problems. In the fourth problem, it is unclear that the competitor algorithm is winning, and could in fact be better only due to chance given the size of the respective error bars. It is worth noting that, because the parameter settings were tuned only for 50 trajectories, it is important to primarily assess performance based only on that point. It is likely that, given more trajectories to learn from, each algorithm would have chosen a smaller stepsize and effectively performed similarly.\\n\\nAdditional comments (did not affect score)\\n\\nI would be interested in seeing the scalability of this approach empirically. Given the additional parameterized function to learn, I am unsure if this method would scale reasonably to much larger problems. However, I recognize that the scalability question is largely outside the scope of this paper.\"}"
]
} |
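Once the stationary-distribution ratios w(s, a) discussed in this record are learned, they are typically consumed by a self-normalized weighted average over the behavior transitions. The sketch below shows that standard pattern under the assumption that `w` and `r` are arrays of learned ratios and observed rewards aligned with the transitions in the dataset D; it is the usual estimator form for this line of work, not a quotation of the paper's own equations.

```python
import numpy as np

def average_reward_estimate(w, r):
    """Self-normalized importance-weighted estimate of the target policy's
    long-run average reward, given learned ratios w_i = w(s_i, a_i) and
    rewards r_i observed in the behavior data.
    """
    w = np.asarray(w, dtype=float)
    r = np.asarray(r, dtype=float)
    return float(np.sum(w * r) / np.sum(w))
```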
SkeFl1HKwr | Empirical Studies on the Properties of Linear Regions in Deep Neural Networks | [
"Xiao Zhang",
"Dongrui Wu"
] | A deep neural network (DNN) with piecewise linear activations can partition the input space into numerous small linear regions, where different linear functions are fitted. It is believed that the number of these regions represents the expressivity of a DNN. This paper provides a novel and meticulous perspective to look into DNNs: Instead of just counting the number of the linear regions, we study their local properties, such as the inspheres, the directions of the corresponding hyperplanes, the decision boundaries, and the relevance of the surrounding regions. We empirically observed that different optimization techniques lead to completely different linear regions, even though they result in similar classification accuracies. We hope our study can inspire the design of novel optimization techniques, and help discover and analyze the behaviors of DNNs. | [
"deep learning",
"linear region",
"optimization"
] | Accept (Poster) | https://openreview.net/pdf?id=SkeFl1HKwr | https://openreview.net/forum?id=SkeFl1HKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"z4Qv0-YCbu",
"SkgRXY6cir",
"rkeSGKa9jS",
"SylLjOT5iH",
"ryxYOd65jB",
"H1xQe_6qoH",
"ByeMIYZJ5H",
"Byg1GC30tH",
"SJx9dS5oFr",
"H1xyI5totr",
"rJeA5tIstS",
"r1xXflPEKS",
"rJxwW57NtS",
"rygx_TkxFr",
"SkgO4pZJYr",
"HkxE7zQROB",
"ByxNE7_aOB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1576798725310,
1573734694070,
1573734669245,
1573734558232,
1573734512570,
1573734378875,
1571916105608,
1571896838519,
1571689842016,
1571686982859,
1571674517948,
1571217419343,
1571203582789,
1570925928007,
1570868528428,
1570808347746,
1570763563612
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1514/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1514/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1514/AnonReviewer2"
],
[
"~Runyao_Chen1"
],
[
"~Thiago_Serra1"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"~Haokun_Luo1"
],
[
"ICLR.cc/2020/Conference/Paper1514/Authors"
],
[
"~Thiago_Serra1"
],
[
"~Haokun_Luo1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies the properties of regions where a DNN with piecewise linear activations behaves linearly. They develop a variety of techniques to chracterize properties and show how these properties correlate with various parameters of the network architecture and training method.\", \"the_reviewers_were_in_consensus_on_the_quality_of_the_paper\": \"The paper is well written and contains a number of insights that would be of broad interest to the deep learning community.\\n\\nI therefore recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2 - part2\", \"comment\": \"- Also, how much can be perceived from distributions shown in figure 2, since inradius (Eq. 5) may turnout to be a very coarse representation of linear regions, especially for deeper networks. Can the authors clarify this? Moreover, how would the figures look if we were using a different objective, dataset or architecture?\\n\\nAs mentioned in Section 2.2, a polytope can be represented in V-representation or H-representation, but it is challenging to convert one representation into the other. If we want to explore the size of a linear region, V-representation would be a better choice. However, only H-representation can be obtained from the activation states of the DNN, resulting in difficulties in calculating the size. As a result, we used inradius to measure the narrowness of a linear region, which is related to its size and can be easily calculated from the H-representation. In our experiments, different optimization techniques did lead to different narrowness of the linear regions. In addition, the results of a simple CNN trained on the CIFAR-10 dataset, which was presented in Appendix E, showed similar patterns. \\n\\n- In figure 3, is it not possible to show the average results instead of just one example?\\n\\nThe hyperplanes of different points are not in one-to-one correspondence, which means a pixel of one example has no relationship with the same pixel of another one, hence we think using average results may not be reasonable here.\\n\\n- I would further like to know how the authors would deal with scalability issues if their analysis were to applied to more realistic (i.e. large) network architectures.\\n\\nIt is indeed a limitation of our approach when applied to large network architectures, as mentioned in Section 4. However, we have some preliminary thoughts on dealing with this problem. On the one hand, $\\\\mathbf{x}^*$ is naturally an initial feasible solution of the convex optimization problem, which benefits the optimization process at the beginning. Moreover, large network architectures are usually CNNs, resulting in sparse weights of the inequalities, which may also help accelerate the optimization process. On the other hand, architectures used now are believed to be over parameterized, hence we may reduce the redundant parts before we analyze the architectures. However, these are just our preliminary thoughts, and a lot more studies are required.\\n\\n[1]Boris Hanin and David Rolnick. Complexity of linear regions in deep networks. ICML, 2019.\\n[2]Novak et al. Sensitivity and generalization in neural networks: An empierical study. ICLR, 2018.\\n[3]Boris Hanin and David Rolnick. Deep ReLU Networks Have Surprisingly Few Activation Patterns. NeurIPS, 2019\\n[4]Balduzzi et al. The shattered gradients problem: If resnets are the answer, then what is the question? ICML, 2017.\"}",
"{\"title\": \"Response to Reviewer #2 - part1\", \"comment\": \"Thank you for the valuable feedback! We respond to the weakness in the following and hope we have addressed all the concerns.\\n\\n- The paper is clearly written and is easy to follow for the most part. The paper indeed presents a number properties for analyzing the nature of linear regions in DNNs; however it falls short of connecting them with an improvement in the optimization or interpretability of DNNs. Even with respect to providing support for general applicability, the work does not go very far: without enough variation in data (not just image benchmarks), tasks and architecture, it is hard to determine if the analysis tools presented in the paper generalize beyond the chosen setup. For instance just the optimization techniques compared in the paper have their own hyperparameters and it is not clear how the results might vary with them.\\n\\nThe main purpose of our paper is to provide a new geometric perspective to study the linear region instead of just counting them. The number of the linear regions represents the expressivity of a DNN, but fails to indicate the influence of region\\u2019s geometric properties on some local behaviors of DNN, such as robustness. We think our research may give some new inspirations for studying linear regions. There are a number of studies on linear regions, and our experimental setting mostly followed a previous paper [1]. In addition to the fully-connected DNN, we also presented the results of a simple CNN on the CIFAR-10 dataset, which showed similar patterns. We also performed the experiment on a toy dataset as you suggested, whose results demonstrated similar properties of linear regions. As there are so many choices for training a DNN, we cannot fit all of them in a single paper; so, we put the emphasis on BN and dropout while keeping other hyper-parameters by default. However, some of our findings were also observed in other studies [2][4], which shows the generalization of our results beyond the chosen setup.\\n\\n- I am not sure what to take away from figure 1 since it's only a two-dimensional slice of very high-dimensional input space. Maybe the authors could instead choose an example with a low-dimensional input space for illustration purposes.\\n\\nThe figure showing a two-dimensional slice of the input space was widely used in other papers to show some intuitive properties of the linear regions [1][2][3], and we thought it is suitable for illustration purposes. However, we do believe it is important to precisely illustrate the properties; so, according to your suggestion, we added another experiment on a toy 2D dataset in the revision. Please see Appendix A for more details.\"}",
"{\"title\": \"Response to Reviewer #3 - part2\", \"comment\": \"- Why should distortion be a good measure of the size of a linear region?\\n\\nThe best measure here should be the volume, but as mentioned in Section 3.4, calculating the volume of a high-dimensional polytope is really challenging. Though the inradius can show the narrowness of the linear region, it cannot represent the size of a linear region completely (just imagine a long rectangle), hence the exradius is also needed to describe the size. Unfortunately, calculating the exradius is also a difficult task for H-representation (though easy for V-representation), so we have to use distortion as a rough measure. $\\\\mathbf{x}^t$ lies on the surfaces of the linear region, and is usually far from the decision boundaries. Let\\u2019s imagine a simple linear model: $\\\\mathbf{x}^t$, whcih is the point with the highest probability to be classified as $t$, must be the farthest point to the decision boundary. Therefore, here we used distortion to roughly represent the exradius of a linear region.\\n\\n- The authors claim that it is expensive to run their approach, and that they will aim to improve speed in the future. Can the authors give a more concrete example of runtimes in their current approach?\\n\\nThe most time-consuming part of our experiments is finding $\\\\mathbf{x}^t$ in Section 3.4, costing about 37 seconds per sample. The convex optimization can be solved in polynomial time, depending on the optimizer used (here we use MOSEK https://www.mosek.com/), but the computing time increases with the number of the constraints and the input dimensionality. The optimization for each sample can run in parallel, but our computing resources were really limited.\"}",
"{\"title\": \"Response to Reviewer #3 - part1\", \"comment\": \"We thank the reviewer for the constructive comments, which helped improve the paper. We also apologize for our mistake for including the acknowledgement. It has been removed in our revision. Thank you for pointing this out!\\nWe address your detailed comments below.\\n\\n- Figure 1 Top: What do the different colour represent in the linear regions plot?\\n\\nThe color represents the ratio of the activated nodes in a linear region. We added this in the caption of Fig. 1 in the revision. Albeit that different colors were used to separate linear regions in the previous paper, we believe our plot can provide more information since the gray lines have already illustrated in different regions.\\n\\n- Section 2.1: maybe add a toy graph that visualises the depth-wise \\u2018exclusion\\u2019 process of feasible \\u201cneighbours\\u201d of x*?\\n\\nThanks for your suggestion! We added Fig. 2 to illustrate this more clearly. Please check Section 2.1 in our revision.\\n\\n- Eq. (2) & (3): Explain where these equations come from.\\n\\nAs mentioned in Section 2.1, the first $l-1$ hidden layers serve as an affine transformation of $\\\\mathbf{x}\\\\in S_{l-1}$. Besides, the pre-activation outputs of the $l$-th hidden layer $\\\\mathbf{h}^l(\\\\mathbf{x})$ are also an affine transformation of the activation outputs of the $(l-1)$-th layer, hence $\\\\mathbf{h}^l(\\\\mathbf{x})$ is a linear function of $\\\\mathbf{x}\\\\in S_{l-1}$, which means $\\\\mathbf{h}^l_n(\\\\mathbf{x})=\\\\mathbf{w}_n^T\\\\mathbf{x}+b_n$, where $n$ denotes a node of the $l$-th layer. For a linear function $y=\\\\mathbf{w}^T\\\\mathbf{x}+b$, the $\\\\mathbf{w}$ can be directly calculated by $\\\\mathbf{w}=\\\\nabla_{\\\\mathbf{x}}y$, whereas $b=y-\\\\mathbf{w}^T\\\\mathbf{x}$. Here $\\\\mathbf{x}$ and $y$ can be replaced by $\\\\mathbf{x}^*$ and $\\\\mathbf{h}_n^l(\\\\mathbf{x}^*)$ because $\\\\mathbf{x}^*$ shares the same linear function as other $\\\\mathbf{x}\\\\in S_{l-1}$. Last, the parameters are multiplied by $\\\\mbox{sgn}(\\\\mathbf{h}_n^l(\\\\mathbf{x}^*))$ to make sure that the inequalities, which indicate the activation states of the $l$-th layer, are all in the $\\\\geq$ form.\\nA formal deduction was added in Appendix B.\\n\\n- Sec 3.2, first sentence. The authors claim that inspheres of linear regions are highly relate to the expressivity of DNN. Can they elaborate on that claim? Is this claim a result of their experiments?\\n\\nIt is believed that a DNN with more linear regions has a larger potential to fit complex functions [1][2]. For example, a regular hexagon is a better approximation of a circle than a square. Small inspheres do demonstrate the narrowness of the linear regions, resulting in a large number of regions.\\n[1] Poole et al. Exponential expressivity in deep neural networks through transient chaos. NIPS, 2016.\\n[2] Pascanu et al. On the number of response regions of deep feed forward networks with piece-wise linear activations. https://arxiv.org/abs/1312.6098. 2014\\n\\n- What is the relationship between the number of constraints in eq. (5) and the radius of an insphere? Does the insphere size decrease with more constraints? What implications would that have on deeper networks than the one that was presented?\\n\\nYes, the radius does decrease with more constraints, or more precisely, irredundant constraints (which means the constraints cannot be implied by others). 
A smaller inradius usually results in a larger number of linear regions, hence deeper networks usually have higher fitting ability. Regarding the number of linear regions, i.e. the complexity of DNNs, a well-known question is that why deeper networks have better generalization, instead of overfitting? We believe it comes from the relevance among the linear regions. A node has a set of fixed weights, creating different constraints for different activation states of the preceding layers, which can be regarded as that part of the weights are picked to construct a constraint. However, different parts of the weights are chosen from the same set, resulting in this relevance. Maybe we are a little off the topic here, but it is really an interesting research direction. Another interesting direction is to show that depth provides irredundant constraints more efficiently than width, which is still our work in progress.\"}",
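The derivation of the halfspace parameters above translates almost directly into autograd code. A minimal PyTorch sketch (illustrative only; `preact` is a hypothetical function returning the scalar pre-activation h^l_n(x) of one node, not the authors' code):

    import torch

    def halfspace(preact, x_star):
        x = x_star.clone().requires_grad_(True)
        h = preact(x)                     # scalar pre-activation at x*
        (w,) = torch.autograd.grad(h, x)  # w = grad_x h, valid since h is linear near x*
        b = h.detach() - w @ x_star       # b = h(x*) - w^T x*
        s = torch.sign(h.detach())        # sign flip puts the constraint in >= form
        return s * w, s * b               # the halfspace s * (w^T x + b) >= 0

Collecting one such pair per node over all layers yields the full H-representation of the linear region around x*.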
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for the positive comments! Our detailed replies are given below.\\n\\n- This paper enumerates a number of interesting findings, all of which seem to raise intriguing questions about the properties of trained networks. However, after reading the paper, I am left a little unsure of what to make of the results. However, I do not think this is a fault of the paper, instead I enjoyed that this paper raises so many interesting questions. Still, some more discussion and interpretation of the results is perhaps warranted. I especially enjoyed the writing, the problem statement and exposition were clear and easy to follow.\\n\\nThanks for your interest! We expanded our discussion according to your suggestions. Please see our revision for more details.\\n\\n- Perhaps the authors could comment, in the discussion, if they think their methods could be extended to networks with smooth nonlinearities (such as tanh), or what aspects of their results are also apply to networks with different nonlinearities.\\n\\n\\u2018Hard\\u2019 linear regions can only be defined when the activation is piecewise linear. However, we believe our findings can be extended to networks with smooth nonlinear activation, because a smooth nonlinear activation, like tanh, can be approximated by piecewise linear functions. So far there is no precise definition of \\u2018soft\\u2019 linear regions for DNNs with smooth nonlinearities, but we may provide some preliminary ideas to find these \\u2018soft\\u2019 linear regions. First, we need a local linearity measure, such as Eq. (5) in https://arxiv.org/abs/1907.02610; and then set a threshold of nonlinearity of every neuron, resulting in a set of inequalities to describe a \\u2018soft\\u2019 linear region. Unfortunately, our methods cannot be directly applied to analyzing these \\u2018soft\\u2019 linear regions since their convexity is not guaranteed. It is still an open problem to precisely analyze these \\u2018soft\\u2019 linear regions.\\nA similar discussion was added in Section 4. Please check our revision for more information.\\n\\n- I was also curious if the authors could comment on similarities and differences between their findings and this relevant paper (https://arxiv.org/abs/1802.08760) by Novak et al. that empirically computes linear regions for 2D slices through input space.\\n\\nThis is a highly related work to ours. Thanks for pointing this out! Fig. 3 in Novak et al. illustrates that the on-manifold regions are usually larger than the off-manifold regions after training, which is also implied by our Fig. 3: the manifold regions are usually larger than the decision regions (see the blue lines of the first two columns). Besides, our paper also shows some other properties of the linear regions, and compares the influences introduced by different optimization techniques.\\nWe updated Section 3.2 in our paper to include this discussion.\\n\\n- Minor edits\\n\\nWe revised our manuscript according to your suggestions. Thanks again!\"}",
"{\"title\": \"Indeed!\", \"comment\": \"Our observations do imply the same result: BN may be one of the reasons which leads to the vulnerability of DNNs. However, there are still some differences.\\n\\nOur results empirically showed that BN usually introduces smaller size of linear regions, but the number of classification regions in a linear region doesn't decrease along with the size, resulting in less robustness.\\n\\nTheir observations showed that BN can lead to the tilting angles of the decision boundary w.r.t. the nearest-centroid classifier, especially when the variances of some hidden outputs are very small. As a result, many points are very close to the decision boundaries, resulting the vulnerability. However, their results cannot imply that BN leads to smaller size of linear regions whereas keeping the number of classification regions in a linear region nearly the same.\\n\\nBy the way, we will delete Table 1 for brevity. Thanks for pointing this out!\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper addresses the following: how do batch normalization and dropout affect the number of linear regions present in a deep network? It does so by devising a search procedure for enumerating a set of linear inequalities that define the linear region around a particular input. The linear region is defined as the region of input space that activates the same units/nodes in the network. The authors compute these linear regions for three different types of fully connected networks trained with: vanilla (nothing added), batch norm, and dropout. Given these linear regions, the authors studied a number of their properties, such as the radii of inscribed spheres, angles between hyperplanes, and number of unique surrounding regions.\", \"Comments\", \"This paper enumerates a number of interesting findings, all of which seem to raise intriguing questions about the properties of trained networks. However, after reading the paper, I am left a little unsure of what to make of the results. However, I do not think this is a fault of the paper, instead I enjoyed that this paper raises so many interesting questions. Still, some more discussion and interpretation of the results is perhaps warranted.\", \"I especially enjoyed the writing, the problem statement and exposition were clear and easy to follow.\", \"Perhaps the authors could comment, in the discussion, if they think their methods could be extended to networks with smooth nonlinearities (such as tanh), or what aspects of their results are also apply to networks with different nonlinearities.\", \"I was also curious if the authors could comment on similarities and differences between their findings and this relevant paper (https://arxiv.org/abs/1802.08760) by Novak et al. that empirically computes linear regions for 2D slices through input space.\", \"Minor edits\", \"After introducing the definition of the insphere (eq 5), it would be helpful to remind the reader that this is for a particular region defined by the set of inequalities C^\\\\*.\", \"Typo in footnote 2 on page 5: partitioned\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"First, I believe that the acknowledgements in the manuscript give identifying information which could stand in conflict with a double blind review process. I\\u2019ll leave it to the area chairs/program chairs to make a decision on this. The following review will be contingent on the fact that the authors did not break the submission rules.\\n\\nThis paper aims to give new insights into deep neural networks by presenting a number of approaches to analyse the linear regions in such networks. The authors define a linear region around a point x* as the intersection between a number of half spaces that are defined through linear approximations of a DNN (tangents) around that point x*. The authors show that points within these regions can be found using convex optimization with a number of linear constraints that are equal or less than the number of nodes in a DNN. In experiments with a fully connected network the authors analyse different properties of these linear regions: (1) How big is the biggest sphere that we can fit in a linear region? (2) How much do the hyperplanes that define a region correlate with each other? (3) How reliably does a linear region represent a single class? And (4), How does a linear region interact with neighbouring regions? In their presentation, the authors focus on comparing these properties between models that were either trained without regularisation, with batch normalisation, or with dropout and with different learning rates. This allows them to draw insightful conclusions about the difference between linear regions in these models. The authors hope that their work will enable new ways of analysing DDNs that will inspire new architectures and optimization techniques.\\n\\nI vote to accept this paper. The authors present a large array of methods to analyse the linear regions of DNNs. Their insights into the differences of BN and Dropout are useful (figure 1 & 2) and sensible (figure 3). The implications of linear regions on adversarial robustness can have an impact in the future. Because the paper relies on geometrical reasoning, I wished there would be more visualisations that guide the reader.\", \"here_are_a_number_of_comments_and_questions_that_i_have_on_the_manuscript\": [\"Figure 1 Top: What do the different colour represent in the linear regions plot?\", \"Section 2.1: maybe add a toy graph that visualises the depth-wise \\u2018exclusion\\u2019 process of feasible \\u201cneighbours\\u201d of x*?\", \"Eq. (2) & (3): Explain where these equations come from.\", \"Sec 3.2, first sentence. The authors claim that inspheres of linear regions are highly relate to the expressivity of DNN. Can they elaborate on that claim? Is this claim a result of their experiments?\", \"What is the relationship between the number of constraints in eq. (5) and the radius of an insphere? Does the insphere size decrease with more constraints? What implications would that have on deeper networks than the one that was presented?\", \"Why should distortion be a good measure of the size of a linear region?\", \"The authors claim that it is expensive to run their approach, and that they will aim to improve speed in the future. Can the authors give a more concrete example of runtimes in their current approach?\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work presents an array of analytical tools to characterize linear regions of deep neural networks (DNNs). Using the tools the work analyzes the effect of dropout and batch normalization (BN) on the linear regions of trained DNNs; namely, by assessing the properties such as inspheres, orientation of hyperplanes, decision boundaries and relevance of surrounding regions, the authors highlight the differences and similarities of linear regions induced by vanilla SGD as compared to SGD with dropout or BN.\\n\\nThe paper is clearly written and is easy to follow for the most part. The paper indeed presents a number properties for analyzing the nature of linear regions in DNNs; however it falls short of connecting them with an improvement in the optimization or interpretability of DNNs. \\n\\nEven with respect to providing support for general applicability, the work does not go very far: without enough variation in data (not just image benchmarks), tasks and architecture, it is hard to determine if the analysis tools presented in the paper generalize beyond the chosen setup. For instance just the optimization techniques compared in the paper have their own hyperparameters and it is not clear how the results might vary with them. \\n\\nI am not sure what to take away from figure 1 since it's only a two-dimensional slice of very high-dimensional input space. Maybe the authors could instead choose an example with a low-dimensional input space for illustration purposes.\\n\\nAlso, how much can be perceived from distributions shown in figure 2, since inradius (Eq. 5) may turnout to be a very coarse representation of linear regions, especially for deeper networks. Can the authors clarify this? Moreover, how would the figures look if we were using a different objective, dataset or architecture?\\n \\nIn figure 3, is it not possible to show the average results instead of just one example?\\n \\nI would further like to know how the authors would deal with scalability issues if their analysis were to applied to more realistic (i.e. large) network architectures.\"}",
"{\"comment\": \"It\\u2019s a nice work which may inspire other researchers!\\n \\nThe result in Table 5 shows that BN introduces smaller size of linear regions, but dose not reduce the number of classification regions in a linear region. It reminds me of another paper which claims that BN may be one of the causes of adversarial examples.\\n \\nGalloway, Angus, et al. \\\"Batch Normalization is a Cause of Adversarial Vulnerability.\\\" arXiv preprint arXiv:1905.02161 (2019). https://arxiv.org/abs/1905.02161\\n \\nP.S. I think there is no need to present Table 1 because the architecture has already been clarified in the context.\", \"title\": \"About BN and adversarial examples\"}",
"{\"comment\": \"Indeed, there is a lot to be analyzed and for sure you cannot fit it all in a single paper!\", \"title\": \"Follow-up\"}",
"{\"comment\": \"Thank you for your interest and additional information! It\\u2019s a nice work which achieved tighter bounds on the maximal number of linear regions and presented more detailed influence of the depth and width of DNNs. We will add this part of discussion into our Introduction. By the way, we think it also interesting to analyze the properties of linear regions introduced by the depth and width. However, there are so many details and choices for training a DNN and we cannot analyze them all, so we put the emphasis on BN and dropout in our paper.\", \"title\": \"RE: Regarding depth and the number of linear regions\"}",
"{\"comment\": \"Thanks for author's quick response!\", \"title\": \"Thanks a lot!\"}",
"{\"comment\": \"Thanks for your comments! Our detailed responses are as follows:\\n\\n1.The paper you mentioned is very valueable, but it seems that only a heuristic conjecture, which discussed linear regions and adversarial examples, is presented in Section 2.2 (their work):\\n\\n\\u201cMoreover, the distance from a typical point to the transition boundaries of linear regions gives a heuristic lower bound for the typical distance to an adversarial example: two inputs closer than the typical distance to a linear region boundary likely fall into the same linear region, and hence \\nare unlikely to be classified differently.\\u201d\\n\\nHowever, it is not consistent with what we observed. As the results we presented in Table 5, a linear region can also contain many classification regions, which means that you could find points with different labels in a linear region. Therefore, according to our experiments, we think the conjecture in [1] is arguable.\\n\\nThe work you mentioned is highly related to our topic, so we will add it into reference in the revision. Again, thanks for pointing this out.\\n\\n2.For second comment, you can consider the C&W loss function. The high-order derivatives of the C&W loss function with respect to the input are all 0, since the DNN behaves completely linear in the linear region. We will add this simple discussion in the revision to make our point clear.\", \"title\": \"Thanks for your comments\"}",
"{\"comment\": \"It is great to see linear regions analyzed through new angles. As someone who has worked on the topic, I would like to add something to your discussion on literature review.\\n\\nRegarding the comment that \\\"Studies have shown that the number of the linear regions increases more quickly with the depth of the DNN than the width\\\", there is actually an analytical trade-off between depth and width that depends on the number of neurons and the size of the input: https://arxiv.org/abs/1711.02114\\n\\nIn Figure 5 of the mentioned paper, you can see that the maximum number of linear regions attainable by neural networks with 60 units according to the size of the input. The bound is exact for shallow networks (the case of 1 layer with 60 neurons), which implies that shallow networks may define more linear regions than deep networks for the same number of units if the size of the input is sufficiently large.\", \"title\": \"Regarding depth and the number of linear regions\"}",
"{\"comment\": \"Interesting work really, and it seems to be the only work on linear regions this time (;D). It is inspiring to study linear regions from geometric perspectives, instead of just counting the number, because some very local behaviors of DNNs, such as adversarial examples, are highly related to the properties of the linear region. However, I have some little comments here.\\n\\u00a0\\n1.\\u00a0I\\u2019d like\\u00a0to mention a closely related paper\\u00a0[1], which also discussed the connection between linear regions and adversarial examples.\\n[1] B. Hanin and D. Rolnick, \\u201cComplexity of Linear Regions in Deep Networks,\\u201d in Proc. 36th Int\\u2019l Conf. on Machine Learning, Long Beach, CA, 2019, pp. 2596\\u20132604.\\n\\u00a0\\n2.\\u00a0In Section 3.4, the authors claimed that the \\u201chigh-order adversaries are not guaranteed to be better than first-order ones\\u201d. It would be nice if the authors could give an example to make it clear.\", \"title\": \"Interesting work!\"}"
]
} |
rJxtgJBKDr | SNOW: Subscribing to Knowledge via Channel Pooling for Transfer & Lifelong Learning of Convolutional Neural Networks | [
"Chungkuk Yoo",
"Bumsoo Kang",
"Minsik Cho"
] | SNOW is an efficient learning method to improve training/serving throughput as well as accuracy for transfer and lifelong learning of convolutional neural networks based on knowledge subscription. SNOW selects the top-K useful intermediate feature maps for a target task from a pre-trained and frozen source model through a novel channel pooling scheme, and utilizes them in the task-specific delta model. The source model is responsible for generating a large number of generic feature maps. Meanwhile, the delta model selectively subscribes to those feature maps and fuses them with its local ones to deliver high accuracy for the target task. Since a source model takes part in both training and serving of all target tasks in an inference-only mode, one source model can serve multiple delta models, enabling significant computation sharing. The sizes of such delta models are a fraction of the source model's, so SNOW also provides model-size efficiency. Our experimental results show that SNOW offers a superior balance between accuracy and training/inference speed on various image classification tasks compared to existing transfer and lifelong learning practices. | [
"channel pooling",
"efficient training and inferencing",
"lifelong learning",
"transfer learning",
"multi task"
] | Accept (Poster) | https://openreview.net/pdf?id=rJxtgJBKDr | https://openreview.net/forum?id=rJxtgJBKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3l0RPCKgnw",
"Syel4Qx2oH",
"Hkl1amTooH",
"SyxUr7pjiH",
"r1gqfQasiS",
"rkxoNbajir",
"H1gdFC2osS",
"SygBFYrscB",
"rygKCo52KH",
"SJlLFPEjFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725282,
1573810984147,
1573798838607,
1573798717977,
1573798673661,
1573798194773,
1573797504507,
1572718972733,
1571757008851,
1571665790130
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1513/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1513/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1513/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1513/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a method, SNOW, for improving the speed of training and inference for transfer and lifelong learning by subscribing the target delta model to the knowledge of source pretrained model via channel pooling.\\n\\nReviewers and AC agree that this paper is well written, with simple but sound technique towards an important problem and with promising empirical performance. The main critique is that the approach can only tackle transfer learning while failing in the lifelong setting. Authors provided convincing feedbacks on this key point. Details requested by the reviewers were all well addressed in the revision.\\n\\nHence I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Good improvements to the paper - increasing to Accept\", \"comment\": \"Thank you for your detailed reply and the changes to the paper. The hyperparameter sensitivity and the robustness of the model are now clearer, and I am happy to increase my score to Accept.\"}",
"{\"title\": \"Response to Reviewer#2\", \"comment\": \"C1. We appreciate your comments on SNOW regarding the number of models. One might assume that SNOW consists of multiple models as it has the source model and the \\u201cdelta\\u201d models. But, the term \\u201cdelta model\\u201d is used for an easier explanation about the expanded parts in the SNOW architecture. We argue that the entire architecture based on SNOW can be regarded as a single model (which just consists of various modules to deliver multi-task predictions) because a delta model alone is not semantically sufficient for an intended task when there already exists transferability from the source task to the target task. In fact, the delta and source models must efficiently cooperate as a single-engine to perform target tasks effectively (which is the key difference from a collection of models such as ensemble learning), which is the main contribution in this work.\\n \\nExpanding a single model as in SNOW is becoming a popular approach to address catastrophic forgetting in lifelong learning. For example, ProgressiveNet (Rusu, 2016) in our experiment can be considered as a single model even though the entire layer pipelines are duplicated for each new task as an expansion. Other various kinds of expansions have been published in major venues recently for lifelong learning, and we here provide a list of the latest representative publications with short summaries.\\n \\n-Lifelong Learning with Dynamically Expandable Networks [ICLR18]\\nThe authors considered lifelong learning simply as a special case of online or incremental learning, in the case of deep neural networks. The proposed approach in this paper is to partially expand the model capacity with new tasks.\\n \\n-Autonomous Deep Learning: Continual learning approach for dynamic environments [ICDM19]\\nA fully elastic deep neural network (DNN), namely Autonomous Deep Learning (ADL) is proposed where the new hidden layers can be dynamically added under the lifelong learning paradigm.\\n \\n-Scalable Recollections for Continual Lifelong Learning [AAAI19]\\nA small auto-encoder per new task is attached for the experience replay purpose for multi-task lifelong learning (instead of episodic memories).\\n \\n-Continual Palmprint Recognition Without Forgetting [ICIP19]\\nThis paper proposes to use reinforcement learning to dynamically expand the neural network when facing newly registered palmprints.\\n \\n-Lifelong Learning Starting from Zero [ICAGI19]\\nThis work proposes to add new nodes/neurons for expansion (which adds new nodes to memorize new input combinations) and generalization (which adds new nodes that generalize from existing ones).\\n \\nC2. Thanks for suggestions. We tried to place the legend outside the figures, but the space limitation makes it very hard. As an alternative solution to prevent any confusion on the readers, we overlay the legend over a gray bounding box which increases the readability as well. Hope this would work for you and future readers.\\nThe purpose of the last picture in Fig. 4 is to show that SNOW requires a similar number of epochs for convergence as well. In the end, what matters most in practice is the wall-clock runtime which is num_epochs X total_samples/throughput. In some cases, it is possible that a certain approach may have a better throughput but require more epochs for convergence, eventually netting a longer end2end training time. Here, with the pictures in Fig. 
4, we showed that the total training time of SNOW will be superior to PN, FT, MP, because of the same epoch count for convergence (after 60 epochs) and higher throughput. Note that we simply let all the algorithms run for 200 epochs to ensure that nothing is left for any algorithm. We have made this point clear in the revision (the last paragraph in Section 3.1).\"}",
"{\"title\": \"Response to Reviewer#3 - Part 2\", \"comment\": \"C7. Thank you for suggesting another important comparison to re-validate our results. We used it 0.1 as it is a standard learning rate for resnet50 family in many publications. The reason why we picked a larger learning rate for SNOW was to tune our weights fast enough to stabilize the training earlier. We have not really tuned the hyper-parameters for SNOW (i.e., learning rate/schedule) yet, which is one of our future to-do items. To ensure no mistake, we applied lr=1.0 with batch size 256 to all baselines on the CAR dataset, and the results on Top-1 accuracy are here.\\n\\n \\t | FO\\t FE\\t FT\\t MP\\t PN\\n------------------------------------------------------------------------\\naccuracy |\\t51.95\\t 49.12 1.24\\t 4.79\\t 59.02\\n\\nIt confirms that lr=0.1 was a much superior choice for other algorithms to lr=1.0.\\n\\nC8. Your statement is correct that a larger delta model would be needed for such cases, as the delta model needs to generate more features by itself. We highlighted your point in the first paragraph of Section 2.1.\\n \\nC9. Grammatical errors / suggestions: All corrected, thank you.\"}",
"{\"title\": \"Response to Reivewer#3 - Part 1\", \"comment\": \"C1. Thanks for the comment. To demonstrate the effects of different sigma values, we did apply different values to the car dataset, and here is the result:\\n\\nsigma\\t1e-1 1e-3\\t 1e-5\\t1e-7\\n---------------------------------------------------------\\ntop1\\t81.27\\t83.79\\t83.45\\t83.35\\ntop5\\t95.85\\t96.94\\t96.99\\t96.88\\n\\nAs you can see too large or too small sigma values can lead to sub-optimal predictive power. Therefore, it is important to develop a method to find good sigma values, as the reviewer pointed out. We're currently researching in that direction: one idea we have is to examine the early weight distribution (i.e., after a few epochs) and then determine the sigma value to balance out exploration and stabilization. We discussed this point in Section 3.2.\\n\\n\\nC2. Thanks for the valuable suggestions. We have measured SNOW-256 (with batch size 256) accuracy again over 5 times on each dataset. We found that the avg top-1 accuracy is in fact slightly better than ones in the submission draft. Here is the accuracy distribution and we have updated the draft with the avg/std numbers accordingly.\\n \\n Food DTD Action Cars CUB \\n--------------------------------------------------------------------------\\navg.\\t84.06\\t 72.37\\t 78.48 83.79\\t 75.81\\nstd\\t 0.124 0.520 0.265 0.181 0.297\\n\\nNote that SNOW-128 (with batch size 128) results are also based on the 5 runs.\\n\\n\\nC3. The top-K feature selections change very frequently at the beginning of the training and get stabilized with more epochs. Figure 5 in Section 3.2 shows which features are selected (in the solid vertical lines) during the training of CAR dataset under the same configuration/hyper-parameters in Section 3.1. The top X-axis represents the channel indices, and the left Y-axis represents the training progress (in terms of iteration). The horizontal dotted lines indicate the start of the next epoch. You can see that some channels join and leave the top-K frequently (i.e., small dots) yet some stay in the top-K consistently, getting more stable as training continues.\\n\\nRegarding the comment on sigma, please refer to the provided table in C1.\\n\\nC4. Thanks for the comments, and we agreed with the reviewer and updated the paper accordingly. We wanted to normalize the comparison over the typical mini-batch size for the tested datasets, which made PyTorch split the training over multiple GPUs. When two GPUs are used for training, the communication between GPUs can incur some overheads. However, our platform has GPUDirect over NVLink2 between GPUs which has 160GB/s bandwidth, thus the impact should have been rather limited. \\n\\nPer the reviewer's suggestion, we refreshed all the experiments under the single GPU constraint. In detail, we reduced the batch sizes of MP, FT, PN until it fits into one GPU. Additionally, we tested SNOW with a smaller batch size (128 from 256) to ensure that SNOW still offers advantages with that configuration. As a result, for the example of the Car dataset, the throughput gap between SNOW and PN decreases from 6x to 5.2x. Figure 4 and Table 8 (in the Appendix) are all accordingly updated.\\n\\n\\nC5. Thank you for the input. As suggested, we experimented with the CAR dataset to study the effect of different Ks. 
Our current finding shows that the accuracy is somewhat sensitive to the K values, and it seems there could be some sweet spot for K: we believe oversubscription may introduce unwanted noises to the delta model (in addition to the size/compute overhead). We elaborated more in Section 2.2 and added results to Section 3.2.\\n\\n K | N/4 N/8 N/16 \\n---------------------------------------------------\\n accuracy | 83.10 83.79 83.39 \\n\\n\\nC6. We again used the CAR dataset to study the performance sensitivity to the number of target-model-specific features and showed the results below.\\n\\ntarget-specific | \\nfeature count | M/4 M/8 M/16 \\n---------------------------------------------------\\n accuracy | 79.02 83.79 80.36 \\n\\nIt shows that the performance is sensitive to the delta model size, and clearly exposes the existence of ideal size. It is obvious that the same rule of thumb in neural network architecture design applies here too: having too few target-specific features hurt accuracy, but having too many (or too big delta net) does hurt as well because the number of target-specific samples may be relatively too small. We added discussion and result in Section 3.2.\"}",
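For readers trying to picture the channel pooling being discussed, here is a rough sketch of top-K selection with exploratory noise (names and details are illustrative assumptions; the authoritative version is the pseudo-code in the paper's Appendix A.1):

    import torch

    def channel_pool(source_feats, weights, k, sigma=1e-3, training=True):
        # source_feats: (batch, N, H, W) frozen feature maps from the source model;
        # weights: (N,) learnable channel-importance parameters of the delta model.
        w = weights + sigma * torch.randn_like(weights) if training else weights
        idx = torch.topk(w, k).indices  # subscribe to the K most useful channels
        return source_feats[:, idx] * w[idx].view(1, -1, 1, 1)

With sigma > 0 during training, channels near the top-K boundary occasionally swap in and out, which is the join-and-leave behavior visible in Figure 5.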
"{\"title\": \"Response to Reviewer#1 (updated with more LF data)\", \"comment\": \"C1. Thanks for the comment. We reflected your request in both the title and abstract of the revision.\\n\\nC2. Thanks for helping us clarify an important aspect of our scheme. We clearly indicated that the delta model is to be trained in the paragraph you mentioned as well as the paragraph before it (\\\"individual to-be-trained delta model) to avoid any confusion.\\n\\nC3. We appreciate your suggestion. We have elaborated the reviewer's point in the paper by adding a new paragraph in Section 2.1.\\n\\nC4. Thanks for the comments. We checked out our implementation to ensure the correctness and performed hyper-parameter tuning to get the best performance for LF. We believe that our test case is extremely challenging for LF, because our five datasets do have very different distributions. Note that the multi-task scenario in the original LF paper, in fact, uses multiple subsets of a single VOC dataset.\\n\\nOur training is still running and we will append the table as soon as the training job is over (in a few hours). We will include the results to the appendix of the revision at the same time.\\n====> We have completed the runs except for Food (which needs another 10+ hours). We will add the entire outcome to the final version ASAP.\\n\\nWe explored a few sequences to get the best outcome for LF, came up with the following, which led to the accuracy changes for the datasets below.\\n\\n sequence ImgNet ->Car ->Action ->CUB ->DTD\\n -----------------------------------------------------------\\n Car | 81.87 28.40 22.78 22.44 . #Car Top1 accuracy drops as new tasks are being added.\\n accuracy Action | X 77.08 68.87 62.60\\n CUB | X X 70.94 46.23\\n DTD | X X X 71.93\\n -------------------------------------------------------------\\n\\nAs you can see, once Action dataset is used, the accuracy against Car dataset drops significantly, as both are very heterogeneous. Yet, adding CUB has somewhat limited impacts on Action. Overall, the combination of datasets in our experiment seems to be a very challenging scenario for LF in terms of catastrophic forgetting. We agree with the reviewer's suggestion to keep the LF result in the paper, as perhaps this can serve as a good example to study for fellow researchers.\\n\\nRegarding the computational performance, LF overall showed the training throughput of 272.69 images/sec with 13.99 GB memory footprint. The reason why LF is slower than FT is LF needs to perform extra forward paths to compute the loss for the old tasks.\\n\\nC5. Thank you for the question and comments. The x-axis in Figure 4 represents training throughput. We double-checked the caption of Figure 4 to ensure that it states training. The models are trained separately in a sequential manner, and we explicitly stated it at the beginning of Section 3.1. \\n\\nC6. Thanks for pointing out this. In fact, this was our typo, and the graph is indeed the validation curve. We have fixed it in the revision.\\n\\nC7. We are in the process of obtaining the clearance for code-release. In the meantime, the pseudo-code in Appendix A.1 should be sufficient enough for anyone to try out our channel pooling idea (i.e., it is already almost a python code snippet).\\n\\nC8. Minor comments: all corrected, thank you.\"}",
"{\"title\": \"Change overview\", \"comment\": \"Dear reviewers,\\nTo address your concerns and comments, we have revised the draft with the following major changes. In the uploaded version, we have addressed all the cosmetic errors/typos.\\n1. We performed a new set of experiments with the 1-GPU constraints and refreshed Fig 4.\\n2. We introduced a new Section 3.2 to discuss the hyper-parameter sensitivity.\\n3. We captured the Top-K changes during training in Figure 5.\\n\\nThank you.\\nAuthors.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"After rebuttal:\\nAuthors have addressed all my doubts. I recommend accepting this paper.\\n\\n=============================\", \"before_rebuttal\": \"\", \"summary\": \"This paper proposes a new way to do transfer learning. Specifically, authors first train a big source ConvNet and then for each task, they train a small ConvNet in which each layer subscribes to some k channels in the corresponding layer of the source ConvNet. Authors show that this model works better than methods that fine-tune the last few layers of the source network and performs close to costlier methods like progressive networks but with lesser parameters and higher throughput. Experiments on 5 tasks verify their claim.\", \"my_comments\": \"Overall, this is a very interesting paper.\\n\\n1. This is an interesting model to do transfer or lifelong learning but only for ConvNet architectures with image data. To avoid overstating the results, I request the authors to highlight this limitation in both the title and the abstract.\\n2. Page 3, para starting with \\u201cIn detail\\u201d: Is the ResNet50 for delta model pre-trained or not? I know it is not pre-trained based on future paragraphs. But it is good to clarify it here.\\n3. Sharing the same source network across multiple tasks during inference time is useful only when all the tasks take the same input. This is a very restricted application. This needs to be elaborated and highlighted in the paper.\\n4. I would like to see the LF results included in the paper even though it has catastrophic forgetting issues.\\n5. In Figure 4, the x-axis represents training throughput or inference throughput? I guess it is training throughput. Also, are the models trained for all the tasks in parallel (as described in serving all the tasks at once section) or separately? Even though I can guess answers for these, it is better to make these explicit in the paper for the benefit of the readers.\\n6. It is never a good idea to show test curves for a task. Please remove the test curves from Figure 4. Instead, use a separate validation set and show validation curves.\\n7. Are the authors willing to release the code to reproduce their results?\", \"minor_comments\": \"1. Section 1, second para, 1st line: \\u201cwee\\u201d should be \\u201cwe\\u201d\\n2. Table 1: Fix grammar in MP description.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"*** Increased to Accept from Weak Accept after author rebuttal and changes to the paper ***\\n\\nThis paper proposes a method, SNOW, for improving the speed of training and inference for transfer and lifelong learning. SNOW starts with a pre-trained, frozen source model, and trains delta models for target tasks which, at each layer, concatenate a small number of task-specific features with the top-K most useful subset of features in the corresponding layer in the source model. As long as the target tasks are sufficiently related to the source task, it allows for small delta models and a small additional parameter overhead in the form of one weight per source model feature map. While there are (i) some issues with the presentation of results for training efficiency, (ii) some question marks over the sensitivity of the model to hyper-parameters, and (iii) several grammatical errors / typos in the manuscript, if these can be addressed I recommend the paper for acceptance because it seems to strike a superior balance of efficiency (regarding memory usage and inference speed) and accuracy when compared to a number of baselines, and to my knowledge it is a novel approach.\", \"detailed_comments\": [\"Section 2.2 - How is sigma (the exploratory noise added for feature selection during training) chosen and how sensitive is the approach to its value? It seems like it was fine-tuned, given that a different sigma is chosen for the Action dataset (several orders of magnitude difference). In practice, tuning sigma could significantly increase training time.\", \"It seems like the performance of only one run was plotted per hyperparameter setting - it would be informative to see a mean and standard deviation especially since the approach seems like it could be unstable for the wrong hyperparameter settings.\", \"Related to the previous point, how much do the top-K feature selections change throughout training? One would have thought that this could cause instability during training for a high sigma. If sigma is too low, you could end up with suboptimal feature selection.\", \"Figure 4 graphs are a bit misleading because the throughput on the x-axis is reported per GPU and the larger models all need 2 or more GPUs. While this is mentioned in the main text, it is still optically deceptive and the results are GPU-dependent - presumably if the GPUs had a larger memory, the larger models would not seems as slow. I think it would be clearer to plot images/sec on the x-axis or to rerun the experiments just using a single GPU.\", \"It is stated that \\u201c[d]etermining K [\\u2026] has a critical impact on both size and target accuracy in the target models\\u201d, where K is the number of feature maps in the source model that the delta model subscribes to in each layer. How sensitive is the accuracy exactly? Can this be quantified or discussed in more detail?\", \"Furthermore, how sensitive is the performance to the number of target-model-specific features at each layer?\", \"Different learning rate schedules were used for SNOW and baselines - initial lr for SNOW is 1.0, while for all other models it is 0.1. Was it checked whether the baselines improve when they are run with an initial lr of 1.0? 
Was this hyperparameter more heavily tuned for SNOW than for the baselines?\", \"Since the source model is fixed, the applicability of the approach to lifelong learning is heavily dependent on the usefulness of the source model to subsequent tasks. If it is not, then one will have to incorporate large delta models. Furthermore, there can be no transfer between the tasks trained in the delta models.\", \"Grammatical errors / suggestions:\", \"Page 1, first line: \\u201challmark\\u201d doesn\\u2019t make sense in this context - maybe \\u201ckey objective\\u201d or \\u201cgoal\\u201d?\", \"Page 1, 2nd paragraph, first line: \\u201cwee\\u201d -> \\u201cwe\\u201d.\", \"Page 1, 2nd paragraph, line 6: \\u201cbest top-K\\u201d -> either \\u201cK best\\u201d or \\u201ctop K\\\"\", \"Page 2, last paragraph, first line: \\u201cthree folds\\u201d -> \\u201cthreefold\\\"\", \"Section 2.1, line 2: \\u201cpooing\\u201d->\\u201dpooling\\u201d. Same typo on Page 4, last line.\", \"Page 6, line 1: \\u201ctraining from the scratch\\u201d -> \\u201ctraining from scratch\\\"\", \"Page 6, line 9: \\u201cmore 6x than\\u201d -> \\u201c6x more than\\\"\", \"Overall, the manuscript needs to be proofread a few times.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper attempts to tackle transfer learning and lifelong learning problem by subscribing to knowledge via channel pooling. The channel pooling is actually selecting the subsect of the feature map according to the way that prediction accuracy from the delta model can be maximized. Experiments show effectiveness of the proposed method.\", \"pros\": \"Overall, this paper is well written and easy to follow. The technique is sound and the problem studied in this paper is significant.\", \"cons\": \"1.\\tI do not think that the model proposed in this paper is able to tackle lifelong learning problem. The main reason is that lifelong learning basically requires only one model that will continue to learn from new tasks. After learning several new tasks, people hope this model can still perform well on the previous tasks as well as the current ones. However, in this paper, not only one model is learned. Instead, new models appear when new tasks are given, which does not meet the definition or requirement of lifelong learning. It only meets the requirement of transfer learning. The experimental results also validate my opinion since only one new task is given while in lifelong learning, continuous new tasks will come and the original model should perform well on all of them as well as on the old tasks.\\n2.\\tIn Figure 4, the legend in the first picture will confuse the readers. I suggest the authors put it outside all the figures. Besides, the proposed method in the last picture is not the best. What do the authors want to convey by this picture?\"}"
]
} |
HJeOekHKwr | Smoothness and Stability in GANs | [
"Casey Chu",
"Kentaro Minami",
"Kenji Fukumizu"
] | Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN and the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions. | [
"generative adversarial networks",
"stability",
"smoothness",
"convex conjugate"
] | Accept (Poster) | https://openreview.net/pdf?id=HJeOekHKwr | https://openreview.net/forum?id=HJeOekHKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9ckjYt9BJ",
"ryerF4z3iH",
"HkgEeBw9sB",
"rylKHzwYsS",
"rkxozAqLoH",
"rJeTlA9LiS",
"BJeJ4c22tH",
"Skl_j_8FKB",
"ryeniNsQYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725254,
1573819516845,
1573709035851,
1573642816562,
1573461523441,
1573461492785,
1571764775432,
1571543199643,
1571169443966
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1512/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1512/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1512/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper provides a theoretical study of what regularizations should be used in GAN training and why. The main focus is that the conditions on the discriminator that need to be enforced, to get the Lipshitz property of the corresponding function that is optimized for the generator. Quite a few theorems and propositions are provided. As noted by Reviewer3, this adds insight to well-known techniques: the Reviewer1 rightfully notes that this does not lead to any practical conclusion.\\nMoreover, then training of GANs never goes to the optimal discriminator, that could be a weak point; rather than it proceeds in the alternating fashion, and then evolution is governed by the spectra of the local Jacobian (which is briefly mentioned). This is mentioned in future work, but it is not clear at all if the results here can be helpful (or can be generalized).\\n At some point of the paper it gets to \\\"more theorems mode\\\" which make it not so easy and motivating to read. \\nThe theoretical results at the quantitative level are very interesting. I have looked for a long time on Figure 1: does this support the claims? First my impression was it does not (there are better FID scores for larger learning rates). But in the end, I think it supports: the convergence for a smaller that $\\\\gamma_0$ learning rate to the same FID indicated the convergence to the same local minima (probably). This is perfectly fine. Oscillations afterwards move us to a stochastic region, where FID oscillates. So, the theory has at least minor confirmation.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to all reviewers: Paper update\", \"comment\": [\"We would like to again thank our reviewers for their valuable comments. We have updated our paper based on their feedback. The major updates are as follows:\", \"We would like to reiterate that our main purpose is to provide firmly rooted theoretical justification for commonly used GAN regularization techniques, by means of a novel theoretical framework based on smoothness and convex duality. We have polished Sections 2 and 3 to make sure the focus and argument are clear.\", \"A major advantage of the inf-convolution-based regularization framework is that it injects the desired regularity conditions without changing the minimizer of the original objective. We have highlighted a theoretically non-trivial result on minimizer invariance in Section 3 to emphasize this point. Please see our response to Review #1 for details.\"]}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your feedback. We are happy to hear that you found the paper insightful and pleasant to read. Please find responses to your questions below:\\n\\n\\n> Proposition 8 provides a way to ensure condition 2 holds (beta-smoothness). It requires spectral normalization and smooth activation functions. In practice, while the spectral normalization is important, the choice of the activation is not in general 1-smooth (Leaky-relu for instance). Does it really matter in practice? Some illustrative experiments could be beneficial to better understand what's happening. \\n\\nWe agree that it would be an insightful experiment to understand how stability is affected by a choice of non-smooth activation functions. In the case of ReLU or LeakyReLU, the discontinuity at 0 makes the function non-smooth, but we conjecture that Proposition 8 [Proposition 9 in the updated draft] and our stability results may still hold in some approximate sense, since these activations can be well-approximated by smooth functions (e.g. $\\\\frac{1}{4}\\\\log (1+e^{4x})$).\\n\\n\\n> Is it that hard to obtain generators that satisfy condition G1 and G2, it seems to be a natural consequence on the regularity of the mapping f? If that is the case, it might be worth better explaining how this is challenging.\\n\\nThank you for this suggestion. It is true that if the generator $f_\\\\theta(z)$ has bounded first and second derivatives with respect to $\\\\theta$, then it will satisfy conditions G1 and G2 for some constants $A$ and $B$. However, recall that these constants dictate how small the learning rate must be to guarantee stability, via Proposition 1. Thus, in order to obtain non-vacuous claims of stability with learning rates used in practice, it is not useful to simply claim that $A$ and $B$ are finite; instead, it is important to compute tight bounds for $A$ and $B$. These computations will vary quite a bit with the choice of architecture used (feedforward, convolutional, ResNet, etc.) and may lead to new generator architectures and regularization techniques. Due to the complexity of these computations and the neat logical separation of discriminator and generator allowed by Theorem 1, we think these computations are best suited for future work. We will add a remark explaining this matter at the end of Section 2.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your feedback and kind words regarding our proofs! We are pleased that the proofs neatly tie together concepts from convex analysis, optimal transport, and RKHS theory, and we hope they inspire future proof techniques in this area.\\n\\n\\n> 1. As the paper concludes, in practice, it is impossible to let the generator be trained after the discriminator attain theoretical optimal. As a paper which topic is about the training process of GAN, it is better to account for real situation.\\n\\nThis is unfortunately a disadvantage of the approach taken by this and prior works (please see the related work section). We hope that future work in our field will further bridge this gap between theory and practice.\\n\\n\\n> 2. The experiment section is too simple and lacks of persuasiveness.\\n\\nBecause the main contribution of this submission is a rigorous theoretical framework on the stability of GAN training, we wanted to choose a setting where we could numerically evaluate our theory, which required choosing a generator where we could analytically compute the relevant Lipschitz constants. The experimental result, while simple, supports the theoretical implication.\\n\\n\\n> The condition (D3) doesn't necessarily still hold if only adding the gradient penalty term to the objective function. Why it can be supposed that the first order term of the expansion plays a leading role in penalizing ? Isn't it unconvincing to explain the necessity of the gradient penalty from the perspective of making the condition (D3) true?\\n\\nRecall that each penalty term in the infinite series encourages an additional degree of regularity on the optimal discriminator, and the regularity of the optimal discriminator corresponds to the regularity of the implied loss function $J$ being minimized, via duality. It is correct that without all the terms of the infinite series, D3 is not guaranteed to be satisfied. When fewer penalty terms are used, the regularization effect on the implied loss function is reduced, but the penalty terms that are present will still encourage partial regularity of the implied loss function. We will add a remark on this matter to the end of Section 6. We view the choice of only using the leading terms as a disadvantageous but practical necessity.\"}",
"{\"title\": \"Response to Official Review #1 (continued)\", \"comment\": \"> Same combination of regularization techniques (gradient penalty, spectral norm and MMD loss) has been studied by [1] in various forms (Gradient-Constrained MMD, Scaled MMD). However, there is no discussion of similarities and differences between these works.\\n\\nArbel et al. obtain strong theoretical and empirical results using a combination of techniques that features many of the same ingredients as we derive in our analysis. Interestingly, these techniques serve different purposes in their work compared to ours. In their work, spectral normalization is used to improve the conditioning of the critic rather than to constrain its Lipschitz constant, as in our analysis. Regarding gradient penalties and MMD, in their work, gradient norms are combined with MMD (with a learned kernel) to obtain a novel discrepancy measure, whereas we show that regularizing an arbitrary loss with a Gaussian-kernel MMD leads to gradient penalties.\\n\\n\\n> (1) End of Section 4: 'Theorem 1 also suggests that applying only Lipschitz constraints is not enough to stabilize GANs'. Theorem 1 is not 'iff', so Lipshitz constraint *may be* not enough.\\n\\nWe tried to be careful about the wording (\\u2018suggests\\u2019 rather than \\u2018implies\\u2019), but we will rephrase this to make it clear. We are editing the end of Section 2 to emphasize this point, that it is possible that our analysis is too conservative.\\n\\n\\n> (2) Section 6 concludes that penalization of discriminators RKHS's norm is required. It is unclear, however, why discriminator function would belong to such space.\\n\\nIn Section 6, we have adopted the convention that if $f$ is not in $\\\\mathcal{H}$, then $|| f ||_{\\\\mathcal{H}}$ is infinite. This is a point we will clarify, but it is made rigorous by Lemma 6. Although the optimal discriminator of the *original* loss function may not belong to the RKHS, the optimal discriminator of the loss function regularized by $R_3$ is guaranteed to belong to the RKHS (due to Lemma 6 and equation 4 [equation 20 in the updated draft]).\\n\\n\\n> (4) It seems there is conceptual misundersting of what MMD-GANs are in Appendix B. Authors say 'Despite their names, MMD-GANs (Li et al., 2017a; Arbel et al., 2018) typically do not directly minimize the MMD but instead an adversarial version of the MMD'. GANs by definition are adversarial, while optimization against MMD alone is not. Hence, it is *according to their names*, not 'despite'. Generator losses implied by MMD-GANs under assumption of optimal discriminators, have been termed 'Optimized MMD' [1] and studied earlier in [2].\\n> (5) Given (4), The Table 2. includes MMD as a GAN loss, although authors probably refer to the properties of non-adversarial Generative Moment Matching Networks [3].\\n\\nThank you for bringing these points to our attention. We now realize this is confusing phrasing and will modify the text accordingly. It seems that the crux of the disagreement is that in this paper, we restrict our attention to the minimization of any convex function of a probability distribution, which is always equivalent to some adversarial game according to equation (3) [equation (4) in the updated draft]. When we consider MMD as a loss, we are indeed referring to GMMN; under our framework, GMMN is adversarial, with the optimal discriminator approximated by samples rather than by training a separate neural network. 
This is why we listed it in Table 1 as a \\u201cGAN.\\u201d We had also listed MMD-GAN alongside GMMN in Table 1 because they coincide in the special case of a single kernel, for the benefit of readers who might be unfamiliar with GMMN but have heard of MMD-GAN, but we now acknowledge this may be misleading. Regarding the \\u201cdespite their names\\u201d comment, we are referring to the \\u201cMMD\\u201d part, not the \\u201cGAN\\u201d part: it might be expected that MMD-GANs minimize the MMD instead of the Optimized MMD, in analogy with Wasserstein GANs, which minimize the Wasserstein distance, and f-GANs, which minimize an f-divergence.\"}",
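Since spectral normalization comes up repeatedly in this exchange (as a Lipschitz constraint in the paper, as a conditioner in Arbel et al.), here is a minimal numpy sketch of the underlying operation: estimate the largest singular value of a weight matrix by power iteration and rescale. The iteration count, seed, and matrix shape are arbitrary illustrative choices.

```python
import numpy as np

def spectral_normalize(W, n_iter=30, seed=0):
    """Rescale W by its largest singular value, estimated via power iteration."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                  # estimated spectral norm of W
    return W / sigma

W = np.random.default_rng(1).normal(size=(64, 32))
print(np.linalg.norm(spectral_normalize(W), 2))   # ~ 1.0 after normalization
```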
"{\"title\": \"Response to Official Review #1\", \"comment\": \"Thank you for your time in writing in-depth feedback. We have read your comments carefully and have found them very helpful in guiding our revisions. Please find our response to your primary concerns below, as well as our more detailed response afterwards (due to the character limit):\\n\\n\\n> No evaluation with respect to any reasonable GAN setting.\\n> Proposed regularization technique combines existing methods and does not actually propose new ones.\\n\\nWe would like to clarify the aim of our paper, since there appears to be a mismatch between our intended message and the reviewer\\u2019s vision for our project. Most importantly, we are not trying to propose a new GAN variant or new regularization techniques. Instead, we explain the need for and the sensibility of existing GAN techniques from the unifying framework of a desire for smoothness, thereby placing the use of these techniques on firm theoretical footing. This is a novel perspective not expressed by previous work to our knowledge. We kindly ask that our paper be evaluated with this aim in mind, not from the perspective of proposing and testing a new GAN variant.\\n\\n\\n> The main insights of sections 4 and 5 are trivial, like enforcing Lipshitzness of optimal discriminator by optimization of only Lipshitz discriminators.\\n\\nWe agree that the strategy for practically enforcing that the optimal discriminator be Lipschitz (section 4) and smooth (section 5) is obvious, namely, to only optimize over the relevant set of discriminators. \\n\\nHowever, the main insight we are intending to share in these sections is not *how* to practically constrain the optimal discriminator, but *why* constraining the optimal discriminator is a sensible thing to do in the first place. Our analysis finds that it is indeed sensible because the process preserves the minimizers of the original loss function $J$. This is because constraining the optimal discriminator is equivalent to inf-convolving the original loss function $J$ with a regularizer $R$, which preserves the set of minimizers. This argument requires carefully reasoning through the interplay between the regularizer $R$ (which determines the loss function) and its convex conjugate $R^\\\\star$ (which determines the properties of the optimal discriminator). We acknowledge that this point may have been lost in the formalism, and we are updating the draft to clarify this point.\\n\\n\\n> It is unclear wheather proposed solutions are practical, e.g. use of smooth activation functions may be costly and may lead to vanishing gradients. Again, experiments would be desired.\\n\\nWe agree with you and Reviewer 3 that a study of smooth activation functions would be insightful. We are considering what steps to take to address this issue.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides a unified theoretical framework for regularizing GAN losses. It accounts for most regularization technics especially spectral normalization and gradient penalty and explains how those two methods are in fact complementary. So far this was only observed experimentally but without any theoretical insight. The result goes beyond that as the criterion could be applied to general convex cost functional.\\nThe main general theorem is Theorem 1 which states 3 conditions on the optimal critic and 2 others on the generator. The paper is mainly concerned by the conditions on the optimal critic and show that the first 2 conditions can be achieved by the Spectral normalization, while the last one can be achieved by some gradient penalty.\\nThe paper is clearly written, well structured and pleasant to read.\", \"i_have_the_following_two_remarks\": [\"Proposition 8 provides a way to ensure condition 2 holds (beta-smoothness). It requires spectral normalization and smooth activation functions. In practice, while the spectral normalization is important, the choice of the activation is not in general 1-smooth (Leaky-relu for instance). Does it really matter in practice?\", \"Some illustrative experiments could be beneficial to better understand what's happening.\", \"Is it that hard to obtain generators that satisfy condition G1 and G2, it seems to be a natural consequence on the regularity of the mapping f? If that is the case, it might be worth better explaining how this is challenging.\"], \"limitations\": \"The paper considers only the setting where the optimal critic is reached and therefore it is still unclear if the analysis carries on to the training procedures used in practice (non-optimal critic). The authors recognize this limitation and leave it for future work.\\n\\nOverall, I feel that the paper provides good insights on what regularization is important for training gans and why. For that reason, I think this paper should be accepted.\\n\\n\\n------------------------------------------------------------------------------------------------------------\", \"revision\": \"I think the paper provides a good theoretical contribution in terms of interpreting many of the tricks used for improving GAN training. In fact the paper also suggests some new regularization methods (prop 13 for conditions D3) which would constrain the RKHS norm of the critic. The authors show how it is related to gradient penalty, in a particular case, but the result also suggests something more general. For instance [1], consider an abstract RKHS space containing deep networks and provide an upper-bound on the rkhs norm of such networks in terms of the spectral norm of their weights and a lower-bound in terms of its Lipschitz constant. \\n\\nI do agree with reviewer 1 that a better discussion of the connection to [2] should be included since that paper was interested in ensuring weak continuity of the loss, which can be thought of as a first requirement to get more regularity of the cost functional.\\n\\nI still think the paper is worth being accepted and raised my score to 8 as I think the authors addressed the major concerns that were raised. \\n\\n[1] A. Bietti, G. Mialon, D. Chen, and J. Mairal. 
A Kernel Perspective for Regularizing Deep Neural Networks.\\n[2] Michael Arbel, Dougal Sutherland, Miko\\u0142aj Binkowski, and Arthur Gretton. On gradient regularizers for MMD GANs. In Advances in Neural Information Processing Systems, pp. 6700\\u20136710, 2018.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The work studies the relationship between the stability and the smoothness of GANs based on the proposition which was proposed by Bertsekas . It explains many nontrivial empirical observations when one is training GANs, including both of the necessities of the spectral normalization and the gradient penalty, in a theoretical perspective. And the work points out that most common GAN losses do not satisfy the all of the smoothness conditions, thereby corroborating their empirical instability. Meanwhile, it develops regularization techniques that enforce the smoothness conditions, which can lead to stability of the GAN.\\n\\nPros\\n1. The paper theoretically gives a reasonable explanation of why applying a gradient penalty together spectral norm seems to improve performance of generator.\\n2. The proofs of the theorems and the propositions in this paper are gorgeous and beautiful.\\n\\nCons\\n1. As the paper concludes, in practice, it is impossible to let the generator be trained after the discriminator attain theoretical optimal. As a paper which topic is about the training process of GAN, it is better to account for real situation.\\n\\n2. The experiment section is too simple and lacks of persuasiveness. The main theorem only gives the sufficiency of those conditions. I think it\\u2019s necessary to give an example which can imply that anyone condition is essential.\\n\\n3. Proposition 9, Proposition 12 and Equation (7) show the equivalence between the condition (D3) and the existence of the regularization term of the reproducing kernel Hilbert space norm of the discriminator. But after this, the paper uses the first order term of the expansion in Proposition 13 to substitute $\\\\|\\\\psi\\\\|_{H}^2$. The condition (D3) doesn't necessarily still hold if only adding the gradient penalty term to the objective function. Why it can be supposed that the first order term of the expansion plays a leading role in penalizing ? Isn't it unconvincing to explain the necessity of the gradient penalty from the perspective of making the condition (D3) true?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper provides new theoretical view on GAN regularisation. However, it lacks proper empirical evaluation and makes an impression of a work in progress. Furthermore, the conclusions lead mostly to common techniques that have already been studied.\", \"pros\": [\"Theorem 1 provides sufficient conditions for convergence of generator gradients to zero (under assumption of optimal discriminators).\", \"New view on combining loss functions and regularizers via inf-convolutions.\", \"Clarification of a difference between gradient penalties and spectal normalization.\"], \"cons\": [\"No evaluation with respect to any reasonable GAN setting.\", \"Proposed regularization technique combines existing methods and does not actually propose new ones.\", \"The main insights of sections 4 and 5 are trivial, like enforcing Lipshitzness of optimal discriminator by optimization of only Lipshitz discriminators.\", \"It is unclear wheather proposed solutions are practical, e.g. use of smooth activation functions may be costly and may lead to vanishing gradients. Again, experiments would be desired.\", \"Same combination of regularization techniques (gradient penalty, spectral norm and MMD loss) has been studied by [1] in various forms (Gradient-Constrained MMD, Scaled MMD). However, there is no discussion of similarities and differences between these works.\", \"Submission's main text is 10 pages long without sufficient reasons for that (figures, tables).\"], \"detailed_comments\": \"(1) End of Section 4: 'Theorem 1 also suggests that applying only Lipschitz constraints is not enough to stabilize GANs'. Theorem 1 is not 'iff', so Lipshitz constraint *may be* not enough.\\n(2) Section 6 concludes that penalization of discriminators RKHS's norm is required. It is unclear, however, why discriminator function would belong to such space.\\n(3) In Appendix B authors say, in the context of WGAN, that 'The Lipschitz constraint on the discriminator is typically enforced by spectral normalization (Miyato et al., 2018), (...)'. This setting fails, as stated earlier in the Introduction.\\n(4) It seems there is conceptual misundersting of what MMD-GANs are in Appendix B. Authors say 'Despite their names, MMD-GANs (Li et al., 2017a; Arbel et al., 2018) typically do not directly minimize the MMD but instead an adversarial version of the MMD'. GANs by definition are adversarial, while optimization against MMD alone is not. Hence, it is *according to their names*, not 'despite'. \\nGenerator losses implied by MMD-GANs under assumption of optimal discriminators, have been termed 'Optimized MMD' [1] and studied earlier in [2].\\n(5) Given (4), The Table 2. includes MMD as a GAN loss, although authors probably refer to the properties of non-adversarial Generative Moment Matching Networks [3].\\n\\n\\n[1] Michael Arbel, Dougal Sutherland, Miko\\u0142aj Binkowski, and Arthur Gretton. On gradient regularizers for MMD GANs. In Advances in Neural Information Processing Systems, pp. 6700\\u20136710, 2018.\\n[2] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, G. R. G. Lanckriet, and B. Sch\\u00f6lkopf. \\u201cKernel choice and classifiability for RKHS embeddings of probability distributions.\\u201d NIPS. 
2009\\n[3] Yujia Li, Kevin Swersky, Richard Zemel, \\\"Generative Moment Matching Networks\\\", ICML 2015.\"}"
]
} |
r1lOgyrKDS | Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation | [
"Xinjie Fan",
"Yizhe Zhang",
"Zhendong Wang",
"Mingyuan Zhou"
] | Sequence generation models are commonly refined with reinforcement learning over user-defined metrics. However, high gradient variance hinders the practical use of this method. To stabilize this method, we adapt to contextual generation of categorical sequences a policy gradient estimator, which evaluates a set of correlated Monte Carlo (MC) rollouts for variance control. Due to the correlation, the number of unique rollouts is random and adaptive to model uncertainty; those rollouts naturally become baselines for each other, and hence are combined to effectively reduce gradient variance. We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large vocabulary scenarios by decomposing each categorical action into a sequence of binary actions. We evaluate our methods on both neural program synthesis and image captioning. The proposed methods yield lower gradient variance and consistent improvement over related baselines. | [
"binary softmax",
"discrete variables",
"policy gradient",
"pseudo actions",
"reinforcement learning",
"variance reduction"
] | Accept (Poster) | https://openreview.net/pdf?id=r1lOgyrKDS | https://openreview.net/forum?id=r1lOgyrKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"d0nnx1hQdD",
"SkxFkND3iH",
"S1eQqpL3oH",
"H1lGR5UnoS",
"HJgX78WIcr",
"HklBH2ICFr",
"B1xMsRpaFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725223,
1573839841444,
1573838218968,
1573837513577,
1572374043409,
1571871805073,
1571835545744
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1511/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1511/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1511/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper presents a novel reinforcement learning-based algorithm for contextual sequence generation. Specifically, the paper presents experimental results on the application of the gradient ARSM estimator of Yin et al. (2019) to challenging structured prediction problems (neural program synthesis and image captioning). The method consists in performing correlated Monte Carlo rollouts starting from each token in the generated sequence, and using the multiple rollouts to reduce gradient variance. Numerical experiments are presented with promising performance.\\n\\nReviewers were in agreement that this is a non-trivial extension of previous work with broad potential application. Some concerns about better framing of contributions were mostly resolved during the author rebuttal phase. Therefore, the AC recommends publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to questions\", \"comment\": \"Thank you for your comments and insightful questions, to which we response below. We have revised our paper accordingly and highlighted the major edits in blue.\", \"q1\": \"Can you say anything about the optimality of scaling the number of rollouts with the policy uncertainty? Does the algorithm make optimal use of the number of rollouts? i.e. is the variance minimal for the number of rollouts, or is there scope for improvement?\", \"response\": \"The number of rollouts is adaptive across different samples, different training stages, different sentence positions, and different binary tree depth. We concatenate different rollouts, and process them as a batch, whose size is varying. In our applications, since we have the access to target data, we can first pretrain our policy with MLE objective to obtain a good initial policy, so the number of unique pseudo actions is limited even when V is large (see Figure 3 (c)). Therefore, such adaptive characteristic would not affect the batch parallelization much. However, we note that in a cold-start setting where we start from a complete random policy, it is still challenging to make our methods sufficiently fast as the number of pseudo actions may be too large if V is large. We leave it as a future work to adapt our methods to this more challenging setting, where to our best knowledge, little work has been done except for Ding & Soricut (2017); d\\u2019Autume et al. (2019). We have added a discussion of this limitation in the revised paper.\", \"q2\": \"The number of rollouts being random possibly complicates efficient parallel evaluation of the rollouts (batch sizes are effectively varying). This is presumably not a problem for the chosen applications, but could you discuss the limitations in a broader setting?\"}",
"{\"title\": \"Point-by-point response to all questions\", \"comment\": \"Thank you for your detailed comments and suggestions. We have revised our paper accordingly and highlighted the major edits in blue. Below please find our point-by-point response.\", \"on_the_presentation_of_the_paper\": \"we admit that the formulation could be intense, so we only kept those equations and definitions that are essential to understand our methods. Following your recommendation, we have further moved some equations from the main paper to the appendix, and added more intuitive explanations to facilitate the understanding of the key ideas.\", \"response_to_the_questions_on_the_technical_side\": \"1. The intuition behind the adaptiveness across training process is that as the learning progresses, the policy becomes more and more confident, meaning that the entropy of the prediction distribution at each step becomes smaller and smaller and hence the prediction probabilities would possibly only peak at fewer and fewer categories. If we look at how we get our pseudo actions z^{m swap j} at the line below Equation (4), it is clear that if the entropy of \\\\softmax(\\\\phi_1,...,\\\\phi_i, \\u2026\\\\phi_V) is small, the number of unique pseudo actions would also tend to be small. \\n\\n2. It is true that if all pseudo actions are the same, the estimated ARSM gradient is zero. But if one takes another random draw, there is a certain probability that not all pseudo actions are the same, in which case the estimated ARSM gradient will not be zero. We note that the probability that all pseudo actions are the same (and hence the estimated ARSM gradient is zero) tends to increase as the training progresses. When the policy prediction probabilities become highly peak at one category for each step, which usually only happens, if it does happen, at the end of the training process, then it is very likely that the ARSM gradient estimates will be zeros. \\n\\n3. The ARM estimator is designed for V=2, and ARSM estimator is designed for V>=2. The ARSM estimator is unbiased regardless of whether V=2 or V>2.\\n\\n4. We note that the variance of a gradient estimator is highly related to the parameter value at which the gradient is evaluated, and hence there is no guarantee that the variance of a gradient estimator (including ARS-K/M) is non-increasing during the training process. For example, it is totally possible that the optimal solution is associated with a large gradient variance. Note some other estimators have tried to design baselines, whose parameters are optimized to minimize the sample variance of the estimated gradients; this actually leads to a min-max optimization procedure hidden in the main algorithm that may prevent converging to a local optimal solution, as there is no guarantee that the path towards a local optimal solution will be associated with lower gradient variance. \\n\\nAs mentioned in our paper (in NPS results and analysis part), the gradient variance at a given iteration is related to both the property of the gradient estimator and the parameter value at that iteration. It is possible that in the beginning of training, due to the poor performance of policy, the rewards can be very sparse (like in NPS), so the gradient may have lower variance compared to the later stage of training.\\n\\n5. The number of rollouts at each step is an indicator of runtime. In Figures 1 and 3, we compare the number of rollouts between ARS-K/M with competitors. 
In Figure 1(b), we notice that in the beginning ARSM has more rollouts (slower) than RL_beam, but the number of rollouts quickly decreases and falls below that of RL_beam (faster). In Figure 3, the effective number of rollouts of SC at each step is around 1/6, while for ARS-K the effective number of rollouts goes under 1 after 4 epochs (out of ~25 epochs of fine-tuning), meaning that, aside from the time for computing pseudo actions, ARS-K is at most five times slower than SC.\", \"response_to_the_questions_on_experiments\": \"1. Our results differ from those in Bunel et al. (2018) due to the exclusion of the grammar checker, as in this paper we focus on the setting where the action space is fixed instead of adaptive to other supervision. It is unclear at this moment how to modify ARSM for an adaptive action space, which is beyond the scope of this paper; we leave it as an interesting topic for future investigation. \\n\\n2. Code: we have included the arm.sh file in the updated code folder to make it easier to reproduce our results. The variance plots are based on code in nps/train.py, Lines 418 to 422. \\n\\nWe have addressed your minor comments.\"}",
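The entropy argument in point 1 above is easy to visualize with a toy numpy sketch (our reading of the pseudo-action construction z^{m swap j} below Eq. (4); variable names and sizes are ours): draw a Dirichlet vector, swap every pair of its coordinates, and count how many distinct argmins survive. Peaked logits yield far fewer unique pseudo actions than flat ones.

```python
import numpy as np

def n_unique_pseudo_actions(phi, rng):
    """Count unique ARSM pseudo-actions for one Dirichlet draw (toy sketch)."""
    V = len(phi)
    pi = rng.dirichlet(np.ones(V))
    actions = set()
    for m in range(V):
        for j in range(V):
            p = pi.copy()
            p[m], p[j] = p[j], p[m]                  # swap coordinates m and j
            actions.add(int(np.argmin(p * np.exp(-phi))))
    return len(actions)

rng = np.random.default_rng(0)
flat = np.zeros(8)                                   # high-entropy policy
peaked = np.array([6.0, 0, 0, 0, 0, 0, 0, 0])        # low-entropy policy
print(np.mean([n_unique_pseudo_actions(flat, rng) for _ in range(200)]))
print(np.mean([n_unique_pseudo_actions(peaked, rng) for _ in range(200)]))
```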
"{\"title\": \"We have clarified our contribution and addressed all the other concerns\", \"comment\": \"Thank you for your constructive comments and suggestions. We have revised our paper to address your criticisms, with the major edits highlighted in blue.\", \"on_the_contribution_of_this_paper\": \"we view our core contribution as addressing the contextual categorical sequence generation problems by applying and modifying the ARS-K/M estimators. The ARSM paper only did proof-of-concepts experiments on small action spaces, while in our paper, we consider space of up to V~10^4 actions, which poses several significant challenges to efficient implementations that have been successfully addressed. For example, the swapping operations required by ARS-K/M to compute unique pseudo actions could become the computation bottleneck when V is large; to address this issue, we have developed an efficient algorithm that avoids unnecessary swapping operations to significantly accelerate the computation, as explained in detail in Appendix C.\\n\\nTo make the concept of ARS-K/M clear and intuitive in our setting, we often describe it with correlated Monte Carlo (MC) rollouts, and interpret our method as a gradient estimation method exploiting correlated MC rollouts based token-level rewards, which naturally serve as the baselines for each other. From this point of view, the comparison between ARS-K/M based methods and MC-K, SC together with other baselines, where either independent MC rollouts or sentence-level rewards are used, becomes clearer and demonstrates the advantage of using token-level rewards and correlated MC rollouts. Moreover, we reveal that the number of correlated MC rollouts is automatically adapted to model uncertainty across samples, iterations, sentence positions, and depths, as illustrated in Section 4.2 and Appendix A.\\n\\nWe hope to note that our binary tree extension is not a simple variant of hierarchical softmax. While we share with Morin & Bengio (2005) the basic idea of decomposing a full-softmax to a sequence of binary-softmax, our choices of model details are very different: We use all previous tokens (with RNN) instead of only neighboring previous tokens to compute softmax logits, and our tree construction procedure is totally data-driven using agglomerative clustering on the given word embeddings; by contrast, Morin & Bengio (2005) manually constructed a non-binary tree based on WordNet and then split the tree into binary using K-means. Further, we introduce task-specific embeddings. Our ablation studies shown in Table 2 (updated in revision) demonstrate that, using task-specific embeddings learned from the full softmax model gives superior results over off-the-shelf embeddings. \\n\\nMoreover, while it appears natural to combine binary softmax with ARSM, figuring out the technical and implementation details is not trivial at all, mainly because that two levels of sequential structures are involved. For example, after completing each lower-level sequence (binary code), we need to map the binary code to the higher-level vocabulary and feed the token to LSTM. Then, we use the output from LSTM to guide the generation of the binary code for the next token. 
Such an interchange of information between the two levels complicates the implementation.\", \"response_to_other_comments\": [\"As defined right after Eq 4, $j$ in Eq 4 is the index of a reference category randomly selected from {1,...,V}.\", \"In the last paragraph of Section 4.1, we have elaborated our argument about why RL_beam appears to overfit while ARSM does not.\", \"We note that ARSM tends to generate fewer unique pseudo actions as the policy becomes more certain, and provides a zero gradient when no distinct pseudo actions are generated, which may help provide implicit regularization. More specifically, as the ARSM estimator is unbiased, zero gradients at some iterations may imply larger gradients at other iterations, and hence our intuition is that it either freezes the parameters or updates them confidently with larger gradients (more likely in the same directions as the true gradients).\"]}",
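The data-driven tree construction described in this response can be sketched in a few lines (an illustrative sketch only: ward linkage, random stand-in embeddings, and the 0/1 branch orientation are our assumptions, not necessarily the paper's exact choices). Each vocabulary item receives the binary code given by its path from the root of the agglomerative merge tree:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def binary_codes(embeddings):
    """Map each word index to a 0/1 path code in an agglomerative merge tree."""
    V = embeddings.shape[0]
    Z = linkage(embeddings, method="ward")        # V-1 binary merges
    children = {V + t: (int(a), int(b)) for t, (a, b, _, _) in enumerate(Z)}
    codes = {}

    def walk(node, prefix):
        if node < V:                              # leaf = vocabulary item
            codes[node] = prefix
        else:
            left, right = children[node]
            walk(left, prefix + [0])
            walk(right, prefix + [1])

    walk(2 * V - 2, [])                           # root is the final merge
    return codes

emb = np.random.default_rng(0).normal(size=(16, 8))   # stand-in word embeddings
print(binary_codes(emb)[0])                           # e.g. [1, 0, 1, ...]
```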
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents experimental results on the application of the gradient ARSM estimator of Yin et al. (2019) to challenging structured prediction problems (neural program synthesis and image captioning). The authors also propose two variants, ASR-K which is the ARS estimator computed on a random sample of K (among V) labels, as well as a binary tree version in which the V values are encoded as a path in a binary tree of depth O(log(V)), effectively increasing the length of sequences to be predicted but reducing the action space at each tilmestep.\\n\\nThe paper is self-contained and clear. The main value of the paper is to present good experimental results on challenging tasks; the ARS-K variant, although fairly straightforward, seems to be a reasonable implementation of the ARS(M) estimator.\\n\\nMy main criticism on the paper is that the exact nature of the contribution is not properly stated. As far as I understand, the main value of the paper is to demonstrate the effectiveness of ASR-K/M on challenging tasks. In a first read however, it seems that the authors claim an algorithmic/theoretical contribution compared to the state-of-the-art. Comparing with the paper by Yin et al. (2019), it seems to me that the technical contribution is rather incremental (the binary tree version is a variant of the hierarchical softmax, and ASR-K seems very straightforward), up to the point that the first set of experiments is actually only about vanilla ARSM.\", \"other_comments\": [\"what is j in Eq 4?\", \"RL_beam vs ASRM on neural program synthesis: the authors say that \\\"RL_beam overfits [...] because of biased gradients\\\", whereas \\\"ASRM converges to a local minimum that generalizes better\\\". I do not see why biased gradients would help fitting the data (compared to unbiased gradients). And as far as I understood, ASRM is about getting a better gradient (hence better optimization, and hence better fitting of the data), so I really do not understand this argument.\", \"RL_beam vs ASRM on NPS: I do not see why ASRM cannot fit the data as well as RL_beam. Is there some regularization involved?\"], \"minor\": \"- \\\"expected award\\\" (first line section 3.1)\\n\\n------ after author rebuttal\\n\\nThe authors answered my main concerns, I raises my score to weak accept.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents a novel reinforcement learning-based algorithm for contextual sequence generation. The algorithm builds on the previously proposed MIXER algorithm and improves it by integrating gradient estimates with lower variance (augment-REINFORCE-swap-merge). To further improve the runtime complexity of the proposed algorithm, binary tree-based hierarchical softmax is applied. The algorithm is evaluated on the Karel dataset for neural program synthesis and the MS COCO dataset for image captioning.\\n\\nThe presentation of the paper must be improved (my score assumes that this will have been done). It is nice to have the detailed derivations when trying to dive deeper into the problem, but it hinders understanding of the main concepts at the first reading. Therefore, I would highly recommend to move most of the formulas to the appendix and keep instead only key ideas with intuitive explanations. It would also free some space in the main paper for the experiments from the appendix.\", \"several_questions_on_the_technical_side\": \"1. Is there any intuition why pseudo actions tend to be equal to the true one when learning progresses? What causes this? Might it enforce any structure (like uniform)?\\n2. When all pseudo actions are the same, the gradient is zero. In theory, zero gradient of a function corresponds to its extremum. Does it mean that when all pseudo actions are the same, an extremum is reached or is it just an artifact of this particular estimator? Can one prove any results of this kind?\\n3. I understand that the ARSM estimator should be unbiased for V = 2. Does the estimator remain unbiased when V > 2?\\n4. In the experiments, the variance is shown to reduce significantly which is nice. However, in theory, does the ARMS guarantee non-increasing variance or can it potentially go up in some cases? If it can, have it ever been observed in practice?\\n5. How does the runtime of the proposed algorithm compare to the competitors?\", \"experiments\": \"1. Bunel et al (2018) report higher generalization on the Karel dataset. Is the difference due to the removal of the optional grammar checker? Can the same experiments be performed with this checker on or are there any constraints of the ARSM-based method?\\n2. The submitted code for the NPS experiment is actually the one by Bunel et al with their comments. I could not find any instructions or scripts reproducing the results of this paper (and I didn\\u2019t have much time to figure that out). One thing I wanted to check in the code is how the variance was computed for the plots?\", \"minor\": \"1. p. 4, \\u201cexpected award\\u201d >> \\u201cexpected reward\\u201d\\n2. g_{ARSM} defined twice in (5) and in the beginning of p. 4\\n3. \\u201cFig. 1 (left two) plots\\u201d and \\u201cFig. 1 (right two) plots\\u201d not good\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a new algorithm for unbiased stochastic gradient estimation for use in reinforcement learning of sequence generation tasks (specifically neural program synthesis and image captioning). The method consists in performing correlated Monte Carlo rollouts starting from each token in the generated sequence, and using the multiple rollouts to reduce gradient variance. An interesting property of the proposed algorithm is that the number of rollouts automatically scales with the uncertainty of the policy.\\n\\nThe proposed algorithm is novel, and the results are promising. Implementation of the idea seems non-trivial, but the authors provide open source code. The proposed algorithm could be impactful. The paper is clearly written.\", \"questions_for_the_authors\": [\"Can you say anything about the optimality of scaling the number of rollouts with the policy uncertainty? Does the algorithm make optimal use of the number of rollouts? i.e. is the variance minimal for the number of rollouts, or is there scope for improvement?\", \"The number of rollouts being random possibly complicates efficient parallel evaluation of the rollouts (batch sizes are effectively varying). This is presumably not a problem for the chosen applications, but could you discuss the limitations in a broader setting?\"]}"
]
} |
BJewlyStDr | On Bonus Based Exploration Methods In The Arcade Learning Environment | [
"Adrien Ali Taiga",
"William Fedus",
"Marlos C. Machado",
"Aaron Courville",
"Marc G. Bellemare"
] | Research on exploration in reinforcement learning, as applied to Atari 2600 game-playing, has emphasized tackling difficult exploration problems such as Montezuma's Revenge (Bellemare et al., 2016). Recently, bonus-based exploration methods, which explore by augmenting the environment reward, have reached above-human average performance on such domains. In this paper we reassess popular bonus-based exploration methods within a common evaluation framework. We combine Rainbow (Hessel et al., 2018) with different exploration bonuses and evaluate its performance on Montezuma's Revenge, Bellemare et al.'s set of hard exploration games with sparse rewards, and the whole Atari 2600 suite. We find that while exploration bonuses lead to higher scores on Montezuma's Revenge, they do not provide meaningful gains over the simpler epsilon-greedy scheme. In fact, we find that methods that perform best on that game often underperform epsilon-greedy on easy exploration Atari 2600 games. We find that our conclusions remain valid even when hyperparameters are tuned for these easy-exploration games. Finally, we find that none of the methods surveyed benefit from additional training samples (1 billion frames, versus Rainbow's 200 million) on Bellemare et al.'s hard exploration games. Our results suggest that recent gains in Montezuma's Revenge may be better attributed to architecture changes rather than better exploration schemes, and that the real pace of progress in exploration research for Atari 2600 games may have been obfuscated by good results on a single domain. | [
"exploration",
"arcade learning environment",
"bonus-based methods"
] | Accept (Poster) | https://openreview.net/pdf?id=BJewlyStDr | https://openreview.net/forum?id=BJewlyStDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0hOd2-CaQ",
"rJxjPjINiH",
"B1lnu5INiS",
"SklK7x6-cS",
"BJg-P7A3KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798725193,
1573313378748,
1573313140017,
1572093985279,
1571771225195
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1510/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1510/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1510/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1510/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a detailed comparison of different bonus-based exploration methods on a common evaluation framework (Rainbow) when used with the ATARI game suite. They find that while these bonuses help on Montezuma's Revenge (MR), they underperform relative to epsilon-greedy on other games. This suggests that architectural changes may be a more important factor than bonus-based exploration in recent advances on MR.\\n\\nThe reviewers commented that this paper makes no effort to present new techniques, and the insights discovered could be expanded on. Despite this, it is an interesting paper that is generally well argued and would be a useful contribution to the field. I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review and taking the time to read our paper.\\n\\n \\u201c(1) Can you show the comparison results on Rainbow without the prioritized replay buffer? It will strengthen the understanding of these exploration methods.\\u201d\\n\\nWe agree that these results would be helpful and we will add them in a future revision of the paper.\\n\\n(2) The noisy networks perform well on most games, while bonus methods perform well on hard games. Is there any combination method to achieve better performance?\\n\\nBonus methods seem to perform well on Montezuma\\u2019s Revenge, but they do not perform well on the remaining hard exploration games. We think a naive combination of NoisyNets and exploration bonuses would likely combine their weaknesses. As such, it is not clear right now how to combine their benefits, so we leave it to future work.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank your feedback and taking the time to go through our manuscript.\\n\\n\\u201cThis also leads to the conclusion that recent results on the game Montezuma\\u2019s revenge can be attributed to architectural changes instead of the exploration method.\\u201d\\n\\nWe do not disagree that progress in MR can be attributed to the exploration method. However, we argue that the benefits of these methods do not translate to other games in the ALE.\\n\\n\\u201cthe paper puts absolutely zero effort into investigating if there is a quick fix to the questions it poses\\u201d\\n\\nIt is true that we focused primarily on current practices in exploration research. It seemed important to us to highlight how these practices have impacted the field of exploration in RL. Moreover, we wanted to know how we may have been misled regarding our progress in exploration. \\n\\nGiven the experiments we performed it is not clear whether a \\\"quick fix\\\" exists. It seems more likely that new methods will have to be designed, which take our findings into account.\\n\\n\\u201cA comparison that showed the performance of CTS for a couple more values of factors such as (1/N) or (1/N)^{1/4} would have been nice to see if that mattered.\\u201d\\n\\nWe thought this direction did not hold much promise and thus we did not investigate further. There are theoretical reasons for this particular value choice (see [1, 2]). 1/N with DQN has been done in [3] in Figure 10 and led to a significant performance drop. \\n\\n[1] An Analysis of Model-Based Interval Estimation for Markov Decision Processes, Strehl and Littman (2006).\\n[2] Near-Bayesian exploration in polynomial time, Kolter and Ng (2009).\\n[3] Unifying Count-Based Exploration and Intrinsic Motivation, Bellemare et al. (2016)\\n\\n\\u201cIf it is saying (1st) then I find it contradictory that it is not ok to focus on MR but it is ok to focus on ATARI as a single domain;\\u201d\\n\\nIt is (1). Focusing on the ALE as a single domain is in line with the current use of the ALE for research in reinforcement learning. Limiting oneself just to Montezuma\\u2019s Revenge or the hard exploration games is a practice that is unique to exploration practitioners.\\nMoving beyond the ALE will be of interest in the future; however, our results show that efficient exploration in the ALE is currently far from being solved. We think that evaluating on a set of 60 diverse games is already a step up from only using 7 games.\\n\\n\\u201cIt is interesting to note that noisy networks are most robust to hyperparameter optimization on a separate set of games when tested on a different set of games.\\u201d\\n\\nWe did not tune NoisyNets and kept the original hyperparameters. We will update the paper to make this more clear.\\n\\n\\u201cIt is also interesting to note that noisy networks are the only exploration bonus method that does not decrease/reduce exploration as the experience of the agent increases.\\u201d \\n\\nIt is true that the amount of noise injected by NoisyNets is learned. NoisyNets are then able to modulate the amount of exploration during training. In contrast, the bonuses from other methods will shrink in states that have already been visited.\\n\\n\\u201cOne of the comparisons I did not particularly find fair was when the hyperparameters of various methods were tuned to play MR and then the hyperparameters were fixed and the method were tested on other ATARI games.\\u201d\\n\\nWhat about the comparisons do you not find fair? 
If you find splitting the games in the ALE into a train and test set of games unfair, then we would argue that this is a standard procedure (see [4, 5]).\\nThe choice of games in the training set is open to debate, and for this reason we evaluate its impact in Section 3.4.\\nIf it is something else you find unfair, please let us know.\\n\\n[4] The Arcade Learning Environment: An Evaluation Platform for General Agents, Bellemare et al. (2012)\\n[5] Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents, Machado et al. (2017)\\n\\n\\u201cAnother point I felt was missing was checking if Rainbow DQN is really the reason behind the observed performance of the methods. It would have been interesting to know how the methods performed when combined with the original DQN algorithm.\\u201d\\n\\nWe agree that a more detailed study would be helpful to explain why the gap between $\\\\epsilon$-greedy and exploration bonuses has shrunk. We postulate that this is mostly due to the use of a prioritized replay buffer and we will add experiments without prioritization.\"}",
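The decay schedules debated in this thread are one line each; a quick sketch (the scale beta = 0.1 and the count range are illustrative only, and a real agent would use the pseudo-counts from the CTS density model in place of N):

```python
import numpy as np

N = np.arange(1, 1001, dtype=float)            # visit (pseudo-)counts
for alpha in (0.5, 1.0, 0.25):                 # 1/sqrt(N), 1/N, (1/N)^{1/4}
    bonus = 0.1 * N ** -alpha                  # beta * N^{-alpha}
    print(alpha, bonus[[0, 9, 99, 999]])       # bonus after 1, 10, 100, 1000 visits
```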
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"#rebuttal responses\\n \\nI change the score to be weak accept as the authors do not provide any comparison result on Rainbow without the prioritized replay buffer during the rebuttal phase. I also agree with Reviewer 1's opinion that the authors do not provide some fixing method, such as combining the noisy networks and bonus methods.\\n\\n\\n#review\\nThis paper evaluates the recently proposed exploration methods that achieve ground-breaking performance in the difficult exploration problem, Montezuma's Revenge. The authors combine Rainbow with different exploration methods, such as count-based bonus methods, curiosity-driven methods, and noisy networks. Results show that these methods fail to beat epsilon-greedy on other Atari games, even if the parameters of these methods are tuned. \\n\\nThe paper is very well written, and they claim that evaluating the exploration methods on the Montezuma's Revenge and tuning parameters on this environment are not suitable for the total ALE environments. The claim is very interesting and important for the exploration community. \\n\\nTo support their claim, the authors firstly compare bonus exploration methods, noisy networks, and epsilon-greedy on hard exploration games. Then results in easy games and other games are presented. The results are very impressive.\", \"question\": \"(1) The authors compared these methods based on Rainbow, which employs many techniques, such as the prioritized replay buffer. Can you show the comparison results on Rainbow without the prioritized replay buffer? It will strengthen the understanding of these exploration methods.\\n(2) The noisy networks perform well on most games, while bonus methods perform well on hard games. Is there any combination method to achieve better performance?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Updated review: I am overall happy with the response of the authors. I can appreciate the contributions of the paper and I am happy to recommend accept. The empirical study offers some insights into deep RL methods for ATARI games and raises some key questions. I feel the current version of the paper does not build upon these insights to propose a new method.\\n\\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"summary\": \"This paper presents a detailed empirical study of the recent bonus based exploration method on the Atari game suite. The paper concludes that methods that perform well on Montezuma\\u2019s revenge do not necessarily perform well on the other games, sometimes, even worse than the eps-greedy approach. This also leads to the conclusion that recent results on the game Montezuma\\u2019s revenge can be attributed to architectural changes instead of the exploration method.\\n\\nI think this is a-ok paper in that it does what it says it does. The paper is clear and well-written. \\n\\nI think the main contribution of the paper is that it raises some questions over existing methods/trends in solving exploration problems in reinforcement learning by comparing the performance of multiple methods across various games in ATARI suite. \\nI think this is relevant to the ICLR community and will be appreciated by it. \\n\\nHowever, I also feel that while the paper runs a satisfactory empirical analysis, it was all too much focussed on the existing methods. Throughout the paper, the experiments and results raise questions on the robustness and generalization of existing exploration methods across various ATARI games, but the paper puts absolutely zero effort into investigating if there is a quick fix to the questions it poses. For example, one could easily investigate in the CTS method if the factor by which exploration bonus dies N^{alpha} (alpha=-1/2 by default) changes, then does it do better or worse (more below on this).\\nI can understand that might not be the aim of the paper but still. \\n\\nHere are a couple of points that I felt conflicted/confused about the paper: \\n- The conclusion of the paper is that \\u2018progress of exploration in ATARI suite is obfuscated by good results in single domain\\u2019. I am confused if the paper is making a narrow point that (1) dont focus on Montezuma\\u2019s revenge OR (2) is it admitting a broader point that focussing on even ATARI is probably not a good choice. I am not saying that I know the answer to this question, but I am unclear as to what is the question the paper is trying to raise. If it is saying (1st) then I find it contradictory that it is not ok to focus on MR but it is ok to focus on ATARI as a single domain; if it is saying the second then also it is contradictory because the paper only experiments with the ATARI suite.\\n \\n- It is interesting to note that noisy networks are most robust to hyperparameter optimization on a separate set of games when tested on a different set of games. 
It is also interesting to note that noisy networks are the only exploration bonus method that does not decrease/reduce exploration as the experience of the agent increases. I would have liked to see if the paper had made an attempt to investigate this. I feel such a hypothesis would have been easy to investigate with simple modifications to the CTS methods. Currently, the exploration bonus goes down by the factor of 1/sqrt(N) in the CTS method. A comparison that showed the performance of CTS for a couple more values of factors such as (1/N) or (1/N)^{1/4} would have been nice to see if that mattered.\\n\\n- One of the comparisons I did not particularly find fair was when the hyperparameters of various methods were tuned to play MR and then the hyperparameters were fixed and the method were tested on other ATARI games. \\n\\n- Another point I felt was missing was checking if rainbow DQN is really the reason behind the observed performance of the methods. It would have been interesting to know how the methods performed when combined with the original DQN algorithm.\"}"
]
} |
BkxDxJHFDr | Power up! Robust Graph Convolutional Network based on Graph Powering | [
"Ming Jin",
"Heng Chang",
"Wenwu Zhu",
"Somayeh Sojoudi"
] | Graph convolutional networks (GCNs) are powerful tools for graph-structured data. However, they have been recently shown to be vulnerable to topological attacks. To enhance adversarial robustness, we go beyond spectral graph theory to robust graph theory. By challenging the classical graph Laplacian, we propose a new convolution operator that is provably robust in the spectral domain and is incorporated in the GCN architecture to improve expressivity and interpretability. By extending the original graph to a sequence of graphs, we also propose a robust training paradigm that encourages transferability across graphs that span a range of spatial and spectral characteristics. The proposed approaches are demonstrated in extensive experiments to simultaneously improve performance in both benign and adversarial situations. | [
"graph mining",
"graph neural network",
"adversarial robustness"
] | Reject | https://openreview.net/pdf?id=BkxDxJHFDr | https://openreview.net/forum?id=BkxDxJHFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"xgKy_iXAt2",
"HJeapTXYoS",
"Syg7RVcOiH",
"BJx2Xmcujr",
"SylEaM5_oS",
"BylgNz5uir",
"SylgClqOiH",
"r1xRjgcuiS",
"BkxQEAZ0YB",
"H1gRE3E6tH",
"ryg6Ve_tuB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725163,
1573629380720,
1573590219266,
1573589796161,
1573589691891,
1573589544384,
1573589191602,
1573589157649,
1571851818762,
1571798070074,
1570500660879
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1509/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1509/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1509/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper identifies the limitation of graph neural networks and proposed new variants of graph neural works. However, the reviewers feel that the theory of the paper have some problems:\\n1. A major concern is that the theoretical analyses in this paper are limited to graphs sampled from the SBM model. It is unclear how these analyses can be generalized to real graphs. \\n2. The robustness definition is inconsistent. \\nFurthermore, more extensive experiments on more datasets will also be helpful.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision uploaded\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your effort in reviewing the manuscript! Your comments and remarks have been very helpful to improve the quality of the paper.\", \"we_have_made_three_major_additions_to_the_manuscript\": \"1) We performed additional experiments with baselines SGC and MixHop in both the clean data setting and the adversarial setting against five strong attack strategies.\\n2) We added a section (Section 4.4) to discuss the important issues regarding our methods, including (i) the difference between our method and those in the literature that are based on directly powered graph Laplacian or k-th order polynomials; (ii) the relation between adversarial robustness and spectral robustness; and (iii) the limitation of the present analysis to stochastic block model.\\n3) We also added an experiment to evaluate the runtime of the proposed method on a social network dataset (Social circles: Facebook).\\n\\nPlease find the detailed responses below. We thank you again for your constructive feedback. Please do not hesitate to let us know if there are further issues that we could help clarify.\"}",
"{\"title\": \"Response (part 1)\", \"comment\": \"Thank you for the insightful comments and suggestions! We are pleased that you like how the paper addressed the weakness of the existing graph Laplacian operators and that we proposed a new method with theoretical justifications. We also thank you for acknowledging our thorough experimental evaluation of the proposed method. In the following, please see our detailed responses to the concerns that you have raised.\", \"q\": \"I also hope the paper could have done the experiments on more datasets since there exists some evidence on the unreliability of evaluations on citation networks [3]. However, I do not think this point is critical since the paper did a great job of evaluating the robustness in various aspects and they all show consistent improvement.\", \"a\": \"Thank you for raising this important issue! Also, thank you for acknowledging our substantial efforts in evaluating the robustness in various aspects and demonstrating a consistent improvement of the proposed method. Indeed, By focusing on the three most common benchmark datasets, we are able to make sure that our implementation of existing methods achieves similar performance on the clean data setting. This makes it possible to reliably estimate their performance in an adversarial setting against various state-of-the-art attack strategies. We also thank the reviewer for pointing out the reference! The paper [3] points out the unreliability of evaluation on the benchmark datasets in the clean setting, but it remains to be seen whether it is also the case for the adversarial setting. By using various attack strategies, our aim is to control the variance of robustness evaluation, so to alleviate this unreliability issue as much as we can.\"}",
"{\"title\": \"Response (part 2)\", \"comment\": \"Q: The acronyms are slightly confusing to understand at first sight, since they first appear at the equations without any information on what the letters stand for. Something like a \\\"variable power network (VPN)\\\" would make the paper more pleasant to read.\", \"a\": \"Thank you for this comment! In the original paper [4], the knowledge is distilled between models, where knowledge is transferred from one to another. We attempted to draw an analogy to graphs, where the knowledge learned on the powered graph is transferred to the model that is applied to the original graph. We agree with the reviewer that using this term can be confusing to some readers. To improve clarity, we replaced this term with \\\"transfer\\\" in the revised manuscript.\\n\\nThank you once again for the useful feedback! Please do not hesitate to let us know if there are further issues that we could address.\", \"q\": \"In the r-GCN framework, the terminology distillation is slightly confusing. Was this choice of word used for making a connection to the knowledge distillation [4]? How is the knowledge distilled between graphs?\"}",
"{\"title\": \"Response (part 1)\", \"comment\": \"We thank the reviewer for raising many important points to improve the paper! We are also pleased that the reviewer acknowledges the improvement of our proposed methods for accuracy and robustness in both synthetic and real world benchmark graphs. We have added the additional baselines [2,3] in our evaluation to further substantiate our claim. We also discussed the issues and clarified the text to address the questions raised by the reviewer. Please see the detailed responses below.\", \"q\": \"In the paper, compared to the classic GCN, the authors replace the adjacent matrix with the proposed \\u201cvariable power operator\\u201d. However, the proposed \\u201cvariable power operator\\u201d is very similar to \\u201ck-th order polynomials of the Laplacian\\u201d, which has been fully discussed in [1]. Could you distinguish the differences between the proposed \\u201cvariable power operator\\u201d and \\u201ck-th order polynomials of the Laplacian\\u201d? And as the authors proposed to use high-order matrix some recent models which also explores high-order matrix such as [2, 3] may also need to be selected as baseline methods.\", \"a\": \"Thank you for this important question! There is an important difference between the proposed operator and the k-th order polynomial. If you power the graph Laplacian L to the k-th order, the resulting matrix has the same eigenvectors as of L, and only the eigenvalues are powered to the k-th order. Thus, the resulting k-th order polynomial has the same eigenvectors, so summing them up does not change the eigenvectors. However, the proposed variable power operator has radically different eigenvectors as the graph Laplacian (or it's k-th order). This is an important difference because we know (empirically) that the leading eigenvectors of a graph Laplacian are extremely sensitive to outliers under the SBM, and they often correspond to either tails or high-degree nodes (please see Fig. A.3 for an illustration). However, we have proved in Theorem 3 that the leading eigenvectors of the proposed operator can asymptotically recover the underlying community under SBM, and enjoys the \\\"spectral gap\\\" property. This is also a fundamental difference between our proposed method and those proposed in [2,3]. We added this discussion in Section 4.4.\\n\\nThank you for the suggestion to add the comparison with baselines SGC [2] and MixHop [3]. We have also added them in our experiments. For simplicity, we provide the results on dataset Citeseer, attacked by ADW3 here and include the full results in the revision:\\nMETHOD | Vanilla GCN | PowerLap2 | PowerLap3 | GCN(RNM) | IGCN(AR) | LNet | RGCN | SGC | MixHop | r-GCN | VPN\\n5% | 68.8 | 69.2 | 70.0 | 63.5 | 65.1 | 60.0 | 70.3 | 70.5 | 70.4 | 71.8 | 70.6\\n10% | 68.4 | 69.3 | 69.6 | 63.1 | 63.6 | 59.6 | 70.1 | 70.1 | 69.4 | 71.2 | 70.2\\n15% | 68.9 | 69.5 | 69.8 | 62.9 | 63.2 | 59.5 | 70.3 | 70.6 | 68.7 | 71.2 | 70.6\\n20% | 68.8 | 69.4 | 69.5 | 63.5 | 63.4 | 59.1 | 69.8 | 70.3 | 67.9 | 71.1 | 70.4\\n25% | 68.8| 69.2 | 69.3| 63.6 | 63.8 | 59.3 | 69.9 | 70.2 | 67.7 | 71.2 | 70.1\\n30% | 68.78| 69.2 | 69.3| 63.6 | 63.8 | 59.3 | 69.9 | 70.2 | 67.3 | 71.2 | 70.1\\nFrom the above, we can find that our proposed method is consistently outperforms the baselines even with [2, 3] added as baselines.\"}",
"{\"title\": \"Response (part 2)\", \"comment\": \"Q: As it is clearly defined in [4, 5], all the five attack methods adopted in the paper are poisoning (training time) attacks methods. However, the proposed models are claimed to defend against evasion (testing) attacks. Why choose the poisoning attacks methods here? The experiment with the evasion attack method [6] is suggested to be added.\", \"a\": \"Thank you for raising this important issue! We agree with the reviewer that the adjacent matrix could be very dense after powered several times, which is a general challenge for all high-order matrix based approaches. To alleviate this issue, we employ a simple sparsification strategy in our proposed method. Thank you for your suggestion for the additional experiment! We introduced a running time comparison experiment on the real world social network (Social circles: Facebook) [A1]. This dataset consists of 'circles' (or 'friends lists') from Facebook and becomes very dense from power 1 (4.71%) to power 2 (92.12%), thus is suitable for this scenario. The results are as follows:\\nMETHOD | Vanilla GCN | PowerLaplacian | IGCN(RNM) | IGCN(AR) | LNet | RGCN | SGC | MixHop | r-GCN | VPN |\\nRun Time (s) | 6.02 | 6.36 | 3.33 | 7.21 | 5.56 | 18.14 | 0.231 | 11.38 | 6.73 | 7.18\\nWe can find that the running efficiency of our proposed method is compatible with baselines and the density doesn't affect the efficiency significantly when the dataset is not too large. Moreover, in this paper, our primary goal is to improve the robustness. We leave the solution to resolve the scalability issue as a future direction.\\n\\n[A1] J. McAuley and J. Leskovec. Learning to Discover Social Circles in Ego Networks. NIPS, 2012. https://snap.stanford.edu/data/ego-Facebook.html\\n\\nWe thank the reviewer once again for the constructive feedback to improve the paper further. Please do not hesitate to let us know if there are further issues that we could address.\", \"q\": \"When the proposed models are applied to other kind of graph data, ie., social network, according to the small world theory, the \\\"variable power adjacency matrix\\\" would be very dense when with 2-layer GCN. The efficiency of the proposed might be an issue. Is it possible to add one experiment with demonstrating the running time on the real world social network?\"}",
"{\"title\": \"Response (part 1)\", \"comment\": \"We thank the reviewer for raising the interesting and important points! We are also pleased that the reviewer acknowledges our efforts to extensively evaluate the performance of our method and demonstrate its consistent improvement in both benign and adversarial situations. Please see our detailed responses below.\", \"q\": \"In the performance part of Section 4.2, the improvement of the performance by replacing Laplacian with VPN is marginal (compared with the original GCN). Furthermore, the performance of VPN is close to or sometimes worse than the baseline RGCN.\", \"a\": \"Thank you for this comment. Among all the evaluated methods (we also added two additional baselines, SGC and MixHop), the performance of our method is consistently ranked in the top two in the clean dataset (please see Table I for the updates). In the adversarial setting, our extensive experiments indicate that our method is more robust than the baselines for the majority of the time. Therefore, the key strength of the proposed method is that it can *simultaneously* improve accuracies in both the clean and the adversarial settings, and it is this combination that makes our method unique.\"}",
"{\"title\": \"Response (part 2)\", \"comment\": \"\", \"q\": \"There are many typos such as: \\u201cwith the presence of absence of edges\\u201d, \\u201cnormalizating\\u201d, \\u201casymptotoic\\u201d, \\u201cbenigh\\u201d, \\u201cajdacency\\u201d, \\u201csensitve\\u201d, \\u201cadajacent\\u201d, \\u201cone of the network\\u201d, etc.\", \"a\": \"Thank you for catching these typos! We have corrected them in the revised version.\\n\\nWe thank the reviewer once again for the very interesting and important remarks! Please do not hesitate to let us know if there are further issues that we could help clarify.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a graph convolutional operator based on graph powering and applies it to GCN architecture to improve the performance and robustness. This work is mainly motivated by the paper (Graph powering and spectral robustness, Abbe et al., 2018). The authors introduce the graph powering to graph convolution neural network domain to replace the original Laplacian operator. They further propose a graph sparsification/pruning strategy on the powered adjacency matrices in order to reduce the complexity and increase the robustness against adversarial attacks. They also provide theoretical analysis to prove that the proposed powering operator and subsequent methods have some spectral properties and theoretical feasibility. However, some conclusions are limited to the ideal situations or seem subjective. Extensive experiments are conducted to show better or comparable performance in both benign and adversarial situations.\", \"there_are_some_concerns_that_need_to_be_addressed_or_clarified\": \"A major concern is that the theoretical analyses in this paper are limited to graphs sampled from the SBM model. It is unclear how these analyses can be generalized to real graphs. Furthermore, the theorem 3 and proposition 5 are even limited to SBM model with 2 communities, which makes the analyses less convincing. \\n\\nSome of the arguments in the paper might be imprecise. For example, in Section 1.1, when discuss \\u201cwhy not graph Laplacian?\\u201d, a small spatial scope is claimed to be problematic. Although, it is correct for the GCN (Kipf & Welling, 2017), the powered Laplacian (mentioned earlier in the same section) does have a broad spatial scope. \\n\\nIt would be better if the authors could provide more details about the sparsification. Specifically, how to choose the threshold (adaptively). \\n\\nIn the performance part of Section 4.2, the improvement of the performance by replacing Laplacian with VPN is marginal (compared with the original GCN). Furthermore, the performance of VPN is close to or sometimes worse than the baseline RGCN.\", \"suggestions\": \"In the Informative and robust low-frequency spectral signal part of Section 4.3, it would be better if the authors can clarify the experiments setting. Is it using the low-frequency part (first few eigenvectors) to recover the signal and then using the recovered signal to perform the classification task? The titles of Figure 7 and Figure 8 are a little bit confusing.\", \"some_minor_problems\": \"\", \"there_are_many_typos_such_as\": \"\\u201cwith the presence of absence of edges\\u201d, \\u201cnormalizating\\u201d, \\u201casymptotoic\\u201d, \\u201cbenigh\\u201d, \\u201cajdacency\\u201d, \\u201csensitve\\u201d, \\u201cadajacent\\u201d, \\u201cone of the network\\u201d, etc.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors study the classic GCN and proposed the new convolution operator with wider spatial scope and robust properties. The proposed models could improve the accuracy in both benign and evasion setting on synthetic, ie., SBM dataset and real world benchmark graphs. However, I have the following questions for the authors:\\n\\n1. In the paper, compared to the classic GCN, the authors replace the adjacent matrix $A$ with the proposed \\u201cvariable power operator\\u201d. However, the proposed \\u201cvariable power operator\\u201d is very similar to \\u201ck-th order polynomials of the Laplacian\\u201d, which has been fully discussed in [1]. Could you distinguish the differences between the proposed \\u201cvariable power operator\\u201d and \\u201ck-th order polynomials of the Laplacian\\u201d? And as the authors proposed to use high-order matrix some recent models which also explores high-order matrix such as [2, 3] may also need to be selected as baseline methods. \\n\\n2. As it is clearly defined in [4,5], all the five attack methods adopted in the paper are poisoning (training time) attacks methods. However, the proposed models are claimed to defense against evasion (testing) attacks. Why choose the poisoning attacks methods here? The experiment with the evasion attack method [6] is suggested to be added. \\n\\n3. When the proposed models are applied to other kind of graph data, ie., social network, according to the small world theory, the \\\"variable power adjacency matrix\\\" would be very dense when $r>2$ with 2-layer GCN. The efficiency of the proposed might be an issue. Is it possible to add one experiment with demonstrating the running time on the real world social network?\\n\\n[1] Defferrard, Micha\\u00ebl, Xavier Bresson, and Pierre Vandergheynst. \\\"Convolutional neural networks on graphs with fast localized spectral filtering.\\\" Advances in neural information processing systems. 2016.\\n[2]Wu, Felix, et al. \\\"Simplifying graph convolutional networks.\\\" International Conference on Machine Learning. 2019.\\n[3] Abu-El-Haija, Sami, et al. \\\"Mixhop: Higher-order graph convolution architectures via sparsified neighborhood mixing.\\\" International Conference on Machine Learning. 2019.\\n[4]Z\\u00fcgner, Daniel, and Stephan G\\u00fcnnemann. \\\"Adversarial attacks on graph neural networks via meta learning.\\\" In ICLR 2019.\\n[5]Bojchevski, Aleksandar, and Stephan G\\u00fcnnemann. \\\"Adversarial Attacks on Node Embeddings via Graph Poisoning.\\\" International Conference on Machine Learning. 2019.\\n[6] Dai, Hanjun, et al. \\\"Adversarial attack on graph structured data.\\\" International Conference on Machine Learning. 2018.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new architecture for graph convolutional network based on graph powering operation which generates a new graph based on the shortest distance between pair of nodes. Its main motivation is to overcome the dominance of the first eigenvector in the existing GCN architectures based on the graph Laplacian operator. The theoretical evidence for the robustness is provided based on the signal-to-noise (SNR) ratio of the simplified stochastic block model (SBM). Two versions of the algorithms are proposed, namely the robust graph convolutional network (r-GCN) and variable power network (VPN). First, r-GCN is based on augmenting the graphs with graph powering operation. Next, VPN replaces the adjacency matrix of the graph convolutional operator by the newly proposed variable power operator. An additional sparsification scheme is proposed since the graph powering operation densifies the original graph.\\n\\nOverall, I like how the paper addresses the weakness of the existing graph Laplacian operators (dominance of the first eigenvector) and proposed a new method with theoretical justifications. Experiments were conducted thoroughly and results look great in the presented datasets. However, I also have concerns about the paper that I feel necessary to be resolved. \\n\\nMost importantly, the concept of \\\"robustness\\\" in GCN seems to be inconsistent throughout the paper. Namely, the meaning of robustness in the neural network (adversarial robustness) and the SBM literature (spectral robustness) are different. This point is crucial since the paper use the spectral robustness for justification of the method, yet experiments are done on the adversarial attacks. More specifically, adversarial training methods for neural networks, e.g., adversarial attack methods [1] considered in the paper, typically make the loss function (or output of network) more persistent against the small perturbation of inputs. On the other side, the robustness for SBM models, e.g., Theorem 3 in the paper, cares more about the preservation of the original input characteristics. For illustration, an invertible neural network [2] is not necessarily robust to adversarial attacks (the first meaning of robustness) but preserves all the input characteristics (the second meaning of robustness). \\n\\nI also hope the paper could have done the experiments on more datasets since there exists some evidence on the unreliability of evaluations on citation networks [3]. However, I do not think this point is critical since the paper did a great job of evaluating the robustness in various aspects and they all show consistent improvement.\", \"minor_questions_and_suggestions\": \"- The acronyms are slightly confusing to understand at first sight, since they first appear at the equations without any information on what the letters stand for. Something like a \\\"variable power network (VPN)\\\" would make the paper more pleasant to read.\\n- In the r-GCN framework, there might be an edge case where the powered graph is almost identical to another graph. Would there be any justification for avoiding this?\\n- In the r-GCN framework, the terminology distillation is slightly confusing. 
Was this choice of word used for making a connection to the knowledge distillation [4]? How is the knowledge distilled between graphs? \\n\\nReferences\\n[1] Bojchevski and G\\u00fcnnemann. Adversarial attacks on node embeddings via graph poisoning. ICML 2019\\n[2] Jacobsen et al., i-RevNet: Deep Invertible Networks. ICLR 2018 \\n[3] Shchur et al., Pitfalls of Graph Neural Network Evaluation, Arxiv 2018\\n[4] Hinton et al., Distilling the Knowledge in a Neural Network, Arxiv 2015\"}"
]
} |
ByeDl1BYvH | Global graph curvature | [
"Liudmila Prokhorenkova",
"Egor Samosvat",
"Pim van der Hoorn"
] | Recently, non-Euclidean spaces became popular for embedding structured data. However, determining a suitable geometry and, in particular, curvature for a given dataset is still an open problem. In this paper, we define a notion of global graph curvature, specifically catered to the problem of embedding graphs, and analyze the problem of estimating this curvature using only graph-based characteristics (without actual graph embedding). We show that the optimal curvature essentially depends on the dimensionality of the embedding space and the loss function one aims to minimize via embedding. We review the existing notions of local curvature (e.g., Ollivier-Ricci curvature) and analyze their properties theoretically and empirically. In particular, we show that such curvatures are often unable to properly estimate the global one. Hence, we propose a new estimator of global graph curvature specifically designed for the zero-one loss function. | [
"graph curvature",
"graph embedding",
"hyperbolic space",
"distortion",
"Ollivier curvature",
"Forman curvature"
] | Reject | https://openreview.net/pdf?id=ByeDl1BYvH | https://openreview.net/forum?id=ByeDl1BYvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Knise7HamQ",
"r1leUQf2sS",
"Skg2VSb3iB",
"rylYHNb3sS",
"SJelGQWnjH",
"r1lv_XXjjH",
"r1lwZoJmjH",
"rygor9kXoH",
"SyliHdJmjr",
"HJxYwVfw9S",
"rkei-QhAKB",
"SJlybsupKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725134,
1573819207682,
1573815604408,
1573815361335,
1573815048273,
1573757807330,
1573219070518,
1573218883302,
1573218370974,
1572443232784,
1571894019184,
1571814135195
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1508/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1508/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1508/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1508/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies the problem of embedding graphs into continuous spaces. The authors focus on determining the correct dimension and curvature to minimize distortion or a threshold loss of the embedding. The authors consider a variety of existing notions of curvature for graphs, introduce a notion of global curvature for the entire graph, and how to efficiently compute it.\\n\\nReviewers were positive about the problem under study, but agreed that the current manuscript somewhat lacks a clear contribution. They also pointed out that the goal of using a global notion of curvature should be better motivated. For these reasons, the AC recommends rejection at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revised paper\", \"comment\": \"Thank you for the suggestions! We uploaded a revised paper, see our comment above https://openreview.net/forum?id=ByeDl1BYvH¬eId=SJelGQWnjH . In particular, we tried to improve the motivation part. We also added a comment on the different notions of distortion.\"}",
"{\"title\": \"Revised paper\", \"comment\": \"Thank you for the feedback!\\n\\nWe uploaded a revised paper, the improvements are listed in the comment above https://openreview.net/forum?id=ByeDl1BYvH¬eId=SJelGQWnjH\"}",
"{\"title\": \"Revised paper\", \"comment\": \"Thank you for the feedback!\\n\\nWe uploaded a revised paper, see our comment above https://openreview.net/forum?id=ByeDl1BYvH¬eId=SJelGQWnjH \\n\\nIn particular, we addressed your question on more general graphs via additional experiments (Figures 6 and 7).\\n\\nWe also improved and extended the experimental part and tried to better motivate the applicability of our research.\"}",
"{\"title\": \"Revised paper\", \"comment\": \"We would like to thank the reviewers for their comments and suggestions. We uploaded a revised version of the paper, where we tried to address all concerns.\", \"we_made_the_following_updates\": [\"Improved the motivation part in the text as suggested by Reviewer 2 (for complete details see our reply to Reviewer 2).\", \"Let us remark that when starting the project we hoped that global curvature could be an intrinsic property of a network but as the result of our research we realized (and proved for some graphs) that it is also influenced by the properties of an ambient space, e.g. dimensionality. In the text we put additional emphasis on the important fact that usually a network which seems to be negatively curved (for some small dimension) becomes more neutral as dimensionality grows (which is confirmed by our theory and experiments). This means that in large dimensions hyperbolic embeddings may not be needed.\", \"Regarding the question of Reviewer 4 on combinations of simple graphs, we totally agree that it is an important question and, while it is extremely hard to give a complete answer theoretically, we did empirical simulations, added a section with an illustration of the results (Appendix C) and referred to this at the beginning of Section 4.3.\", \"We agree that experiments were a bit inconclusive and tried to improve this part. Now experiments show that 1) global curvature significantly depends on dimension and loss, 2) all existing curvatures are unable to capture the global one. More illustrations of these conclusions are in Appendix E.5. In Appendix E.6 we added a more detailed analysis of the proposed volume-based estimator, where we show that it is able to capture well whether the space is negatively curved and it also predicts the behavior of optimal curvature as dimension grows.\", \"To support our conclusion that the optimal curvature depends on a loss function, we also considered more threshold-based loss functions including the correlation coefficient which is (in some sense) unbiased, as discussed in a very recent paper arXiv:1911.04773. While our theoretical results hold for all loss functions, for more complex graphs the optimal curvature may indeed depend on a particular threshold-base loss.\", \"We also made some other smaller changes, as we promised in our reply to Reviewer 2.\", \"We plan to make a small update of the paper later today (by adding some illustrative figures to Appendix E.5).\"]}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks for your answers to my questions. One response:\\n\\n- Could you, please, clarify this question? Is this about the intuition or are there any particular transition which is unclear?\\n\\nOriginally I was confused about the expression near the bottom of Page 14, where D_min = \\\\Omega(min(R,1)/n) in hyperbolic space. The initial reason for my confusion is that you should be able to drive this lower bound down to 0 for n a constant, but at first glance that formula doesn't appear to behave that way.\\n\\nIn effect though, you're getting this by making your R go to 0 by increasing c. So it's behaving as expected.\", \"i_should_say_one_more_thing_for_this_part\": \"although we have this nice exact distortion formula D(f), in the math literature on embeddings, most of the time the distortion is up to some constant factor, e.g., distortion(f) = min_c 1/c * D(f), or even just measured as the worst-case type of thing, max_{u,v} d(f(u),f(v))/d_G(u,v) / min_{u,v} d(f(u),f(v))/d_G(u,v).\\n\\nUnder these definitions, I think the only lower bound is 0 for trees in hyperbolic space, even with R and n fixed.\\n\\nAnyway, this isn't directly relevant for your paper, but it might be good to comment on the different notions of distortion, since there's slightly different uses in other fields.\"}",
"{\"title\": \"To Reviewer 2: Reply to comments and questions\", \"comment\": \"Thank you for the careful reading and many valuable comments.\\n\\n\\u201cThe distinction between what the authors think of as \\\"global\\\" and \\\"local\\\" curvatures is confusing and should be explained further. From what I can see, the authors think of global as being a scalar, and local as being defined at each point; intuitively these seem like pretty bad labels. I would think of the \\\"global\\\" one as being coarse, and the \\\"local\\\" as being more refined, since it contains a lot more information.\\u201c\\n\\nOur use of \\u201cglobal\\u201d and \\u201clocal\\u201d is based on their uses in complex network analysis. Here a local property of a graph refers to a property that depends on some small neighborhood of a node, while a global property depends on the whole graph. An example is the local clustering coefficient, which computes the fraction of links between the neighbors of a given node, and the global clustering, which computes the fraction of triangles in the whole networks compared to the total number of paths of length two. Also note that both global and local properties can be scalars, vectors or even functions. So although in this case the global curvature is a scalar, this does not mean that any notion of global curvature has to be, nor that we only consider scalars as global properties.\\n\\n\\u201cIf the idea is to simply use one space and not a product, there are in fact various spaces with non-constant curvature, e.g., the complex manifold CH^n.\\u201d\\n\\nThank you for pointing us to these manifolds. We did consider other manifolds, such as the Bolza surface (compact manifold with constant negative curvature). However, computing distances on this manifold (which is obtained as a the factorization of the Poincare disc over some group) is non-trivial. Thus, to make sure we could efficiently implement the computations needed for our experiments we initially choose to stick with these three classes of manifolds. From a practical perspective, hyperbolic, euclidean and spherical spaces are already widely used and there are embedding techniques developed for them.\\n\\n\\u201cWhy do your need your graph to be unweighted at the very beginning of Section 2?\\u201d\\n\\nWe want it to be easier to define both distortion and threshold loss. If there is a weighted graph, then one has to convert this weight to a distance to compute distortion or to 0-1 to compute threshold loss. This can be done and the analysis can be extended, but this would add another dimension to the research, so we decided to start from unweighted and undirected graphs.\\n\\n\\u201cOn the other hand, you may want to define your graph to be connected for the distortion function to be well-defined.\\u201d\\n\\nThanks for pointing this out, we\\u2019ll add this assumption. Note that in practice it is reasonable to assume that a graph is connected since connected components can be embedded separately.\\n\\n\\u201cThe statement \\\"1, graph distances are hard to preserve:...\\\" isn't really meaningful, since for the example in 4.3.1, it is possible to embed that graph arbitrarily well. That is, even if the distortion isn't 0, it can be made as small as we desire. 
There are indeed graphs that are hard to embed (i.e., have lower bounds that do not go to 0) in reasonably tractable spaces, and the authors actually prove such a result, but the star graph is not one of these.\\u201d\\n\\nThank you, this motivation may indeed seem unclear. We wanted to show that the star with 4 nodes is an example of a very small graph, where only minus infinite curvature works. But minus infinity gives a degenerate tree-like structure and if a graph is not a tree, then it becomes a problem, as illustrated by our example with bipartite graphs. We will change this statement and make it more clear.\\n\\n\\u201cThere's various tricks that actually make some of these graphs very easy to embed. One example is K_n in 4.33. Instead of just embedding K_n, embed the star graph on n+1 nodes, and place a weight of 1/2 on each edge. Now every pair of (non-central) nodes is at distance exactly. 1, and this thing is embeddable into hyperbolic space, etc. Interestingly, this is actually predicted by the Gromov hyperbolicity (for K_n_ that the authors briefly mention.\\u201d\\n\\nIndeed, in the proof of Theorem 4.4 (at the very end of section B.4, on page 15) we actually refer to this trick with converting K_n to star. This is the reason why cliques may have two minima - a positive one and minus infinity. We\\u2019ll mention this trick explicitly in the main text. \\n\\n\\u201cCan the authors write out what's going on for the hyperbolic lower bound on D_min in the proof of Thm. 4.1?\\u201d\\n\\nCould you, please, clarify this question? Is this about the intuition or are there any particular transition which is unclear?\"}",
"{\"title\": \"To Reviewer 2: Motivation behind our research\", \"comment\": \"1) We chose spaces of constant curvature since such spaces are currently used in various applications (like, e.g., Poincare GloVe) and also relatively easy to implement and use in practice. While product space proposed by Gu et al. are able to achieve a superior quality, they are harder to implement and they require the signature (combination of spaces) to be chosen before the embedding (in their experiments, Gu et al. performed grid search to choose a combination of spaces to use).\\n\\n2) Our approach can be used as a tool in some more advanced approaches. E.g., one could embed a graph into a space of constant curvature and then refine this embedding by embedding the residual in some other space of another constant curvature. Or different parts of the graph can be embedded into different spaces.\\n\\n3) These constant curvature spaces are easier to understand and therefore to obtain results in, as we do in this paper. These results and insights can eventually be extrapolated to analyze curvature in other, more complicated spaces. As far as we know, the problem of analyzing different notions of curvature for embedding graphs (even in simple spaces) was not considered before, so our aim is to stimulate research in this direction.\\nThe motivation part will be improved in the revised version of the paper.\"}",
"{\"title\": \"To Reviewer 4: Reply to comments and questions\", \"comment\": \"Indeed, the main contributions of this paper are theory and the acquired insights. In particular, we proved the limitation of all existing simple estimators. The main aim is to bring attention to the problem of curvature computation for embeddings and start research in this important direction. Note that we also propose a simple curvature estimator which has desired properties: it depends on dimension and designed for threshold-based loss.\\n\\n\\u201cWhy is it reasonable to take these curvature metrics and use them directly as the curvature of the ambient space at all? Especially given that Ollivier curvature belongs to a small interval and Forman curvature is always negative.\\u201d\\n\\nThese curvatures are widely used in complex network analysis, so our aim was to test their applicability in practice. Indeed, Ollivier curvature has a limited interval and Forman curvature is often highly negative (but not always, see \\\\hat{F} in Section 4.3.3). Additionally, we also consider a heuristic curvature that was actually used in practice. However, the main drawback of all these curvatures is the fact that they do not depend on dimension or loss function, which is crucial, as we show in this paper.\\n\\n\\u201cDoes any of the graph family analysis carry over to more general graphs? For example, assuming some priors about the appearance of these families as subgraphs, or the observed features of real networks in [1]?\\u201d\\n\\nThis is an excellent question and definitely on our list of future projects. The difficulty is that it is not simply the joint appearance of families as subgraphs but also how they are related among each other. For example, it matters if a star has one peripheral node that belongs to a cycle where some of its nodes also belong to the star or none of them do. The real issue is that curvature is not simply a function of subgraph occurrences, but really a function of the intricate graph structure. We are currently working on the analysis of such graph combinations and will reply with more details when we get some insights.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper considers the problem of embedding graphs into continuous spaces. The emphasis is on determining the correct dimension and curvature to minimize distortion or a threshold loss of the embedding.\", \"pros\": \"The problem is clearly stated and easy to understand.\\nThe limitations of the three local curvatures are shown empirically and theoretically\", \"cons\": \"Experiments seem inconclusive, with no discussion of the results\\nProposed global curvature characterizes the optimal embedding parameters but not a different, efficiently calculable discrete curvature to approximate them\\nAnalysis of particular graph families doesn\\u2019t necessarily inform what to expect from embedding large graph data\\n\\nOverall, I lean towards rejecting this paper. The problem does seem an important one, but it seems the main contribution of this paper is comparing the local curvatures against an oracle for determining optimal curvature in embedding space, without putting forward an alternative method.\", \"questions\": \"Why is it reasonable to take these curvature metrics and use them directly as the curvature of the ambient space at all? Especially given that Ollivier curvature belongs to a small interval and Forman curvature is always negative.\\n\\nDoes any of the graph family analysis carry over to more general graphs? For example, assuming some priors about the appearance of these families as subgraphs, or the observed features of real networks in [1]?\\n\\n[1] J. Leskovec, D. Chakrabarti, J. Kleinberg, C. Faloutsos, and Z. Gharamani. Kronecker graphs: an approach to modeling networks. arXiv:0812.4905v1, 2008.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents a novel notion named glocal graph curvature, which offers a solution to determine the optimal curvature for embedding. In particular, the global graph curvature depends on both dimension and loss function used for the network embedding. Besides, the authors studied the existing local curvatures and show that the existing graph curvatures may not be able to properly capture the global graph structure curvature. Extensive results demonstrate the statements proposed in the paper. In general, I like the paper due to its nice presentation, interesting view of graph curvature, and solid theoretical analysis. However, I am not familiar with graph curvature. All I can say is the approach is intuitively appealing, the text is well written and easy to follow, even for an outsider. I do not know any related works or what to expect from the results. I could not find anything wrong with this paper, but also do not have any intelligent questions to ask.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper is about curvature as a general concept for embeddings and ML applications. The origin of this idea is that various researchers have studied embedding graphs into non-Euclidean spaces. Euclidean space is flat (it has zero curvature), while non-Euclidean spaces have different curvature; e.g., hyperbolic space has a constant negative curvature. It was noted that trees don't embed well into flat space, while they embed arbitrarily well into hyperbolic space.\\n\\nAll of the notions of curvature, however, are defined for continuous spaces, and have to be matched in some sense to a discrete notion that applies to graphs beyond a particular class like trees. The authors study this setting, consider a variety of existing notions of curvature for graphs, introduce a notion of global curvature for the entire graph, and now to efficiently compute it. They also consider allowing these concepts to vary with the downstream task, as represented by the loss function.\\n\\n\\nPros, Cons, and Recommendation\\n\\nThe study of the various proposed distances is fairly interesting, although it's hard to say what the takeaway here is. I think the part I'm struggling with the most is the motivation. Why do we care about using a space of constant curvature? True, we do so when it's appropriate---we embed trees into hyperbolic space. But when we have a more complicated and less regular graph, then compressing all of that curvature information into a scalar doesn't seem like a good idea, and indeed that's the point of the Gu et al work that's being built on here: it mixes and matches various component spaces that each have constant curvature, but altogether have varying curvature.\\n\\nAt the same time, I like the idea of studying a bunch of proposed measures and attempting to gain new insights. This is a pretty unusual paper for ICLR, since the experimental section is really barely there, and what's most interesting are really these atomic insights. If the authors work on the motivation I would consider accepting it---for now I gave it weak accept.\", \"comments\": [\"The distinction between what the authors think of as \\\"global\\\" and \\\"local\\\" curvatures is confusing and should be explained further. From what I can see, the authors think of global as being a scalar, and local as being defined at each point; intuitively these seem like pretty bad labels. I would think of the \\\"global\\\" one as being coarse, and the \\\"local\\\" as being more refined, since it contains a lot more information. This is also related to the motivation: why stuff all of this information into one single scalar curvature? It forces you to take averages, while Gu et al defined a distribution over the local curvatures.\", \"If the idea is to simply use one space and not a product, there are in fact various spaces with non-constant curvature, e.g., the complex manifold CH^n.\", \"Why do your need your graph to be unweighted at the very beginning of Section 2? 
On the other hand, you may want to define your graph to be connected for the distortion function to be well-defined.\", \"The statement \\\"1, graph distances are hard to preserve:...\\\" isn't really meaningful, since for the example in 4.3.1, it is possible to embed that graph arbitrarily well. That is, even if the distortion isn't 0, it can be made as small as we desire. There are indeed graphs that are hard to embed (i.e., have lower bounds that do not go to 0) in reasonably tractable spaces, and the authors actually prove such a result, but the star graph is not one of these.\", \"There's various tricks that actually make some of these graphs very easy to embed. One example is K_n in 4.33. Instead of just embedding K_n, embed the star graph on n+1 nodes, and place a weight of 1/2 on each edge. Now every pair of (non-central) nodes is at distance exactly. 1, and this thing is embeddable into hyperbolic space, etc. Interestingly, this is actually predicted by the Gromov hyperbolicity (for K_n_ that the authors briefly mention.\", \"The reason I bring this up is that even if the authors' project is successful, simple graph transformations may induce much better embeddings. That's fine, though, although it should be mentioned.\", \"Can the authors write out what's going on for the hyperbolic lower bound on D_min in the proof of Thm. 4.1?\"]}"
]
} |
BklIxyHKDr | Deep k-NN for Noisy Labels | [
"Dara Bahri",
"Heinrich Jiang",
"Maya Gupta"
] | Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than some recently proposed methods. We also provide new statistical guarantees on its efficacy. | [
"deep",
"noisy labels deep",
"examples",
"noisy labels",
"performance",
"hard",
"empirical study",
"simple"
] | Reject | https://openreview.net/pdf?id=BklIxyHKDr | https://openreview.net/forum?id=BklIxyHKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"GOXrTep2ho",
"HyeMkEW2ir",
"S1xjoW-2ir",
"rkxFL70AYH",
"S1lnl2tCKB",
"Bye4OdB4KS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725104,
1573815258371,
1573814690755,
1571902289263,
1571884020128,
1571211371932
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1507/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1507/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1507/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1507/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1507/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposed and analyze a k-NN method for identifying corrupted labels for training deep neural networks.\\n\\nAlthough a reviewer pointed out that the noisy k-NN contribution is interesting, I think the paper can be much improved further due to the followings:\\n\\n(a) Lack of state-of-the-art baselines to compare.\\n(b) Lack of important recent related work, i.e., \\\"Robust Inference via Generative Classifiers for Handling Noisy Labels\\\" from ICML 2019 (see https://arxiv.org/abs/1901.11300). The paper also runs a clustering-like algorithm for handling noisy labels, and the authors should compare and discuss why the proposed method is superior.\\n(c) Poor write-up, e.g., address what is missing in existing methods from many different perspectives as this is a quite well-studied popular problem.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"response\", \"comment\": \"Re (2):\\nThe reviewer is correct that the analysis does not look at the intermediate features. Still, we believe that our representation-agnostic results are quite general and significant. They are general because the results say that given any initial representation, under mild assumptions, the kNN will be able to recover the noisy labels at least as well as any method up to logarithmic factors. The results are original --they use some insights from a recent analysis of kNN in the noiseless setting (Jiang 2019) and we provide a novel nonparametric assumption (kNN spread, Def 2) and show precise theoretical guarantees under this quantity as well as the other quantities in more classical results. Moreover, compared to previous works which analyze the noisy kNN setting (which we\\u2019ve cited), to the best of our knowledge, our results are the only finite-sample results while the previous works are asymptotic.\\n\\nRe (3):\\nWe consider Gold Loss Correction a state-of-the-art method and the other methods very competitive. We have added two new baselines as well.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your detailed review.\", \"re\": \"\\\"It would be valuable to the scientific community if the authors can comment on:\\nRolnick et al. (2018). Deep Learning is Robust to Massive Label Noise. https://arxiv.org/pdf/1705.10694.pdf \\n\\n- Some similar looking ideas have been proposed in: \\n\\nGao et al. (2018). On the Resistance of Nearest Neighbor To Random Noisy Labels\", \"learning_with_confident_examples\": \"Rank Pruning for Robust Classification with Noisy Labels\\nNorthcutt et al. (2017). https://arxiv.org/abs/1705.01936\\n\\nThis work, while relevant, focuses solely on binary classification while our method is applicable to general multiclass classification.\\n-----------\", \"https\": \"//globaljournals.org/GJCST_Volume10/7-A-Modification-on-K-Nearest-Neighbor-Classifier.pdf\\n\\nso the authors should contrast their method/analysis against those papers.\\\"\\n\\nWe have now commented on these papers in the Related Works.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a k-NN method for identifying corrupted labels, and then applies this k-NN in the representation space of a deep neural net rather than the original feature space. Overall the paper is well written and the results look quite convincing\\n\\nThe theory appears to be important (if somewhat straightforward-looking) contributions of existing k-NN theory to the corrupted labels setting, based on the key quantity the authors defined as S_k, the minimum k-NN spread.\\n\\nSince the theory highly depends on this quantity, the authors should after Definition 2 justify why they chose to base their results around S_k, and why intuitively it is the right quantity.\\n\\n- I encourage another experiment where the label corruption is not completely at random, that is it depends on the values of x itself. \\n\\n- In particular I encourage a synthetic experiment where the authors look at corruptions with varying S_k (minimum k-NN spread) values, to empirically verify the their theory holds and this really is a meaningful quantity to consider in the label-corruption setting.\\n\\n- Why don't the authors show the results for vanilla deep kNN trained on the full (noisy + clean) dataset in their experiments. This seems important to ascertain the benefits that might be attributed to simply switching to kNN. Or is deep kNN generally worse that the original model trained on the full dataset?\\n\\n- Why didn't the authors show the original model trained on the full dataset?\\nIs it because it always does worse than all the baselines considered in the paper?\\nI would expect it sometimes does much better than Control (eg. when noise rates are low), and this is the straightforward approach must practitioners would use.\\n\\n- Why don't the authors present the accuracy of the k-NN method at identifying corrupted datapoints vs the other methods that aim to explicitly identify the corrupted datapoints?\\nIn general, it seems the authors did not compare other filtering baselines, which would be more related to their method, for example:\", \"learning_with_confident_examples\": \"Rank Pruning for Robust Classification with Noisy Labels\\nNorthcutt et al. (2017). https://arxiv.org/abs/1705.01936\\n\\n\\n- It would be valuable to the scientific community if the authors can comment on: \\nRolnick et al. (2018). Deep Learning is Robust to Massive Label Noise. https://arxiv.org/pdf/1705.10694.pdf \\n\\n- Some similar looking ideas have been proposed in: \\n\\nGao et al. (2018). On the Resistance of Nearest Neighbor To Random Noisy Labels\", \"https\": \"//globaljournals.org/GJCST_Volume10/7-A-Modification-on-K-Nearest-Neighbor-Classifier.pdf\\n\\nso the authors should contrast their method/analysis against those papers.\\n\\n\\n- In Thm 1: \\\"w.r.t. X\\\" should be \\\"w.r.t. x\\\" (lower case)\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose to apply k-NN on the intermediate representations of neural networks for data cleaning. They prove some theoretical properties of k-NN and demonstrate that the proposed data cleaning approach is effective for some tasks.\\n\\nIt is recommended to reject the paper, with the following concerns in mind.\\n\\n(1) The proposed approach is not deeply studied. For instance, what's the difference of applying k-NN on raw features, the earlier representations, the later representations, or even \\\"all\\\" representations? What's the effect of similarity/distance functions on the k-NN? Without the deeper study, Section 3 is at best a naive use of k-NN for data cleaning, and it is not clear whether the contribution is substantial.\\n\\n(2) The theoretical analysis does not seem related to applying k-NN to *deep learning* intermediate features. It seems more related to applying k-NN in general. If so, it is also not clear how the theoretical analysis advances current knowledge about k-NN. Are the results original or known? What are the best theoretical results in the literature to compare with?\\n\\nI thank the authors for answering about the originality. I agree that the original theoretical results is an important contribution on its own, but putting it in the context of deep learning is arguably not the best angle to present the contribution.\\n\\n(3) It is not clear whether the experiments are compared with respect to state-of-the-art (or at least it is hard to see from Section 2). It seems that rather straightforward baselines are being compared.\\n\\nI thank the authors for clarifying this.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provided \\\"an empirical study showing that a simple k-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than some recently proposed methods\\\". Even though it has many theoretical analysis and experiments, the paper itself is poorly written. There is no intuitive discussion on what is missing in existing methods, why the proposed method can be better, and when the proposed method may also fail.\\n\\nNote that an important related work is missing, namely \\\"Robust Inference via Generative Classifiers for Handling Noisy Labels\\\" from ICML 2019 (see https://arxiv.org/abs/1901.11300). The idea of that paper is also making use of the learned representations of ANY discriminative neural classifier, where the geometric information of the hidden feature spaces can help to distinguish correctly and incorrectly labeled training data. That paper was a 20-min long oral presentation at Hall A (i.e., one of the most crowded sessions), and the authors should really compare with it both conceptually and experimentally.\"}"
]
} |
Skg8gJBFvr | Filling the Soap Bubbles: Efficient Black-Box Adversarial Certification with Non-Gaussian Smoothing | [
"Dinghuai Zhang*",
"Mao Ye*",
"Chengyue Gong*",
"Zhanxing Zhu",
"Qiang Liu"
] | Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for $\ell_2$ perturbation. We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness via designing smoothing distributions, helping to design two new families of non-Gaussian smoothing distributions that work more efficiently for $\ell_2$ and $\ell_\infty$ attacks, respectively. Our proposed methods achieve better results than previous works and provide a new perspective on randomized smoothing certification. | [
"Adversarial Certification",
"Randomized Smoothing",
"Functional Optimization"
] | Reject | https://openreview.net/pdf?id=Skg8gJBFvr | https://openreview.net/forum?id=Skg8gJBFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Vgu2JZ1WMj",
"r1x8lRc2jB",
"BJxlo2c2jB",
"B1eytnq3jr",
"rylAi45hor",
"BygQJM9noS",
"B1lWEtgb9B",
"rJejRI6aKS",
"ByllmnE6tH",
"Hkgqg8GpFH",
"rkeRwZqa_S",
"SJlZpBx1OB",
"BJxwqasovr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798725075,
1573854702118,
1573854359985,
1573854326962,
1573852325970,
1573851610753,
1572043049028,
1571833555234,
1571798040084,
1571788273960,
1570771301751,
1569813944601,
1569598862879
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1506/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1506/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1506/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1506/Authors"
],
[
"~Greg_Yang1"
],
[
"ICLR.cc/2020/Conference/Paper1506/Authors"
],
[
"~Greg_Yang1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors extend the framework of randomized smoothing to handle non-Gaussian smoothing distribution and use this to show that they can construct smoothed models that perform well against l2 and linf adversarial attacks. They show that the resulting framework can obtain state-of-the-art certified robustness results improving upon prior work.\\n\\nWhile the paper contains several interesting ideas, the reviewers were concerned about several technical flaws and omissions from the paper:\\n\\n1) A theorem on strong duality was incorrect in the initial version of the paper, though this was fixed in the rebuttal. However, the reasoning of the authors on the \\\"fundamental trade-off\\\" is specific to the particular framework they consider, and is not really a fundamental trade-off.\\n\\n2) The justification for the new family of distributions constructed by the author is not very clear and the experiments only show marginal improvements over prior work. Thus, the significance of this contribution is not clear.\\n\\nSome of the issues were clarified during the rebuttal, but the reviewers remained unconvinced about the above points.\\n\\nThus, the paper cannot be accepted in its current form.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of paper revision\", \"comment\": \"We fix the issue of Theorem 1 pointed out by Reviewer #3. Our lower bound is still tight (strong duality holds) for all the cases we studied.\"}",
"{\"title\": \"Thanks for your review. 2/2\", \"comment\": \"3. (sketchy justification): 'The paper justifies a smoothing distribution that concentrates more mass around the center as follows: 'This phenomenon makes it problematic to use standard Gaussian distribution for adversarial certification, because one would expect that the smoothing distribution should concentrate around the center (the original image) in order to make the smoothed classifier close to the original classifier (and hence accurate).' I don't see why we should want more mass near the center---in the limit as we move all the mass towards the center and get the original classifier, our certified bound will be terrible, so it's not clear why moving in that direction should be expected to help. Indeed, the experimental gains are minimal (1 to 3 percentage points) and on methods that were not carefully tuned, so one could imagine that the baseline method could be improved by that much just with careful tuning.'\", \"reply\": \"As we have shown in Eq (7), the certified lower bound can be interpreted as two terms: the \\\"accuracy\\\" and the \\\"robustness\\\". And a good certified lower bound is based on a good trade-off between these two terms: Too much perturbation on the original image will cause the classifier gives prediction with low accuracy while having good robustness (low lambda-total variation). On the other hand, as you said, in the extreme case, if we move all the mass to center, the certified bound would be terrible. This is because in this case, the lambda-total variation would be very large but meanwhile, we have high prediction accuracy. The reason we have better accuracy is obvious: the perturbed image is more close to the original image.\\n\\nAgain, the key argument of our paper is on: using the proposed distribution to achieve a better trade-off between accuracy and robustness. It is not that meaningful to have an argument that considering the 'limit' case because the art in this area is playing with the trade-off.\\n\\nRegarding your question on the experiment, we want to point out that in all the experiments, we simply use the pre-trained models provided by the paper we compare. We believe that those pre-trained models are well tuned to have good performance for the baseline method. We expect to get similar (or better) results with further tuning.\\n\\nWe also want to highlight that the improvement we made is not marginal at all, comparing with several recent works in this area, including the concurrent submission we mentioned (https://openreview.net/forum?id=SJlKrkSFPH). \\n\\n4. I similarly didn't understand the justification for the mixed L-inf / L-2 distribution for L-infinity verification. The main justification was \\\"The motivation is that this allows us to allocate more probability mass along the 'pointy' directions with larger norm, and hence decrease the maximum distance term max \\u03b4\\u2208B`\\u221e,rDF(\\u03bb\\u03c00\\u2016\\u03c0\\u03b4)\\\" This is at the very least too brief for justifying the main experimental innovation in the paper (here at least the empirical improvements are bigger, although still not huge).\\n\\nThe proposed mixed L-inf/L-2 distribution has two terms: ||z||_\\\\infty^-k and \\\\exp(-||z||^2_2/\\\\sigma^2). The second term controls the tail behavior of the proposed distribution. We justify why we use \\\\ell_2 norm for this term by Theorem 4. The first controls how the distribution shrink towards the center. 
As illustrated in Figure3, as the \\\\ell_infty is a hypercube and using \\\\ell_norm will have less total variation and thus gives better performance. \\n\\nBesides, to the best of our knowledge, we have achieved the best certified accuracy on l_infty certification on Imagenet at the time of submission.\"}",
"{\"title\": \"Thanks for your review. 1/2\", \"comment\": \"Response to Reviewer #3\\n\\nThank you very for your time and comments. Here we provide our response. We hope you could consider raising your score if they addresses your concerns. Otherwise, please let us know and we will try our best to improve and clarify. We hope the reviewer could take the overall benefit and potentials of our new framework into account when it comes to the decision. \\n\\nAs we also point to the other reviewers and AC, there is an independent and concurrent submission to ICLR that overlaps with our work in both our basic framework and many detailed algorithmic choices. \\\"A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES (https://openreview.net/forum?id=SJlKrkSFPH)\\\". Because of the high overlap with our work, we hope you could also check this paper and their reviews and calibrate your score accordingly, since if our work is rejected while their accepted only due to uncalibrated reviews, it would block the opportunity for publishing our work in the future. \\n\\n1. (main theorem is incorrect): 'Claim 3 in the appendix is wrong. The fact that (delta', f') outperforms (delta-bar, f-bar) with respect to lambda* does not imply that (delta', f', lambda*) is a better solution to the primal problem, because we must take max over lambda and the maximizing lambda need not be lambda*. In particular if f' doesn't satisfy the constraint we would instead take lambda to infinity.'\", \"reply\": \"Thanks very much for pointing out this! We have *fixed the issue*. The strong duality is the previous submission is wrong. In the newly updated version, we show that strong duality holds for the case of our experiment, that is, the proposed method is tight for the certification problem we studied. Please check the updated draft for details.\\n\\n2. 'The paper makes several references (in italics) to a \\\"fundamental trade-off between accuracy and robustness\\\". But a fundamental trade-off means that *any* method that attains good accuracy must sacrifice robustness and vice versa; this requires a \\\"for all\\\" statement, i.e. a lower bound. All the paper shows is that the *particular upper bound* exhibits a trade-off (and even then, the notions of \\\"accuracy\\\" and \\\"robustness\\\" are merely interpretations of quantities in the bound; it's not clear why the robustness term in particular is tied to more standard notions of robustness).'\\n\\nTheoretically, as shown by our theory, if we view the adversarial certification problem as a constraint optimization problem and set the space of classifier space as F_[0,1], the solution of the lower bound can be decomposed into the two terms in Equ(7). Under the assumption in our paper, this is a 'for all' statement and this statement holds. \\n\\nSecondly, the adversarial defense is on the trade-off between accuracy and robustness. We believe this point of view should have been widely accepted by the community, e.g. [1, 2, 3]. There are many ways to mathematically characterize this trade-off and this is exactly one of our contributions. \\n\\n[1] Zhang, Hongyang, et al. \\\"Theoretically principled trade-off between robustness and accuracy.\\\" International conference on machine learning. 2019.\\n\\n[2] Raghunathan, Aditi, et al. \\\"Adversarial Training Can Hurt Generalization.\\\" arXiv preprint arXiv:1906.06032 (2019).\\n\\n[3] Shi, Yujun, et al. 
\\\"Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric.\\\" arXiv preprint arXiv:1906.02494 (2019).\"}",
"{\"title\": \"Thanks for your review! Below please find our response.\", \"comment\": \"Response to Reviewer #2\\n\\nThank you very for your time and comments. We hope you can re-consider your evaluation based on the new framework that we develop, which significantly generalizes and simplify the derivations in existing results. We believe our empirical results are sufficient to support and demonstrate the potential benefit of this framework (see response below). \\n\\nWe also want to point out an independent and concurrent submission to ICLR that overlaps with our work in both our basic framework and many detailed algorithmic choices. \\\"A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES (https://openreview.net/forum?id=SJlKrkSFPH)\\\". Because of the high overlap with our work, we hope you could also check this paper and their reviews and calibrate your score accordingly, since having our work rejected while their work accepted due to uncalibrated review would block the opportunity for publishing our work in the future.\", \"we_response_to_the_main_arguments_here\": \"1. You are correct that models trained with our proposed noise should be used ideally. But it is very computationally expensive to conduct (especially when training on ImageNet). So we used Cohen's as a standard setting, which I think it forms a fair comparison. \\n\\nWe used Salman et al's model for Linfty certification because we found Salman's model performs better for L_infty certification. This is likely because Cohen's model is trained with Gaussian noise data augmentation, which does not match our smoothing distribution of L_infty certification, while Salman's model is trained in a more adversarial fashion, and turns out to be more robust for L_infty distribution (even though it was still designed for the standard Gaussian distribution).\\n\\n2. For results of Cohen's method, Salman et al.'s paper (https://github.com/Hadisalman/smoothing-adversarial), Salman's blog (https://decentdescent.org/smoothadv.html) and Cohen's paper (https://arxiv.org/pdf/1902.02918.pdf) have inconsistent results. This may be due to randomness of the algorithm. For our paper, we reported numbers that came from our experiment with Cohen's github code.\\n\\n3. We are not sure what do you mean by 68.2. We work on L_infty certification and there is no result for L_inf certification in Salman et al.'s paper (they work on an extension of Cohen's, which it's about L_2 setting). We 'transfer' their result on \\\\ell_2 certification to L_infty setting using theorem 3.\\n\\nThanks for your other comments, which point out many improper descriptions and are helpful for us to revise our work.\"}",
"{\"title\": \"Thanks for your review. However we respectfully disagree with many of your comments.\", \"comment\": \"Thank R#1 for your time and comments. We do respectfully disagree with many of your comments, and think they are based on misunderstanding. We hope our response can help clarify the issue.\\n\\nWe also note an independent and concurrent submission to ICLR that overlaps with our work in many ways. \\n\\\"A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES (https://openreview.net/forum?id=SJlKrkSFPH)\\\". Because of the high overlap with our work, we strongly encourage these two works can be jointly considered and receives calibrated scores.\", \"question\": \"''In the light of previous arguments, I don't think the choice of Eq. (9) or Eq. (10) is well motivated. Why not smooth it with a cube of appropriate radius''\", \"reply\": \"This is just a motivating example for illustration. Cohen's work derives a bound with worst case achieved by linear classifier and the space of classifier we and they concern is very general (that includes almost all nonlinear real-word case and of course the sphere-based classifier)\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a new method for adversarial certification using non-Gaussian noise. A new framework for certification is proposed, which allows to use different distributions compared to previous work based on Gaussian noise. From this framework, a trade-off between accuracy and robustness is identified and new distributions are proposed to obtain a better trade-off than with Gaussian noise. Using these new distributions, they re-certify models obtained in previous work.\\n\\nI am hesitating between a weak reject and a weak accept. The theoretical results are interesting, showing a clear trade-off between robustness and accuracy with a new lower bound and deriving better smoothing distributions. However, the experimental results are lacking, and do not support much the proposed method. Training with this new distribution would have been a natural experiment given the argument. Moreover, the results for L_inf are partial and it would be expected to have some results for ImageNet as claimed in the introduction. I would have given an accept if the previous points had been addressed and I feel that with some more work on it, it would become an excellent paper.\", \"main_arguments\": \"\", \"my_main_concern_is_about_the_experiments\": \"Why were Cohen et al.\\u2019s models used instead of Salman et al.\\u2019s? Salman et al.\\u2019s have achieved better certified accuracy under the L_2 norm so it would only seem natural to use their model.\", \"about_the_main_results\": \"there seems to be a discrepancy between the results reported for Cohen et al. and the original paper for both CIFAR-10 and ImageNet L2 certification. Also, the reported certified accuracy for Salman et al.\\u2019s model for L_inf on CIFAR-10 reported in the original paper is 68.2 at 2/255, which is very far from the 58 in Table 3. What is the reason for these differences?\", \"minor_comments\": \"In the third paragraph, it is claimed that L_inf attacks are a stronger and more relevant type of attacks than L_2 attacks. These two different objectives cannot be compared in those terms.\\nDefenses such as adversarial training have not been \\u201cbroken\\u201d as claimed in section 2 in the sense that the claims made in the original paper still hold true. The term broken is used for defenses in which the claimed accuracy against stronger attacks were found to be much lower than what was claimed in the original paper.\\nIt is claimed that \\u201cif ||z||_inf is too large to exceed the region of natural images, the accuracy will be obviously rather poor\\u201d; however, the common practice is to clip to the input space bounds. How would that affect the method?\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": \"In the first paragraph, Goodfellow et al., 2015 is cited, however, papers on adversarial attacks were published earlier than that such as Szegedy et al., 2014 or Biggio et al., 2013.\\nVershynin, 2018 is cited about the distribution of a gaussian in high-dimensional spaces. 
However, this is a very well known result and does not need any citation (or if any, Bellman, 1961).\", \"typo_after_equation_4\": \"||f||_{L_p}\\nTypo in \\u201cBlack-box Certification with Randomness\\u201d paragraph: \\u201cby convovling\\u201d\\nTypos in Table 2.: the columns 2.0 to 3.5 are mislabeled\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThis paper investigates the choice of noise distributions for smoothing an arbitrary classifier for defending against adversarial attacks. The paper focuses on the two major adversaries: \\\\ell_2 adversaries and \\\\ell_\\\\infty adversaries. Theorem 1 quantifies the tradeoff between the choice of smoothing distribution which (1) has clean accuracy close to the original classifier and (2) promotes the smoothness of smoothed classifier (and hence adversarial accuracy). For the \\\\ell_2 adversary, the paper argues that Gaussian distribution is not the right choice, because the distribution is concentrated on the spherical shell around the x. Instead, the authors propose using a new family of distributions, with the norm square (p_{|z|_2^2}) following the scaled \\\\chi^2 distribution with degree d-k (Eq. 8). This allows an extra degree of freedom, and setting k=0 recovers the Gaussian distribution. For \\\\ell_\\\\infty perturbations, the paper suggests another family of distributions combining the \\\\ell_2 and \\\\ell_\\\\infty norm (Eq. 9), and argues that it outperforms the natural choice of \\\\ell_\\\\infty norm-based distributions (Eq. 10).\\n\\nI think the paper should be rejected because (1) For \\\\ell_2 perturbations, there is no major difference between this new family of distributions (d-k \\\\chi^2) and a Gaussian with different variance. (2) For \\\\ell_\\\\infty distributions, the motivation of mixed norm distributions (Eq. 9) over \\\\ell_\\\\infty based distributions (Eq. 10) is not very clear. (3) The experimental evidence is also weak (see below).\", \"main_arguments\": \"1. The distribution of the norm \\\\|z\\\\|_2 in Eq. (8) would be concentrated on a thin spherical shell of radius about \\\\sqrt{d-k}\\\\sigma. As the Gaussian distribution with standard deviation \\\\sigma' is supported on a shell of radius about \\\\sqrt{d} \\\\sigma', for each (k,\\\\sigma) in the family of Eq. 8, there is an equivalent Gaussian with appropriate \\\\sigma' (Theorem 3 now just compares the radius of the spherical shell). Therefore, I don't see the benefit of this extra degree of freedom of k: the noise distribution is again a \\\"soap bubble\\\" of a different radius. Thus, a grid search over \\\\sigma' for a Gaussian should be the same as a grid search over (k,\\\\sigma) in Eq. 8.\\n\\nEven the experimental experiments are a marginal improvement over Cohen et al. I don't see why the value of (k,\\\\sigma) was not provided in Table 1 and only \\\\sigma was provided. Also, the table of Cohen et al. was only calculated for specific values of \\\\sigma for Gaussian distributions (0.12, 0.25, 0.5, 1.00). For a fair comparison, comparable values of \\\\sigma's must be calculated, and then the best choice should be selected. \\n\\n2. In the light of previous arguments, I don't think the choice of Eq. (9) or Eq. (10) is well motivated. \\nWhy not smooth it with a cube of appropriate radius? Also, not enough experimental details are provided for Table 3. Salman et al. (2019) reports the accuracy of 68.2% for \\\\ell_infty perturbations (Table 3, Salman et al. (2019)), whereas the value reported in your Table 3 for at the same radius is 58%. Is it a typo? 
In any case, the values reported for the proposed model in Table 3 are only a marginal improvement over Figure 1 (left) in Salman et al. (2019), just going by the trivial \\\\ell_2 to \\\\ell_\\\\infty certificate.\", \"other_areas_for_improvement\": \"1. The paper contains numerous grammatical errors, confusing statements, and nonstandard phrases. For example: (i) more less robust, (ii) black start, (iii) pointy points, etc. I suggest that the authors spend more time clarifying their manuscript.\\n\\n2. The paragraph starting with \\\"Trade-off between Accuracy and Robustness\\\": I think this paragraph should be reworded for clarification. It is not robustness but rather the lack thereof -- say, sensitivity.\\n\\n3. On p.5, why was the toy classifier sphere-based? The toughest classifier for Gaussian smoothing (the one achieving the lower bound for Gaussian smoothing) is actually a linear classifier.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper introduces an improvement to the randomized smoothing analysis in Cohen et al. (2019), using Lagrangian relaxation to achieve a more general lower bound. Using this, it considers different adversarial smoothing distributions that yield some increase in certified adversarial accuracy.\", \"overall_assessment\": \"While the Lagrangian relaxation idea is interesting and could yield interesting follow-up work, the paper is sloppy in several respects and needs to be tightened before it can be considered for publication.\", \"key_issues\": \"1. Proof of main theorem (strong duality) is incorrect. Likely the statement itself is also incorrect. Fortunately the most important direction (lower bound) is still true, so this isn't a fatal flaw to the approach.\\n2. The paper makes several references (in italics) to a \\\"fundamental trade-off between accuracy and robustness\\\". But a fundamental trade-off means that *any* method that attains good accuracy must sacrifice robustness and vice versa; this requires a \\\"for all\\\" statement, i.e. a lower bound. All the paper shows is that the *particular upper bound* exhibits a trade-off (and even then, the notions of \\\"accuracy\\\" and \\\"robustness\\\" are merely interpretations of quantities in the bound; it's not clear why the robustness term in particular is tied to more standard notions of robustness).\\n3. The justification for why the particular smoothing distributions are good ideas is sketchy.\\n\\nI elaborate on 1 and 3 below. Addressing 1-3 effectively will improve my score.\\n\\n#1 (main theorem is incorrect): Claim 3 in the appendix is wrong. The fact that (delta', f') outperforms (delta-bar, f-bar) with respect to lambda* does not imply that (delta', f', lambda*) is a better solution to the primal problem, because we must take max over lambda and the maximizing lambda need not be lambda*. In particular if f' doesn't satisfy the constraint we would instead take lambda to infinity.\\n\\n#3 (sketchy justification): The paper justifies a smoothing distribution that concentrates more mass around the center as follows: \\\"This phenomenon makes it problematic to use standard Gaussian distribution for adversarial certification, because one would expect that the smoothing distribution should concentrate around the center (the original image) in order to make the smoothed classifier close to the original classifier (and hence accurate).\\\" I don't see why we should want more mass near the center---in the limit as we move all the mass towards the center and get the original classifier, our certified bound will be terrible, so it's not clear why moving in that direction should be expected to help. Indeed, the experimental gains are minimal (1 to 3 percentage points) and on methods that were not carefully tuned, so one could imagine that the baseline method could be improved by that much just with careful tuning.\\n\\nI similarly didn't understand the justification for the mixed L-inf / L-2 distribution for L-infinity verification. 
The main justification was \\\"The motivation is that this allows us to allocate more probability mass along the \\u201cpointy\\u201d directions with larger`\\u221enorm, and hence decrease the maximum distance term max \\u03b4\\u2208B`\\u221e,rDF(\\u03bb\\u03c00\\u2016\\u03c0\\u03b4).\\\" This is at the very least too brief for justifying the main experimental innovation in the paper (here at least the empirical improvements are bigger, although still not huge).\", \"minor_but_related\": \"Why is the x-axis in Figure 4 so compressed? This is also in a regime where all 3 methods fail to certify so not clear it's meaningful.\", \"writing_comment\": \"Change some of the Theorems to Propositions. Theorems should be for key claims in paper (there shouldn't be 4 of them in one 8-page paper).\"}",
"{\"comment\": \"Thanks a lot for your questions.\", \"first_question\": \"We use Salman et al\\u2019s model for Linfty certification because we found Salman's model performs better for L_infty certification.\\nThis is likely because Cohen\\u2019s model is trained with Gaussian noise data augmentation, which does not match our smoothing distribution of L_infty certification, while Salman\\u2019s model is trained in a more adversarial fashion, and tends out to be more robust for L_infty distribution (even it was still designed for the standard Gaussian distribution).\", \"second_question\": \"We agree that we may get better performance by directly training on our proposed distribution. It is an interesting direction that we plan to explore in future work.\", \"title\": \"Thanks for your questions\"}",
"{\"comment\": \"Thanks for the clarification! This makes sense.\", \"some_other_questions\": \"1) Why did you use Cohen et al.'s models for l2 certification, but Salman et al.'s models for linfty certification? What if you use Salman et al.'s models for l2, and Cohen et al.'s for linfty?\\n\\n2) It would seem natural to train on your proposed distribution as well, and one would perhaps expect better performance after doing so, compared to using pre-trained models trained on Gaussian noise or SmoothAdv. Do you have those results?\\n\\nThanks! Hope to hear back from you soon :)\", \"title\": \"Thanks for the clarification; other questions\"}",
"{\"comment\": \"Thanks for your concern! We want to clarify that actually we don't need to get the exact optimal lambda because *any* lambda could give a valid confidence lower bound. Besides, the optimization is smooth since the derivative of lower bound $D$ w.r.t. lambda is actually bounded by 2 with some algebra. It's our negligence of omitting these details :( The search space is chosen heuristically, which is good enough (the optimization of lambda can be shown to convex with only one minimal point), so we just don't explore more on this. We will update relevant details as soon as the update of pdf file is allowed :)\", \"title\": \"Some clarification\"}",
"{\"comment\": \"Dear authors,\\n\\nThanks for a very interesting paper. I might have missed this in the paper, but how did you choose \\\\lambda_start, \\\\lambda_end, and h for your algorithm 1 and 2? Additionally, what is the guarantee you provide for those choices? For example, how do you prevent the possibility that the best \\\\lambda is outside your interval, or the possibility that the function in lambda is very nonsmooth and h is too big in comparison?\\n\\nThanks again, and looking for your reply :)\", \"title\": \"How to choose \\\\lambda_start, \\\\lambda_end, as well as the increment h?\"}"
]
} |
SyxBgkBFPS | Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization | [
"Hao Liu",
"Richard Socher",
"Caiming Xiong"
] | Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer on sparse reward tasks, which leads to poor sample efficiency during training. In this work, we propose a guided adaptive credit assignment method to perform effective credit assignment for policy gradient methods. Motivated by entropy regularized policy optimization, our method extends previous credit assignment methods by introducing a more general guided adaptive credit assignment (GACA). The benefit of GACA is a principled way of utilizing off-policy samples. The effectiveness of the proposed algorithm is demonstrated on the challenging \textsc{WikiTableQuestions} and \textsc{WikiSQL} benchmarks and an instruction following environment. The task is generating action sequences or program sequences from natural language questions or instructions, where only final binary success-failure execution feedback is available. Empirical studies show that our method significantly improves the sample efficiency of state-of-the-art policy optimization approaches. | [
"credit assignment",
"sparse reward",
"policy optimization",
"sample efficiency"
] | Reject | https://openreview.net/pdf?id=SyxBgkBFPS | https://openreview.net/forum?id=SyxBgkBFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"UPsAqRDqP-",
"HkxCF407jB",
"SJl_u4RmjB",
"rJxO8NC7ir",
"HJgXnWk0Yr",
"rkeNx7jTFB",
"HJxsdYBcYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798725046,
1573278854277,
1573278831693,
1573278800211,
1571840427329,
1571824363868,
1571604850899
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1505/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1505/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1505/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a policy gradient algorithm related to entropy-regularized RL, that instead of the KL uses f-divergence to avoid mode collapse.\\n\\nThe reviewers found many technical issues with the presentation of the method, and the evaluation. In particular, the experiments are conducted on particular program synthesis tasks and show small margin improvements, while the algorithm is motivated by general sparse reward RL.\\n\\nI recommend rejection at this time, but encourage the authors to take the feedback into account and resubmit an improved version elsewhere.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Initial response to R3\", \"comment\": \"Thank you for your time, detailed feedback, and a clear summary of our contribution.\\nWe are happy to hear you found this work valuable. \\nWe answer most of the said points below. It would be very helpful to hear your thoughts on our responses so that we can make the appropriate modifications to the paper. \\n\\n\\n1. Is there an experiment demonstrate f-divergence performs better than KL-divergence?\\n \\n GACA w/o AG is equivalent to replacing f-divergence with KL-divergence in GACA. Table 1. shows that GACA > GACA w/o AG > MAPO on both program synthesis benchmarks, this result demonstrates that f-divergence(adaptive gradient estimation) is better KL-divergence. \\n\\n\\n\\n2. Does dropping out zero-reward trajectory buffer have an impact on the performance?\\n \\n Intuitively, dropping out zero-reward trajectory buffer reduces the number of samples used to estimate gradient, thus it hurts the minimization of f-divergence and downgrades the performance. Empirically, we found that dropping out zero-reward buffer downgrades the performance of GACA but still better than MAPO. This result further validates the value of this work \\u2014 enable reusing off-policy samples by efficient credit assignment is important. \\n\\n\\n3. Does separate buffer help baselines?\\n \\n Separate buffer doesn\\u2019t help baselines and is not necessary because mathematically baselines method can only utilize high-reward trajectories to compute gradient, as shown at the beginning of Section 4 and Appendix E. \\n We use separate buffer because GACA can use both high-reward and zero-reward trajectories, naturally, we can use stratified sampling to estimate unbiased and low variance gradient. \\n\\n\\n4. Typos\\n\\n\\n Thank you for pointing out typos, we have improved the presentation of the paper. The updated version of the paper is available now. Again, thank you for reading the paper.\"}",
"{\"title\": \"Initial response to R2\", \"comment\": \"Thank you for your time and detailed feedback.\\nWe are happy to hear you found this work valuable, and understand your concerns. We are confident that, following the points of clarification highlighted by you, we can resolve any ambiguities in the paper, and thank you for helping make the paper stronger as a result. We answer most of the said points below. It would be very helpful to hear your thoughts on our responses so that we can make the appropriate modifications to the paper. \\nWe hope that you will be willing to consider revising your assessment in light of the clarifications. \\n\\n\\n1. Typos in the paper, move algorithm to main paper, and other presentation suggestions\\n\\n\\n Thanks for pointing out typos in the submission, we have moved the main algorithm from Appendix to main paper, adjust the organization, and improved the presentation of the paper a lot. The updated version of the paper is available now. Again, thank you for reading the paper. \\n\\n\\n2. Why using two replay buffers \\\"leads to a better approximation\\\"\\n\\n\\n We acknowledge that this sentence is a little bit confusing. What we mean is GACA enables using both high-reward and zero-reward trajectories, which leads to a higher sample efficiency. While previous methods either only reply on on-policy trajectories(e.g. REINFORCE, MML, etc), or use a buffer to save high-reward trajectories for replaying(e.g. MAPO). \\n\\n\\n4. Motivations behind using inverse tail probability to approximate f-divergence \\n\\n\\n By using f-divergence, we reveal the connections between various previous credit assignment methods, for example, IML and RAML both minimize a reverse KL-divergence between policy and a target distribution, MAPO minimizes KL-divergence, etc. Different divergence measures lead to a different approximation of policy to target distribution[1], for example, KL-divergence often leads to mode-collapse. We want the policy distribution to approximate/cover the target distribution as good as possible, thus we utilize inverse tail probability technique which we found performs the best. \\n\\n\\n [1] Yingzhen Li, Richard E. Turner. R\\u00e9nyi Divergence Variational Inference. Advances in Neural Information Processing Systems(NeurIPS), 2016.\\n\\n\\n\\n5. The claim that GACA recovers all the mentioned methods as special cases are questionable\\n\\n\\n Your concern about REINFORCE and RAML as special cases of GACA can be resolved if you recall that for simplicity and without loss of generality, we assumed that reward is 1 for successful task completion trajectories and 0 otherwise, this follows the notations convenience in [1, 2]. In the meanwhile, we do take your suggestions on improving the presentation of the paper that we have updated Appendix E with improved presentation of how does GACA recover existing credit assignment methods. Thank you for reading our Appendix.\\n\\n\\n [1] Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, and Ni Lao. Memory augmented policy optimization for program synthesis with generalization. Advances in Neural Information Processing Systems(NeurIPS), 2018.\\n \\n [2] Kelvin Guu, Panupong Pasupat, Evan Liu, and Percy Liang. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1051\\u20131062, Vancouver, Canada, July 2017.\"}",
"{\"title\": \"Initial response to R1\", \"comment\": \"Thank you for your time and detailed feedback.\\nWe are glad to hear you found this work interesting and valuable, and understand your concerns. We are confident that, following the points of clarification highlighted by you, we can resolve any ambiguities in the paper, and thank you for helping make the paper stronger as a result. \\nWe answer most of the said pints below. It would be very helpful to hear your thoughts on our responses so that we can make the appropriate modifications to the paper. \\nWe hope that you will be willing to consider revising your assessment in light of the clarifications.\\n\\n\\n1. Evaluation of the proposed method on other domains\\n\\n We would like to point out GACA can be useful in other challenging tasks such as combinational optimization and structured prediction where credit assignment from binary feedback remains a major challenge. We want emphasize that one of the main purposes of this paper is to introduce the method of guided adaptive credit assignment. As such, the purpose of this paper is not to beat every single benchmark but to show the benefit of this framework. In particular, we demonstrate the effectiveness of GACA on two challenging program synthesis benchmarks, to our best knowledge, this work is the current state-of-the-art method that uses only binary supervision and outperforms previous methods by a large margin. We leave the investigation of our method on other domains for future work. \\n\\n\\n2. Results explanation/significance\\n\\n\\n GACA outperforms recent state-of-the-art methods MeRL, BoRL, and MAPO by a large margin, on a variety of tasks\\u2014including the challenging WikiSQL and WikiTable, as shown in Table 1, 2, 3. To our best knowledge, GACA is by far the state-of-the-art method on these benchmarks using only binary feedback. GACA is easy to implement and is a generalization of various credit assignment methods(MAPO MML, EML, RAML, and REINFORCE), we want to emphasize that GACA is general and can be further improved by combining it with techniques in other methods to further boost performance, such as meta-learning proposed in MeRL.\\n\\n\\n\\n3. More related work on program synthesis\\n\\n\\n We have included several program synthesis papers in related work.\\n\\n\\n\\n4. Why use two-buffer estimation in Eq. 13\\n\\n\\n Because GACA enables reusing all the past trajectories while previous methods only use high-reward trajectories, two-buffer estimation naturally arises here as a result of using stratified sampling to obtain unbiased and low variance gradient. w_b and w_c are calculated via stratified sampling, refer to Eq(13) for details. \\n\\n\\n5. What\\u2019s the performance of GACA if f-divergence is replaced with KL-divergence?\\n\\n\\n If KL-divergence is used, then GACA reduces to GACA w/o AG which means GACA without adaptive gradient estimation. From experimental results(e.g. Table 1), we can see that GACA w/o AG greatly outperforms baselines on both benchmarks.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"The authors formulate the credit assignment method as minimizing the divergence between policy function and a learned prior distribution. Then they apply f-divergence optimization to avoid the model collapse in this framework. Empirical experiments are conducted on the program synthesis benchmark with sparse rewards.\", \"The main contribution of this paper is applying f-divergence optimization on the program synthesis task for credit assignment.\", \"One of my concerns is that the experiment section is in a limited domain to argue it is a broad algorithm for credit assignment. The paper will be stronger if the comparison is applied in a distant domain like goal-based robot learning etc. With some experiments on a different domain, the paper will be more convincing.\", \"The improvement/margin in program synthesis task needed to be explained well, is the margin significant enough?\", \"The paper could discuss more on related papers on program synthesis in the related work section as the main experiment is in this work.\", \"The authors claim that the two-buffer estimation is better and lead to better gradient estimation, but it is not demonstrated empirically or theoretically. It could be better if the ablation study is conducted in the experiment. Or the author could provide a theoretical analysis of why equation (13) is better. Moreover, the investigation of different choices of $w_b$ and $w_c$ is necessary.\", \"Another study needed is the investigation of different divergences; the work will be stronger if a KL divergence version is compared. Otherwise, it is not clear how much the f-divergence will contribute to the performance.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes guided adaptive credit assignment (GACA) for policy gradient methods with sparse reward.\\n\\nGACA attacks the credit assignment problem by\\n1) using entropy regularized RL objective (KL divergence), iteratively update prior \\\\bar{\\\\pi} and \\\\pi_\\\\theta;\\n2) generalizing KL to f-divergence to avoid mode seeking behaviour of KL;\\n3) using 2 tricks to estimate the gradient of f-divergence to update \\\\pi_\\\\theta, a) modified MAPO (Liang et al., 2018) estimator (using two buffers), b) replacing rho_f by the inverse of tail probability (Wang et al., 2018).\\n\\nExperiments of program synthesis and instruction following are conducted, to show the proposed GACA outperform competitive baselines.\\n\\nAlthough the experimental results look promising, I have many concerns with respect to this paper as follows.\\n\\n1. The organization is bad. The main algorithm has been put into the appendix. It should appear in the paper.\\n\\n2. There are too many typos and errors in the paper and derivations, which quite affected reading and understanding.\", \"for_example\": \"in Eq. (6), what is z \\\\sim Z? Should be z \\\\in Z? It also appears in many other places.\\nin Eq. (7), there should not be \\\\sum_{z \\\\in Z} here.\\nProof for Prop. 1, I cannot really understand the notations here. Please rewrite and explain this proof. (I can see it follows Grau-Moya et al., 2019, but the notations here are not clear.)\\nin Eq. (11), \\\\bar{\\\\pi} / \\\\pi_\\\\theta is used, but in Eq. (12), \\\\pi_\\\\theta / \\\\bar{\\\\pi} appeared, which one is correct? While in the proof for Lemma 2, it is \\\\bar{\\\\pi} / \\\\pi_\\\\theta. And in Alg. 1 it is \\\\pi_\\\\theta / \\\\bar{\\\\pi}. Please make this consistent.\\nTypos, like \\\"Combining Theorem 1 and Theorem 2 together, we summarize the main algorithm in Algorithm 1.\\\" in the last paragraph of p6. However, they appeared as Prop. 1 and Lemma 2. Please improve the writing.\\n\\n3. The mutual information argument Eq. (9) seems irrelevant here. (It follows Grau-Moya et al., 2019, but the notations in the proof are bad and I cannot understand it). Whether the solution is mutual information or not seems not helpful for getting better credit assignment. I suggest remove/reduce related arguments around Eq. (9) and (10), and make space for the main algorithm.\\n\\n4. The entropy regularized objective and the KL is kind of well known. Maybe reduce the description here. And the key point is Eq. (8), which lays the foundation of iteratively update \\\\bar{\\\\pi} and \\\\pi_\\\\theta. However, Eq. (8) is the optimal solution of KL Eq. (7). Is it also the optimal solution of f-divergence used in the algorithm? If it is, clearly show that. If not, then update \\\\bar{\\\\pi} in Alg. 1 is problematic. Please clarify this point.\\n\\n5. The 2 tricks used here for estimating the gradient of f-divergence with respect to \\\\pi_\\\\theta, i.e., modified MAPO estimator in Prop. 2, and inverse tail probability in Wang et al., 2018, seems quite important for the empirical performance.\\nHowever, motivation is not clear enough. First, why using two replay buffers \\\"leads to a better approximation\\\"? 
Any theory/intuition or experiment to support this claim? Second, why using inverse tail probability \\\"achieve a trade-off between exploration and exploitation\\\". It seems not obvious to see that. And also, explain why using this trick makes \\\"\\\\pi_\\\\theta adaptively coverage and approximate prior distribution \\\\bar{\\\\pi}\\\".\\n\\n6. The claim that GACA recovers all the mentioned methods as special cases are questionable. For example, as in E.1, \\\"by simply choosing \\\\rho_f as constant 1\\\", comparing Eq. (12) with the gradient of REINFORCE, there is a difference that REINFORCE has a reward term, but GACA does not have. Then why GACA reduces to REINFORCE? Also in E.5, the RAML objective seems wrong. There is no reward term here. Please check them.\\n\\nOverall, the proposed GACA method achieves promising results in program synthesis tasks. However, there are many concerns with respect to motivation and techniques that should be resolved.\\n\\n=====Update=====\\nThanks for the rebuttal. I keep my rating since some of my concerns are still not resolved. In particular, \\\"Eq. (8) is the optimal solution of KL Eq. (7). Is it also the optimal solution of f-divergence used in the algorithm?\\\" Eq. (8) looks not the same as the paragraph above Lemma 2 \\\"\\\\bar{\\\\pi} = \\\\pi_\\\\theta\\\" to me. If Eq. (8) is not the optimal solution of Eq. (11), the update in Alg. 1 is somewhat problematic and other better choices exist. Since Algorithm 1 explicitly uses f-divergence, I think at least this point should be clarified by the authors rather than my guess.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis work proposed an off-policy framework for policy gradient approach called\\nguided adaptive credit assignment (GACA) in a simplified setting of goal-oriented\\nentropy regularized RL.\\nGACA optimizes the policy via fitting a learnable prior distribution that using\\nboth high reward trajectory and zero reward trajectory to improve the sample efficiency. \\nThe experiments on sparse reward tasks such as WikiTableQuestions and WikiSQL\\ndemonstrate the effectiveness of the GACA, comparing with a set of advanced baselines.\", \"detailed_comments\": \"\", \"off_policy_learning\": \"The Environment dynamic is not considered. The trajectory\\nreward is determined by the initial state, goal, and the sequence of actions taken thereafter. The off-policy learning can be applied since the distribution of\\ninitial state, and goal is not affected by the policy. This reduces to a\\nweighted maximum likelihood problem.\", \"resolving_the_sparse_reward_issue\": \"In sparse reward tasks, many of trajectories have zero rewards, in order to utilize\\nthe zero reward trajectory (since in the weighted problem those samples have no\\ncontribution to the gradient). This work proposed to store the trajectories\\ninto two replay buffers and samples from both of them separately. \\nIntuitively, it is not clear to me why minimizing mutual information between z and\\nreward would help the learning. I am suspecting the reason is that mutual information brings non-zero gradient for zero reward trajectories (given zero-reward trajectories indeed helps the learning). \\nThe authors also claimed that KL divergence performs worse than f-divergence due to the mode seeking issue. Do the experiments in GACA w/o AG support this claim?\", \"ablation_study\": \"The authors claimed that using zero reward trajectory can help with sample efficiency.\\nI wonder what the performance would be if we drop the zero reward trajectory buffer if we have a reasonable high frequency to reach the high trajectory reward sequence. \\nIs it necessary to incorporate the zero reward trajectory? \\n\\nWhat is the exact formula of GACA w/o GP and GACA w/o AG? \\n\\nThe proposed method consists of three parts (GP, AG, and separate buffer. ) \\nTwo variants (w/o GP, w/o AG) of GACA is conducted in the ablation study. \\nHow does the GACA perform if we drop the separate buffer? What if we incorporate separate buffer for baselines. Does GP/AG play an essential role in performance improvement,\\ncomparing to a separate buffer?\", \"other_questions\": \"Since the sequence of actions is considered as a group, the performance\\nmay highly depend on the size of action space and horizon. \\nWhat is the size of the horizon of the tested problems? \\nWhat is the value of WB and WC in each experiment?\", \"minor\": \"There are many typos or grammar issues in this version. e.g.,\\nL 3, Page 4, learn-able prior\\nLast paragraph, page 3, \\\" as as a combination of expectations\\\", \\nPage, 15 \\\"is actually equals mutual\\\"\\nEq 23 -> 24\"}"
]
} |
r1eBeyHFDH | A Theory of Usable Information under Computational Constraints | [
"Yilun Xu",
"Shengjia Zhao",
"Jiaming Song",
"Russell Stewart",
"Stefano Ermon"
] | We propose a new framework for reasoning about information in complex systems. Our foundation is based on a variational extension of Shannon’s information theory that takes into account the modeling power and computational constraints of the observer. The resulting predictive V-information encompasses mutual information and other notions of informativeness such as the coefficient of determination. Unlike Shannon’s mutual information and in violation of the data processing inequality, V-information can be created through computation. This is consistent with deep neural networks extracting hierarchies of progressively more informative features in representation learning. Additionally, we show that by incorporating computational constraints, V-information can be reliably estimated from data even in high dimensions with PAC-style guarantees. Empirically, we demonstrate that predictive V-information is more effective than mutual information for structure learning and fair representation learning. Code is available at https://github.com/Newbeeer/V-information. | [
"computational constraints",
"mutual information",
"theory",
"usable information",
"shannon",
"predictive",
"new framework",
"information",
"complex systems",
"foundation"
] | Accept (Talk) | https://openreview.net/pdf?id=r1eBeyHFDH | https://openreview.net/forum?id=r1eBeyHFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Jx2yatJbi",
"S1lklsZcor",
"Sket3aNBoB",
"Bkx9phEroS",
"r1eqrvhf5S",
"ryl1OrQ0KS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798725019,
1573685991415,
1573371313448,
1573371074301,
1572157249699,
1571857767210
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1504/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1504/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1504/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"All reviewers unanimously accept the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the detailed answers\", \"comment\": \"Thank you for addressing all issues raised in a convincing and thorough manner and preparing a revised manuscript!\"}",
"{\"title\": \"Thank you for your review and suggestions\", \"comment\": \"Thank you for your review and suggestions\", \"q\": \"Reference Shannon and Weaver was published in 1963, not 1948. In Page 5, \\\"maybe not expressive\\\" should be \\\"may not be expressive\\\".\", \"response\": \"Thank you for the correction. We have fixed them.\", \"this_will_be_an_f_information_with_a_small_modification\": \"f[x] = N(ax+b, s), f[empty]=N(u, s) where a, b, u are parameters we can optimize. To check Eq 1 we can verify\\n\\n\\u201cThere exists f\\u2019 \\\\in F, f\\u2019[x] = P, f\\u2019[empty] = P\\u201d -> this can be achieved by choosing a=0, b=c, u=c \\n\\nIn fact, this quantity is equal to the R^2 coefficient (Proposition 1.5) \\u2014 a common measurement of dependence between two random variables. \\n\\nNote that many nice properties continue to hold, but without Eq. 1 F-information can be negative.\"}",
"{\"title\": \"Thank you for your review and suggestions\", \"comment\": \"Thank you for your review and suggestions.\", \"q\": \"When choosing function classes that allow for universal function approximation, would F-information degrade to Shannon information?\", \"response\": \"Yes, this is an expected and desirable property as in Proposition 1. Roughly speaking, if F contains every function and every probability measure \\u2014 there are no computational constraints \\u2014 then all information is usable, which is exactly what Shannon information measures. The statistical and computational burden however, makes this a poor design choice for many machine learning problems.\", \"fairness\": \"minimize F information between the learned representation and target (sensitive attributes) and maximize F information w.r.t input.\", \"information_bottleneck\": \"maximize F information between the learned representation and target (labels) and minimize F information w.r.t input.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents a generalization of classical definitions of entropy and mutual information that can capture computational constraints. Intuitively, information theoretic results assume infinite computational resources, so they may not correspond to how we treat \\\"information\\\" in practice. One example is public-key encryption. An adversary that has infinite time will eventually break the code so the decrypted message conveys the same amount of information (in a classical sense) as the plaintext message. In practice, this depends on computational time.\\n\\nThe authors' approach is to first restrict the class of conditional probability distribution p(Y|X) to a restricted family F that satisfies certain conditions. Unfortunately, the main condition in Def 1 that the authors assume is not natural and is only added to ensure that mutual information remains positive. However, putting this aside, the subsequent definitions that general entropy, conditional entropy, and mutual information are well-motivated. \\n\\nThe authors, then, show that many measures of \\\"uncertainty\\\" can be viewed as \\\"entropies\\\" under this generalized definition including the Mean Absolute Deviation and the Coefficient of Determination. \\n\\nThe overall framework can justify practices that we commonly use in machine learning, which would be justifiable using classical information. One important example is Representation Learning, which is a post-processing of data to aid the prediction task. According to classical information theory, this post-processing shouldn't help because it cannot add more information about the label Y than what was original available in X. Under the formulation presented in this paper, postprocessing can help if we keep in mind information about Y in X are hard to extract to begin with. \\n\\nIn terms of practical applications, the main advantage of the new definition is that F-information can be estimated from a finite sample, simply because F is a restricted set. However, this restriction helps compared to using state-of-the-art estimators for Shannon mutual information as shown in the experiments. \\n\\nFinally, the literature review section is quite excellent. \\n\\nI find the overall approach to be quite interesting and definitely worth publishing. The only suggestion I have is that the authors include immediately after Definition 1 a concrete example that illustrates it. For example, suppose that Y is a scalar and X is a noisy estimate of Y. Suppose we restrict F to the family of Gaussian distributions. That is, with side information x, f[x](y) = N(x, s). Without side information, f[empty](y) = N(u, s). The functions f are parameterized by u and s. \\nIs this a \\\"predictive family\\\"? To make sure I understand it correctly, can you please walk me through the Eq 1 for this particular example?\", \"some_minor_remarks\": [\"Reference Shannon and Weaver was published in 1963, not 1948.\", \"In Page 5, \\\"maybe not expressive\\\" should be \\\"may not be expressive\\\".\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\nThe paper introduces a framework for quantifying information about one random variable, given another random variable (\\u201cside information\\u201d) and, importantly, a function class of allowed transformations that can be applied to the latter. This matches the typical scenario in machine learning, where observations (playing the role of side information) can be transformed (with a restricted class of transformation-functions) such that they become maximally predictive about another random variable of interest (\\u201clabels\\u201d). Using this framework, the paper defines the notion of conditional F-entropy and F-entropy (by conditioning on an empty set). Interestingly, both entropic quantities are shown to have many desirable properties known from Shannon entropy - and when allowing the function class of transformations to include all possible models F-entropies are equivalent to Shannon entropies. The paper then further defines \\u201cpredictive F-information\\u201d which quantifies the increase in predictability about one random variable when given side information, under a restricted function-class of allowed transformations of the side information. Importantly, transformations of side information can increase predictive F-information (which is the basis for the notion of \\u201cusable\\u201d information), which is in contrast to the data processing inequality that applies to Shannon information and states that no transformation of a variable can increase predictability of another variable further than the un-transformed variable (information cannot be generated by transforming random variables). The paper highlights interesting properties of the F-quantities, most notably a PAC bound on F-information estimation from data, which gives reason to expect F-information estimation to be more data-efficient than estimating Shannon-information (particularly in the high-dimensional regime). This finding is confirmed by four types of interesting experiments, some of which make use of a modified version of a tree-structure learning algorithm proposed in the paper (using predictive F-information instead of Shannon mutual information).\\n\\nContributions\\ni) Proposal of a framework for measuring and reasoning about information that transformed random variables have about other random variables, when the class of transformation functions is restricted. Interesting properties are highlighted and corresponding proofs are given. Important conclusions to Shannon-information measures are drawn.\\n\\nii) PAC guarantees for estimating F-information quantities from data. A nice result that justifies some optimism about the scalability of F-information estimation.\\n\\niii) Modification of a tree-structure learning algorithm, and application to four types of experiments with comparisons against methods for estimating Shannon(-mutual)-information. \\n\\nQuality, Clarity, Novelty, Impact\\nThe paper is very well written, the motivation and main results are clear and connections to known measures for information in complex systems are drawn (which often appear as corner-cases, or unrestricted cases of F-information). 
I am not an expert on various information measures, thus I cannot fully judge the novelty of the framework (given that the central idea is fairly simple and quite elegant, the main work lies in the proofs and connections to other frameworks). However, I have not seen the framework being discussed in the machine learning literature before. I personally would rate the potential impact of the F-information framework as high because it addresses many problems that Shannon-(mutual-)information has (hard to estimate, generality means complete blindness against model-classes). The experiments in the paper already illustrate how F-information could be very useful for a range of ML problems that cannot be tackled by strong competitor methods based on Shannon-information estimation. My only criticism is that the paper does not clearly state current limitations and shortcomings and does not comment on the difficulties / potential problems with solving the variational problem that is part of the definition of (conditional) F-information. I currently vote and argue for accepting the paper, though my assessment is of medium confidence only, and I am happy to take issues raised by the other reviewers and the rebuttal into account. I have not checked the proofs in the appendix in great detail.\\n\\nImprovements\\ni) Please add a short section of current shortcomings and caveats, especially with regard to applying the methods in practice. \\n\\nii) Please comment on solving the variational optimization problem (the infimum) which is part of the definition of (conditional) F-information. In particular, are there any theoretical statements / bounds / etc. to be made for the case where the infimum is not found exactly - does the measure degrade gracefully or can small errors in this optimization lead to wildly varying/divergent F-information? From a practical point-of-view: how was this optimization done in the experiments (particularly when involving a neural network model), how much computational overhead did this optimization add (and how does it compare against other methods, e.g. in terms of wall-clock time or other reasonable metrics, the more the better)?\\n\\niii) This is a minor one and feel free to completely ignore it. The name F-information might easily get confused with the use of f-divergences, perhaps there is a better, more informative name. Also, while I personally like the term \\u201cusable\\u201d in the title, I\\u2019m not so sure about \\u201ccomputational constraints\\u201d - the latter somehow suggests that the method has small computational footprint, or can easily scale to different computational budget. Perhaps there is a way that more strongly indicates that this refers to restrictions on the model-/function-class (which the term \\u201cusable\\u201d does already to some degree admittedly).\\n\\n\\nMinor Comments\\na) Have you had any thoughts on how F-information could be used in a rate-distortion / information-bottleneck type framework for a theory of \\u201crelevant usable information\\u201d? This is probably beyond the scope of this paper, just out of curiosity.\\n\\nb) The paragraph above 3.3 almost sounds a bit like Shannon (and the data processing inequality) was wrong. 
I\\u2019d rather phrase this as a \\u201cno-free-lunch problem\\u201d - while the DPI and Shannon (mutual) information is very elegant, it is necessary to make further assumptions/restrictions (the function class of allowed transformations) to make more fine-grained statements and define more precise (but less general) informational-quantities tailored to the specific function class.\\n\\nc) When choosing function classes that allow for universal function approximation, would F-information degrade to Shannon information?\"}"
]
} |
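Reviewer #1's Gaussian example above suggests a concrete instance of the framework. The sketch below estimates predictive V-information under that family, f[x] = N(a*x + b, s) and f[empty] = N(u, s); the least-squares fit and toy data are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def gaussian_nll(y, mean, std):
    # Average negative log-likelihood of y under N(mean, std^2), in nats.
    return np.mean(0.5 * np.log(2 * np.pi * std ** 2)
                   + (y - mean) ** 2 / (2 * std ** 2))

def v_information_gaussian(x, y):
    # H_V(Y): best predictor with no side information, f[empty] = N(u, s).
    h_y = gaussian_nll(y, y.mean(), y.std())
    # H_V(Y|X): best member of the family f[x] = N(a*x + b, s); the
    # least-squares fit is the maximum-likelihood estimate of (a, b).
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    h_y_given_x = gaussian_nll(resid, 0.0, resid.std())
    # Predictive V-information is the reduction in V-entropy.
    return h_y - h_y_given_x

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 2.0 * x + rng.normal(scale=0.5, size=5000)
print(v_information_gaussian(x, y))  # approximately log(std(y) / 0.5) nats
```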
BJlVeyHFwH | On the Invertibility of Invertible Neural Networks | [
"Jens Behrmann",
"Paul Vicol",
"Kuan-Chieh Wang",
"Roger B. Grosse",
"Jörn-Henrik Jacobsen"
] | Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks. Invertible neural networks (INNs), however, provide several mathematical guarantees by design, such as the ability to approximate non-linear diffeomorphisms. One less studied advantage of INNs is that they enable the design of bi-Lipschitz functions. This property has been used implicitly by various works to design generative models, enable memory-saving gradient computation, regularize classifiers, and solve inverse problems.
In this work, we study Lipschitz constants of invertible architectures in order to investigate guarantees on stability of their inverse and forward mapping. Our analysis reveals that commonly-used INN building blocks can easily become non-invertible, leading to questionable ``exact'' log likelihood computations and training difficulties. We introduce a set of numerical analysis tools to diagnose non-invertibility in practice. Finally, based on our theoretical analysis, we show how to guarantee numerical invertibility for one of the most common INN architectures. | [
"Invertible Neural Networks",
"Stability",
"Normalizing Flows",
"Generative Models",
"Evaluation of Generative Models"
] | Reject | https://openreview.net/pdf?id=BJlVeyHFwH | https://openreview.net/forum?id=BJlVeyHFwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"wBjXJnBrhM",
"Hkei4tHisH",
"SylRsdHjir",
"HylpLOBioH",
"HkxxFvSoiH",
"HkeLVwHojS",
"rkxtlPrjsB",
"r1gig7Jf9r",
"rkxViW62KH",
"ryTvSq2Fr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724988,
1573767475088,
1573767334183,
1573767253138,
1573767032442,
1573766957806,
1573766897071,
1572102899469,
1571766683783,
1571755360986
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1503/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1503/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1503/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This submission analyses the numerical invertibility of analytically invertible neural networks and shows that analytical invertibility does not guarantee numerical invertibility of some invertible networks under certain conditions (e.g. adversarial perturbation).\", \"strengths\": \"-The work is interesting and the theoretical analysis is insightful.\", \"weaknesses\": \"-The main concern shared by all reviewers was the weakness of the experimental section including (i) insufficient motivation of the decorrelation task; (ii) missing comparisons and experimental settings.\\n-The paper clarity could be improved.\\n\\nBoth weaknesses were not sufficiently addressed in the rebuttal. All reviewer recommendations were borderline to reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of revision\", \"comment\": \"We thank the reviewers for their insightful and valuable comments. We have made several revisions to our paper which we summarize here:\\n\\n1) We corrected the bound on the local Lipschitz constants of the inverse mapping of the affine coupling blocks (see Table 1 and Appendix A.2.2).\\n\\n2) We restructured the experimental section based on four studied tasks (with a new section on classification):\\n---- Classification (Section 4.1): we show that invertible neural network (INN) classifiers can become unstable on CIFAR-10 and demonstrate implications for memory-efficient backpropagation.\\n---- Density estimation (Section 4.2): analysis of trained SOTA INN-based density models. Includes a new result on checking for non-invertible inputs of residual flows (Appendix E).\\n---- Generative modeling with adversarially trained INN models. Now includes sample evaluation using FID scores (Table 2), with a comparison to MLE-trained INN-models.\\n---- Decorrelation (Section 4.4): clarified the motivation.\\n\\nOverall we think this structure makes our experimental analysis clearer. The new classification results demonstrate that non-invertibility is a strong concern and emphasize the need to better understand the stability of invertible networks.\\n\\nLastly, we apologize for responding late in the discussion phase. We were working hard on running new experiments and thoroughly revising our manuscript.\"}",
"{\"title\": \"Response to R3 (comment 2)\", \"comment\": \"....continuation of comments:\", \"q\": \"The results would be more relevant if the architectures resembled the architectures used for invertible models used in the literature (e.g. GLOW) where we not only have coupling layers but they are interleaved with PLU linear flows.\\n---------------------\", \"a\": \"In our classification, generative modeling, and decorrelation experiments we only used shuffling permutations (squeeze layers, see Table 1) to make the design space of INNs smaller. Future work could analyze more building blocks like PLU-flows, i-ResNets [1] or MintNet [2] blocks.\\nLastly, we did analyze trained GLOW models that have PLU-flows in Section 4.2 by crafting non-invertible inputs. \\n---------------------\\n\\nFurthermore, we fixed the typos you spotted. \\n\\nThank you again for your thorough review, and we hope that you appreciate our revised manuscript.\\n\\n[1] Behrmann et al., \\u201cInvertible Residual Networks,\\u201d ICML 2019.\\n[2] Song et al., \\u201cBuilding Invertible Neural Networks with Masked Convolutions,\\u201d NeurIPS 2019.\"}",
"{\"title\": \"Response to R3 (comment 1)\", \"comment\": \"Thank you for your insightful review. We appreciate your concerns and respond to them below:\", \"q\": \"Why use decorrelation as opposed to density estimation?\\n---------------------\", \"a\": \"We do also have results related to density estimation. In Section 4.2 we analyze the invertibility of trained SOTA density models by crafting inputs. Furthermore, we included BPD and FID results for MLE-trained models in Section 4.3 (Table 2).\\n\\nHowever, we believe that decorrelation is a useful task to better understand the effects that influence the stability of INNs. While much energy has been invested in designing architectures and training settings for common tasks such as INN-based density estimation, some of these settings fail for other objectives, as we show with our decorrelation example. Furthermore, decorrelation is still related to density estimation, which is why we view this as an ablation study and a simple task to benchmark the influence of different architecture settings under a less standard objective.\\n\\nIn our revised manuscript we have moved the decorrelation results to the end of the experimental section (Section 4.4). Furthermore, we included classification results (Section 4.1) as other commonly-used examples and studied implications for memory-efficient backpropagation.\"}",
"{\"title\": \"Response to R2 (comment 2)\", \"comment\": \"...continuation of comments:\", \"q\": \"Finding non-invertible inputs for other popular INN models and non-image datasets\\n---------------------\", \"a\": \"We included a new result on searching for non-invertible inputs for trained residual flows [13], see Appendix E. As residual flows are based on i-ResNets [14], they are by design based on certain stability bounds. In line with the theory, we thus were not able to find strong examples of non-invertible inputs.\\nFurthermore, results on non-image datasets would be of interest but are less frequently used by the mainstream invertible net literature and thus out of the scope of this article.\\n---------------------\\n\\nThank you again for your thorough review and hope that you appreciate our revised manuscript.\\n\\n[1] Gomez et al., \\u201cThe Reversible Residual Network: Backpropagation without Storing Activations,\\u201d NeurIPS 2017 http://papers.nips.cc/paper/6816-the-reversible-residual-network-backpropagation-without-storing-activations\\n[2] Jacobsen et al., \\u201ciRevNet: Deep Invertible Networks,\\u201d ICLR 2018. https://arxiv.org/pdf/1802.07088.pdf\\n[3] Kolesnikov et al., \\u201cRevisiting Self-Supervised Visual Representation Learning,\\u201d CVPR 2019, https://arxiv.org/pdf/1901.09005.pdf\\n[4] Jacobsen et al., \\u201cExcessive Invariance Causes Adversarial Vulnerability,\\u201d ICLR 2019. https://arxiv.org/pdf/1811.00401.pdf\\n[5] Donahue & Simonyan, \\u201cLarge Scale Adversarial Representation Learning,\\u201d NeurIPS 2019, https://arxiv.org/pdf/1907.02544.pdf\\n[6] van de Leemput et al., \\u201cMemCNN: A Framework for Developing Memory Efficient Deep Invertible Networks,\\u201d ICLR 2018 Workshop. https://openreview.net/pdf?id=r1KzqK1wz\\n[7] van der Ouderaa & Worrall, \\u201cReversible GANs for Memory-Efficient Image-to-Image Translation,\\u201d CVPR 2019. http://openaccess.thecvf.com/content_CVPR_2019/papers/van_der_Ouderaa_Reversible_GANs_for_Memory-Efficient_Image-To-Image_Translation_CVPR_2019_paper.pdf\\n[8] Brugger et al., \\u201cA Partially Reversible U-Net for Memory-Efficient Volumetric Image Segmentation,\\u201d https://arxiv.org/pdf/1906.06148.pdf\\n[9] Putzky et al., \\u201ci-RIM applied to the fastMRI Challenge,\\u201d https://arxiv.org/pdf/1910.08952.pdf\\n[10] Toth et al., \\u201cHamiltonian Generative Networks,\\u201d https://arxiv.org/abs/1909.13789\\n[11] Hoogeboom et al., \\u201cInteger Discrete Flows and Lossless Compression,\\u201d https://arxiv.org/abs/1905.07376\\n[12] Song et al., \\u201cBuilding Invertible Neural Networks with Masked Convolutions,\\u201d NeurIPS 2019.\\n[13] Chen et al., \\u201cResidual Flows for Invertible Generative Modeling,\\u201d https://arxiv.org/abs/1906.02735\\n[14] Behrmann et al., \\u201cInvertible Residual Networks,\\u201d ICML 2019.\"}",
"{\"title\": \"Response to R2 (comment 1)\", \"comment\": \"Thank you for your insightful feedback. We have updated the paper (see summary of revisions) to incorporate your comments, which we discuss below.\", \"q\": \"Clarification of the decorrelation task: What exactly is controlled here that is not controlled in training an INN for, e.g., density estimation?\\n---------------------\", \"a\": \"At least in the given toy example (Appendix F), the decorrelation task offers both stable and unstable solutions which solve the objective perfectly. Thus, this task is controlled in the sense that there is a stable solution, which we hope to find. Interestingly, in many cases the INNs tend to choose the unstable solutions. On the other hand, for tasks like density estimation or classification we do not know if the task is solvable by stable mappings (there could be tradeoffs between stability and performance).\\n---------------------\"}",
"{\"title\": \"Response to R1\", \"comment\": \"Thank you for your positive feedback on our work! We hope that our added results confirm your opinion on the importance of our presented ideas.\\nAs you pointed out, we believe that this work will help researchers to better understand current INN design choices and improve future models. In our revision, we added classification experiments which further underline the importance of stability.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper points out invertible neural networks are not necessarily invertible because of bad conditioning. It shows some cases when invertible neural networks fail, including adding adversarial pertubations, solving the decorrelation task, and training without maximum likelihood objective (Flow-GAN). The paper also shows that spectral normalization improves network stability.\\n\\nI think this is a solid work. The main contribution is it points out a problem that is overlooked before, which can possibly explain some unstable behavior for training neural networks. The paper also has some study on various architectures, which sheds some light on the designing of invertible neural networks. I think this paper can be important for future researchers to design models and algorithms. \\n\\n===============\", \"update\": \"After reading other reviewer's comment I agree with other reviewers that the experimental section is problematic. It seems to be unrelated with the theoretical results proposed in this paper. I think currently the experiments only make a point that invertible networks can be non-invertible in practice. But the paper has large room to improve if it has\\n\\n1. A complete discussion on which invertible blocks / modeling tasks are easier to be non-invertible, and why (theoretically, and combine with direct experimental evidence)\\n2. A remedy (using additive coupling layer is not an acceptable one since it severely limits the modeling power)\\n\\nI still think posing the problem itself is important. Thus I will still give it an accept, but lower it to a weaker score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper analyses the numerical invertibility of analytically invertible neural networks (INN). The numerical invertibility depends on the Lipschitz constant of the respective transformation. The paper provides Lipschitz bounds on the components of building blocks for certain INN architectures, which would guarantee numerical stability. Furthermore, this paper shows empirically, that the numerical invertibility can indeed be a problem in practice.\\n\\nThis work is interesting and could be important to many researchers working with INNs. The worst case analysis and the corresponding table with Lipschitz bounds is useful. \\nHowever, I have some concerns regarding the experimental evaluation. \\n- Experiments in 4.1. nicely show that there exist non-invertible inputs for a GLOW model trained on CIFAR. But I wish the authors also considered other popular INN models and non-image datasets for this set of experiments (showing if this is also an issue in scenarios other than CIFAR/CELEBA + GLOW). \\n- Although the authors spend significant space in the main text and the appendix to motivate the experiments in 4.2, I cannot follow this motivation. For example, \\u201cdecorrelation is a simpler objective than optimizing outputs z = F(x) to follow a factorized Gaussian as in Normalizing Flows\\u201d. Why is this is simpler, and, more importantly, why would this be an argument? Another example is \\u201c\\u2026 this decorrelation objective offers a controlled environment to study which INN components steer the mapping towards stable or unstable solutions, \\u2026\\u201d. Why is this more controlled? What exactly is controlled here that is not controlled in training a an INN for, e.g., density estimation? \\nI am not sure if this set of experiments is any useful for determining whether numerical precision is actually problematic for posterior approximation with normalizing flows, density estimation, etc.\\n- the experimental sections is somewhat badly structured and makes it difficult to read. It is not clear if this paper is analysis-only or whether the authors propose a remedy. The authors write in the abstract and conclusion that they show how to guarantee invertibility for one of the most common INN architectures. After reading this, I would expect a designated experimental section which shows a fix. I suppose they refer to Additive blocks + Spectral Norm, discussed in 4.2.1. However, that reads more like a post-hoc insight (\\u201cit turns out that\\u2026\\u201d rather than \\u201cwe show how\\u201c). In short, the experiments section could be much better structured. \\n- The paper would be greatly improved, if the authors would propose how to tackle these numerical problems. I doubt that additive coupling is \\u201cone of the most common INN architectures\\u201d. It would be nice if the authors would conduct more extensive experiments and propose solutions for other building blocks. \\n- I expect at least a few experiments that quantify numerical instability with multiple different random seeds (for initialization etc.).\\n\\nFor these reasons I vote for rejection. 
\\nI think it would be advisable to rethink the goals of the experimental evaluation, come up with a better structure, and expand at several places. E.g. (i) expand 4.1 to other architectures and data, (ii) show how this is relevant in practice (e.g. posterior inference with NFs and density estimation) and how it questions published results (currently Sec. 5), and (iii) evaluate proposed solutions.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper claims that for invertible neural networks, mathematical guarantees on invertibility is not enough, and we also require numerical invertibility. To this end, the lipschitz constants/condition numbers of Jacobians of both the forward and inverse maps of invertible NNs based on coupling layers are examined mathematically and experimentally. The paper also displays cases that expose non-invertibility in these architectures via gradient-based construction of adversarial inputs, as well as a decorrelation benchmark task, and show that spectral normalization can be a remedy for stabilizing these flows.\\n\\nI think it\\u2019s a good point that we need to monitor the Lipschitz constant/bounds of both directions of these invertible functions. It\\u2019s true that the focus for stabilising NNs by bounding Lipschitz constants was always on the forward function, and for invertible functions we should also ensure that the inverse is numerically stable to compute.\\n\\nThe mathematical contribution of the paper is twofold - 1. deriving bounds on the lipschitz constants of the forward and inverse mapping of additive/affine coupling blocks 2. summarising known lipschitz bounds of forward and inverse mappings of other invertible layers (iResNet, neuralODE, invertible 1x1 convolutions etc). The main contribution lies in 1, and the derivation for the additive coupling block (volume preserving) is neat (although fairly straightforward), but the derivation for the affine coupling layer (NVP) is not useful nor insightful; they are local Lipschitz bounds (so require bounds on all intermediate activations, which is difficult as pointed out by the authors), and the numerical value of this bound was not used at all in relation to the numerical experiments - I imagine the bound is loose. Given that it seems difficult to find a tight global lipschitz bound, I think it would be more insightful to compute a lower bound to the lipschitz constant of the model (with fixed parameter values) by maximising the spectral norm of the Jacobian with respect to the inputs (or outputs if looking at the inverse map) - this will yield a lower bound by Lemma 3. This will be numerical, but more informative since it will give you an indication of where in the input space (or output space if looking at the inverse) there could be numerical instabilities. Also I think the bound on the local lipschitz constant of the inverse for the affine coupling block might be incorrect, because in A.1.1, the inverse map is F^{-1}(y)_I1 = y_I1, F^{-1}(y)_I2 = (y_I2 - t(y_I1))/g(s(y_I1)), so the scale and shift is s\\u2019(y_I1) := 1/g(s(y_I1)) and t\\u2019(y_I1):=- t(y_I1)/g(s(y_I1)), and hence I think this needs to be taken into account for computing the lipschitz bound of the inverse \\n\\nI have mixed feelings about the experimental section. In section 4.1, it is interesting to see that we can find inputs where trained flow models can show numerical non-invertibility, evident in the poor reconstructions. It would be a nice addition to investigate whether this is coming from the forward function or its inverse, by examining the norm of the Jacobian of F and F^{-1} at the input x_delta and output F(x_delta) respectively. \\n\\nHowever, the decorrelation task introduced in section 4.2 is puzzling. 
I don\\u2019t understand why for these invertible models, you are investigating invertibility for parameter values trained to decorrelate, as opposed to parameter values used in the usual task of density estimation with flows (or any other standard application of invertible NNs). The two reasons given in the paper are that 1) decorrelation is a simpler task and 2) it allows both stable and unstable transforms as solutions, but these are not convincing. Point 2) holds for flow-based density estimation as well, and regarding point 1), density estimation is the task we usually care about when using invertible NNs, and this is also computationally plausible/tractable, whereas even if decorrelation is a simpler task, it\\u2019s not a task that users of invertible NNs are interested in. It is good to know that these invertible NN architectures CAN admit values that are numerically non-invertible, but I would be much more interested to know whether this actually holds when they have been trained for flow-based density estimation. I\\u2019m not sure whether the experimental results on models trained for the decorrelation task are useful, because a model that is stable when trained for the decorrelation task may be unstable when trained for flow-based density estimation and vice versa. The observation that spectral normalization can help address numerical instability is useful, but from the perspective of someone who wants to use these invertible NNs for density estimation, I would like to know what is the sacrifice in expressivity/validation performance (if any) when using spectral normalization in these invertible architectures. Also, the results would be more relevant if the architectures resembled the architectures used for invertible models used in the literature (e.g. GLOW) where we not only have coupling layers but they are interleaved with PLU linear flows.\\n\\nIn section 5, the result that Flow-GANs can be numerically non-invertible is more relevant, and it is useful to know that spectral normalisation can help resolve this issue, but again it would be useful to quantify whether this comes at the cost of the quality of generated samples (Figure 3 shows several samples, but a more thorough quantitative & qualitative comparison would be welcome). Also regarding the point about likelihood in Section 5, where the authors state \\u201cit cannot be trusted as true likelihood due to lack of invertibility\\u201d, I think it should be emphasised that this point holds specifically for flow-GANs where for F: z -> x, you need a numerically accurate F^{-1} to compute the density, but for standard flow-based density estimation where F:x -> z, you never need to compute the inverse for computing the likelihood, hence if F has a small lipschitz constant then the likelihood will be accurate, regardless of whether the inverse is numerically stable or not.\\n\\nOverall I believe the experimental section can be largely improved, and given that the motivation of the paper is nice and the paper is clearly written and nicely presented, it would be a shame to leave the experiment section as it is.\\n\\nMinor typos/Qs:\", \"p2\": \"this problems <- this problem\", \"p8\": \"and with maximum likelihood (ML) - should this be removed?\", \"p13\": \"t(x_I2) <- t(x_I1)\"}"
]
} |
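Several of the reviews above probe when an analytically invertible affine coupling layer becomes numerically non-invertible. The toy sketch below (a random 4-d example with made-up scale/translation networks, not the paper's architecture or diagnostics) measures how strongly the inverse map amplifies a small output perturbation, a crude proxy for the local Lipschitz constant of F^{-1} that the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2)).astype(np.float32)
W2 = rng.normal(size=(2, 2)).astype(np.float32)

def scale_and_shift(x1, gain):
    # Toy scale/translation networks; `gain` exaggerates the log-scale so
    # the instability discussed above becomes visible.
    h = np.tanh(x1 @ W1)
    return gain * (h @ W2), h @ W2

def coupling_fwd(x, gain):
    x1, x2 = x[:2], x[2:]
    s, t = scale_and_shift(x1, gain)
    return np.concatenate([x1, x2 * np.exp(s) + t])

def coupling_inv(y, gain):
    y1, y2 = y[:2], y[2:]
    s, t = scale_and_shift(y1, gain)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=4).astype(np.float32)
for gain in (1.0, 5.0, 15.0):
    y = coupling_fwd(x, gain)
    # Perturb the output slightly and see how much the inverse amplifies it.
    y_eps = y + 1e-4 * rng.normal(size=4).astype(np.float32)
    amp = np.max(np.abs(coupling_inv(y, gain) - coupling_inv(y_eps, gain))) / 1e-4
    print(gain, amp)  # the amplification grows roughly like exp(|s|)
```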
SkeNlJSKvS | Shallow VAEs with RealNVP Prior Can Perform as Well as Deep Hierarchical VAEs | [
"Haowen Xu",
"Wenxiao Chen",
"Jinlin Lai",
"Zhihan Li",
"Youjian Zhao",
"Dan Pei"
] | Using powerful posterior distributions is a popular technique in variational inference. However, recent works showed that the aggregated posterior may fail to match unit Gaussian prior, even with expressive posteriors, thus learning the prior becomes an alternative way to improve the variational lower-bound. We show that using learned RealNVP prior and just one latent variable in VAE, we can achieve test NLL comparable to very deep state-of-the-art hierarchical VAE, outperforming many previous works with complex hierarchical VAE architectures. We hypothesize that, when coupled with Gaussian posteriors, the learned prior can encourage appropriate posterior overlapping, which is likely to improve reconstruction loss and lower-bound, supported by our experimental results. We demonstrate that, with learned RealNVP prior, ß-VAE can have better rate-distortion curve than using fixed Gaussian prior. | [
"Variational Auto-encoder",
"RealNVP",
"learnable prior"
] | Reject | https://openreview.net/pdf?id=SkeNlJSKvS | https://openreview.net/forum?id=SkeNlJSKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"U3sPdqwc7",
"ryxeWqalqB",
"rkxnJ9o0YB",
"ryeDf7PxFB"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724955,
1572030968155,
1571891683667,
1570956046688
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1502/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1502/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1502/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper provides an interesting insight into the fitting of variational autoencoders. While much of the recent literature focuses on training ever more expressive models, the authors demonstrate that learning a flexible prior can provide an equally strong model. Unfortunately one review is somewhat terse. Among the other reviews, one reviewer found the paper very interesting and compelling but did not feel comfortable raising their score to \\\"accept\\\" in the discussion phase citing a lack of compelling empirical results in compared to baselines. Both reviewers were concerned about novelty in light of Huang et al., in which a RealNVP prior is also learned in a VAE. AnonReviewer3 also felt that the experiments were not thorough enough to back up the claims in the paper. Unfortunately, for these reasons the recommendation is to reject. More compelling empirical results with carefully chosen baselines to back up the claims of the paper and comparison to existing literature (Huang et al) would make this paper much stronger.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose to use learned RealNVP with a shallow VAE instead of deep hierarchical VAE, which will not hurt the performance. The authors conduct thorough experiments to backup their claims and hypotheses.\", \"cons\": \"1. The writing needs to be improved. The four contributions listed do not have a clear logic flow.\\n2. The proposed method seems like a combination of previous studies, making the paper more like a technical report.\\n3. One advantage claimed by the authors is that only one latent variable has clear semantic meanings, which is not explicitly supported by the experiments.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper claims that learning prior from the data could achieve superior performance than using a standard unit Gaussian prior. Experimental results further show that the proposed method could achieve a lower or comparable negative log-likelihood compared to other VAE variants using a complex hierarchical architecture.\", \"i_have_the_following_concerns_about_the_paper\": \"It is widely accepted that the prior serves as the regularization for the Bayesian inference. Using a data-dependent prior is promising to derive a lower NNL than a data-independent prior. However, it would easily lead to an overfitting model with bad generalization, especially for a noisy dataset. \\n\\nIt is claimed in the abstract and conclusion that one latent variable is used for learning the RealNVP prior while the authors use the words \\\"shallow\\\" (refers to few latent variables) in the experiments. So how many latent variables are used in the experiment?\\n\\nIt is listed in the related work that \\\"Huang et al. (2017) applied RealNVP (Dinh et al., 2017) to learn the prior\\\", which means the idea using the learned RealNVP prior is not a new idea. Therefore, what is the contribution of this paper? A detailed discussion is needed to elaborate on the differences from previous methods using a prior learned from the data.\\n\\nThe equation of the aggregated posterior in page 2 after Eq.4 is wrong. The aggregated posterior should be the integration over x instead of z.\\n\\nThe drawn conclusion \\\"using both RealNVP posterior and prior shows no significant advantage over using RealNVP prior only, although the total flow depth of the former variant is twice as large as the latter one\\\" is quite unprofessional. Only one experiment set was conducted with k=20. One obvious reason is that the current setting with k=20 makes that the model over-parameterized. More comparisons are needed for smaller k. \\n\\nAs claimed in the paper that the clipping would be a navie method to promote overlapping among the posterior. A comparison with a navie baseline using clipping is needed before drawing a conclusion that the learned RealNVP prior is the reason for enhancing the overlapping.\\n\\nIs the likelihood function p(x|z) a Bernoulli or Gaussian?\\n\\n\\\"Although BIVA has a much lower NLL on StaticMNIST, in contrast to our paper, the BIVA paper (Maal\\u00f8e et al., 2019) ...... attributed to having fewer training data\\\". The author should confirm with the authors instead of giving a conjecture in a scientific paper.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This submission shows that using learned autoregressive priors (real NVP) allows shallow VAEs to achieve comparable log-likelihood performances compared to more complex deep VAE architectures.\\n\\nI found this paper an enjoyable read, and its results quite intriguing. While most of VAE research focuses on building more powerful encoder and decoder architectures, these results show that focusing on a learned prior distribution is as important. \\n\\nThe models introduced in this paper are not novel, but the authors introduce some tricks (gradient and std clipping) that allow them to achieve nearly SOTA results with relatively simple architectures. \\nI think these tricks should have been demonstrated more in detail in the experiments, for example:\\n* why is the std clipped exactly at e^-11? What happens if I increase/decrease this number?\\n* Can the authors clarify the differences between their model and that of Huang et al, which also uses real NVP priors? If I took the exact architecture of Huang et al and used the clipping trick would it perform similarly to your model?\\n* Would other more complex SOTA models also benefit from your clipping tricks?\\n\\nA very interesting addition to the paper would be running some experiments on more complex data distributions such as the natural images of celebA or CIFAR10, to understand whether your model could achieve:\\n(1) similar improvements in terms of ELBO \\n(2) more importantly, a quality of the generated samples comparable to deep VAE models such as VAE+IAF or BIVA.\\n\\nOverall I liked the paper so I am voting towards acceptance. However, while there are a considerable number of experiments in this paper, for me to increase the score I would like to see at least some of the experiments suggested above, since they could help better understand the behavior of VAEs with learned priors and make this an even more impactful paper.\"}"
]
} |
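For concreteness, the learned-prior idea debated in the reviews above can be sketched as a single affine coupling step that defines log p(z) for a VAE latent via the change-of-variables formula. The weights and the single-step depth here are illustrative assumptions; the paper's actual RealNVP prior is deeper and trained jointly with the VAE.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
Wa = 0.1 * rng.normal(size=(D // 2, D // 2))  # toy log-scale weights
Wb = 0.1 * rng.normal(size=(D // 2, D // 2))  # toy shift weights

def realnvp_prior_logp(z):
    # One affine coupling step maps z -> u; with a standard normal base,
    # log p(z) = log N(u; 0, I) + log|det(du/dz)|.
    z1, z2 = z[: D // 2], z[D // 2:]
    s = np.tanh(z1 @ Wa)  # bounded log-scale, in the spirit of std clipping
    t = z1 @ Wb
    u = np.concatenate([z1, z2 * np.exp(s) + t])
    log_det = s.sum()
    log_base = -0.5 * (u @ u) - 0.5 * D * np.log(2.0 * np.pi)
    return log_base + log_det

# Inside a VAE's single-sample ELBO this term replaces log N(z; 0, I):
# ELBO ~ log p(x|z) + realnvp_prior_logp(z) - log q(z|x), with z drawn from q(z|x).
z = rng.normal(size=D)
print(realnvp_prior_logp(z))
```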
HJgEe1SKPr | GAN-based Gaussian Mixture Model Responsibility Learning | [
"Wanming Huang",
"Shuai Jiang",
"Xuan Liang",
"Ian Oppermann",
"Richard Yi Da Xu"
] | A Mixture Model (MM) is a probabilistic framework which allows us to define a dataset containing K different modes. When each of the modes is associated with a Gaussian distribution, we refer to it as a Gaussian MM, or GMM. Given a data point x, GMM may assume the existence of a random index k ∈ {1, . . . , K } identifying which Gaussian the particular data point is associated with. In a traditional GMM paradigm, it is straightforward to compute, in closed form, the conditional likelihood p(x|k, θ), as well as the responsibility probability p(k|x, θ), which describes the distribution of the index corresponding to the data. Computing the responsibility allows us to retrieve many important statistics of the overall dataset, including the weights of each of the modes. Modern large datasets often contain multiple unlabelled modes, such as a paintings dataset containing several styles, or fashion images containing several unlabelled categories. In their raw representation, the Euclidean distances between the data do not allow them to form mixtures naturally, nor is it feasible to compute the responsibility distribution, making GMM inapplicable. In this paper, we utilize the Generative Adversarial Network (GAN) framework to obtain an alternative plausible method to compute these probabilities in the data’s latent space z instead of x. Instead of defining p(x|k, θ) explicitly, we devised a modified GAN that allows us to define the distribution using p(z|k, θ), where z is the corresponding latent representation of x, as well as p(k|x, θ) through an additional classification network which is trained with the GAN in an “end-to-end” fashion. These techniques allow us to discover interesting properties of an unsupervised dataset, including dataset segments, as well as to generate new “out-distribution” data by smooth linear interpolation across any combination of the modes in a completely unsupervised manner. | [
"Generative Adversarial Networks"
] | Reject | https://openreview.net/pdf?id=HJgEe1SKPr | https://openreview.net/forum?id=HJgEe1SKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"wvxD7G-G4N",
"rylsua2Q9S",
"SJlnIgtxqB",
"HklCTYa0KH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724918,
1572224371071,
1572012116494,
1571899845950
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1501/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1501/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1501/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to use GMM as the latent prior distribution of GAN. The reviewers unanimously agree that the paper is not well motivated, explanations are lacking and writing needs to be substantially improved.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to use GMM as the latent prior distribution of GAN. The model adds the means, covariances and the discrete priors of the GMM as learnable parameters to the GAN, which are jointly optimized with other GAN parameters. The model also adds a discrete classifier to the training process. During training, the classifier predicts the probability of each image falling into each of the GMM clusters, and uses these probabilities to re-weight the GAN generated samples.\\n\\n# motivation\\n\\nThe motivation of this work is not clear to me. Even with an isotropic Gaussian prior, a fully connected neural network is already sufficient to (approximately) simulate the GMM sampling. Thus, explicitly modeling GMM doesn't seem to be necessary, and could make the learning more difficult. \\n\\nI also don't quite understand why the authors have to add a discrete classifier to the modeling. It appears that the discrete classifier is only used for controlling the relative weights of clusters in the GAN training. If that's the case, then what's truly needed is just the prior distribution of each cluster, which doesn't depend on the individual images. For concrete datasets, this prior is usually known. For example, in MNIST each cluster has an equal prior of 10%.\\n\\n# experiments\\n\\nThe model is evaluated on MNIST and Oxford-102. I'd like to see it tested on more realistic and higher resolution images, and compared with state-of-the-art GAN models. Since the motivation of the modeling design is unclear, the bar on the empirical results should be much higher.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers the Gaussian mixture model at the latent space to have a better GAN training result. The proposed architecture consists of three networks, a classifier, a generator, and a discriminator. Every input image goes through the classifier and gets the softmax output. The softmax output is considered as the mixture weights of the GMM model and controls the loss function of the generator accordingly.\\n\\nAlthough this paper has some positive sides, I recommend \\\"weak reject\\\" because of the following reasons.\\n\\n1. This paper is hard to follow. Many concepts are not discussed enough. For instance, how to train mu_k and \\\\Sigma_k, why should we use Eq. (3) as the loss of the generator, where do we use L^I in Eq.(4),...\\n\\n2. The experiment section requires more works. It would be much better to do experiments with much higher dimensional data sets and compare with many other GAN algorithms e.g. InfoGAN.\\n\\n3. The pseudo-code has many errors. For instance, what is \\\\alpha_i^0? It is not introduced. What is \\\\hat{\\\\alpha}?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes a modification of GANs where the latent space follows a distribution modelled by a Gaussian Mixture Model. While the idea of using GMMs in GANs is not novel, the main contribution of the paper is to add a classification models that enables posterior inference. The whole model is trained jointly end-to-end using both an adversarial loss and a mutual information loss. The procedure is then tested on MNIST, Fashion-MNIST and a subset of Oxford-102 Flower.\", \"While the problem of posterior inference is interesting, the novelty of the paper is quite limited. The overall structure is clear, although writing can be improved.\", \"However, a part from the form, I have other concerns about the paper:\", \"The main purpose of the paper is to tackle large image datasets that are hard to address by classical GMMs. This being said, the used datasets are composed of small and modal enough images that it seems hard to validate the claim of the paper using only these data. It seems to me that for the claim of the paper to be verified, larger scale/more complex datasets are needed.\", \"In all experiments, the authors suppose they have access to the number of classes/modes in the data, which is a huge assumption. It would be interesting to see if it would be possible to automatically accurately select the number of modes, e.g. on a held out validation set.\", \"One problem of GANs that the authors do not seem to consider is mode collapse. It would be interesting to do experiments with unbalanced datasets (e.g. MNIST 1 vs all) to see if the proposed architecture will model the data correctly.\", \"I am confused about the use of Mutual Information loss. The author claim that they would like to enforce each of the generated images to be from the same class as the input image. This would make sense if the authors used the multinomial sampling for the latent variable generation. However, the K generated samples are from K different Gaussians. It seems unreasonable to require of the classifier to render the same result.\", \"On the same note, in order to both use the classical multinomial sampling in GMMs and not break the backpropagation, have the authors considered updating the classifier and the GAN in separately in expectation-maximization fashion?\", \"Finally, I don't see the point of weighting the adversarial loss by the weights of Gaussians. All the generated images are fake and should be equally detected as such.\", \"Although the idea is interesting, I think the paper, at its current status, not ready for publication.\"], \"minor\": [\"P. 4: Generator: The sampling density from the multinomial distribution seems incorrect. However, as the authors skip the sampling step to be able to back propagate through the model, this is not significant.\", \"P. 1: The paragraph before the last: may even synthesizing -> synthesize\", \"P. 3: Architecture: Possibility -> Probability\", \"P. 7: Figure title: CIFR10 -> Oxford-102\", \"Algorithm notation:\", \"*Indexes for alpha and alpha_hat from 1 and not 0 to be consistent with the rest of the text\", \"*Add hats to the entries of alpha_hat\", \"LI from alpha_i and alpha_hat_i as in Equation 4 -> maybe change alpha_hat by x_hat to be consistent with the notation of eq.4 (although the meaning is clear here).\"]}"
]
} |
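The reviews above discuss how the model sidesteps multinomial sampling to keep backpropagation intact: one latent is drawn from every Gaussian component via the reparameterization trick, and soft classifier outputs weight the resulting K samples. A minimal numpy sketch of that sampling step follows; the parameter shapes and names are assumptions for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 2
mu = rng.normal(size=(K, D))      # learnable component means
log_sigma = np.zeros((K, D))      # learnable (diagonal) log-stds
logits = np.zeros(K)              # learnable mixture logits

def sample_all_components(n):
    # Reparameterized draw from every Gaussian component: one latent per
    # component, so no non-differentiable multinomial sampling is needed.
    eps = rng.normal(size=(n, K, D))
    return mu + np.exp(log_sigma) * eps  # shape (n, K, D)

def mixture_weights():
    e = np.exp(logits - logits.max())
    return e / e.sum()

z = sample_all_components(5)
alpha = mixture_weights()
# In the reviewed model, alpha (here uniform) would come from the
# classifier's softmax output and scale the per-component generator losses.
print(z.shape, alpha)
```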
BJlXgkHYvS | Information-Theoretic Local Minima Characterization and Regularization | [
"Zhiwei Jia",
"Hao Su"
] | Recent advances in deep learning theory have evoked the study of generalizability across different local minima of deep neural networks (DNNs). While current work has focused either on discovering properties of good local minima or on developing regularization techniques to induce good local minima, no approach exists that can tackle both problems. We achieve these two goals successfully in a unified manner. Specifically, based on the Fisher information, we propose a metric that is both strongly indicative of the generalizability of local minima and effective as a practical regularizer. We provide theoretical analysis, including a generalization bound, and empirically demonstrate the success of our approach in both capturing and improving the generalizability of DNNs. Experiments are performed on CIFAR-10 and CIFAR-100 for various network architectures. | [
"local minima",
"generalization",
"regularization",
"deep learning theory"
] | Reject | https://openreview.net/pdf?id=BJlXgkHYvS | https://openreview.net/forum?id=BJlXgkHYvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oR6FABh8f3",
"HJeXEVW3sH",
"ryxOmflnir",
"BJx7S-pojr",
"r1xAnokijS",
"rygLJrqqsS",
"rylHBEqqjH",
"SJxX9rLYsB",
"rygL3EBFiH",
"r1xtg-7Ksr",
"SygnyAMKsr",
"Bke1Ojf_jH",
"SkxtuOMdir",
"SJezCZfdiH",
"H1l91QkdsS",
"Sye3GCAwoB",
"SkeWgp0PiH",
"HJe3TWyXjB",
"HyeG_bk7jr",
"HyxDSpLfir",
"HJeaomh79S",
"H1xq1PY2tr",
"rJgVQyi9FB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724887,
1573815338905,
1573810720433,
1573798203490,
1573743541952,
1573721310182,
1573721148759,
1573639562641,
1573635246364,
1573626096704,
1573625316003,
1573559143182,
1573558385259,
1573556682024,
1573544674012,
1573543443938,
1573543144774,
1573216708319,
1573216618419,
1573182783169,
1572221860932,
1571751649854,
1571626780489
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1500/Authors"
],
[
"~Micah_Goldblum1"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1500/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes using the Fisher information matrix to characterize local minima of deep network loss landscapes to indicate generalizability of a local minimum. While the reviewers agree that this paper contains interesting ideas and its presentation has been substantially improved during the discussion period, there are still issues that remain unanswered, in particular between the main objective/claims and the presented evidence. The paper will benefit from a revision and resubmission to another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Ping\", \"comment\": \"We have addressed all the concerns in the previous response and updated our paper accordingly. Since tomorrow is the deadline for revision, we would like to ask if the reviewer #3 has any updated assessment or further concerns about our paper. Thanks a lot.\"}",
"{\"title\": \"Final response to reviewer #2\", \"comment\": \"Dear reviewer #2,\", \"q1\": \"\\u201cValidation\\u201d of the connection between FIA and our proposed metric.\\n\\nWe have updated the Section 5.1.1 again including adding a remark at the end to clarify the difference and connection between FIA and our metric. We would like to reiterate that the FIA criterion is NOT used in the paper to derive any theory or practice. The entire paper does NOT require any information-theoretic interpretation to be correct. The FIA criterion is stated as an important related work of our approach. We hope this time your concern will be resolved.\", \"q2\": \"\\u201cProposition 10 in [1] implies that eigenvalue of the submatrix converges to the largest eigenvalues of the Gram matrix [and thus] the behavior of the two regularizers (directly optimizing det(I) vs. optimizing the upper bound tr(I)) are sharply different.\\u201d\\n\\nLet us first state the reviewer\\u2019s argument: the Proposition 10 in [1] implies that the sub-sampled eigenvalues in our proposed regularizer would be much closer to the largest eigenvalues of the underlying FIM rather than the smallest ones, so that optimizing the upper bound tr(I) would behave quite differently than directly optimizing det(I). The argument is evidenced by a concentration bound. However, in practice, the bound is vacuously loose. Let us focus on the practical scenarios to do the analysis since otherwise, one would simply optimize det(I) directly instead of figuring out a tractable and effective proxy. The concentration bound shown as Proposition 10 has the $\\\\kappa$ in the numerator and the $\\\\sqrt{n}$ in the denominator. The former is the largest eigenvalues of the full FIM, and the latter is the square root of the batch size, making the bound already quite loose. Note that when we compute gradients in Algorithm 1, we do not compute them individually for each data point in the batch. Instead, we split the mini-batch into several sub-batches and compute the averaged gradients of the sub-batch. This has its own practical reason described at the end of page 6. And accordingly, the number of effective \\u201cbatch size\\u201d used in approximating the tr(I) is reduced to a number normally smaller than 10. Together with the $2 \\\\sqrt{2}$ in the numerator, the bound is indeed vacuous. \\n\\nLet us clarify and give a quick intuition for why \\u201csub-sampled eigenvalues are not likely to be large\\u201d. Since each gradient computed in approximating the tr(I) is the averaged gradient across a sub-batch, roughly speaking, the resulting \\u201csub-sampled\\u201d Gram matrix over the averaged gradients has its spectral norm effectively reduced, thus alleviating the issue raised by the reviewer. \\n\\nFinally, we thank the reviewer for the time and effort during the week. We will try our best to update the revised version at the request of any further minor fixes..\\n\\n[1] Rosasco, Lorenzo, Mikhail Belkin, and Ernesto De Vito. \\\"On learning with integral operators.\\\" Journal of Machine Learning Research 11.Feb (2010): 905-934\"}",
"{\"title\": \"Acknowlging rebuttal and a few remarks\", \"comment\": \"Thank you for the revision, which has addressed most of my comments and substantially enhanced the presentation.\\n\\nI am writing here some further remarks here for the next round of revision or for potential future works.\\n\\nFor the definition of the observed Fisher information, the authors may consider cite \\\" Efron & Hinkley. Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher Information. 1978\\\".\\n\\nSomewhere around 5.1.1 (in the revised draft), it has to be mentioned the volume element defined by the Fisher information (including the observed information) that is \\\\sqrt{\\\\vert{I}(\\\\theta)\\\\vert}d\\\\theta, and the total volume, the exponential of the last term of the FIA, is invariant to reparameterization. In the FIA, w_0 should be the maximum likelihood estimation (that means the global optimum).\\n\\nIn the writing, it may help to emphasize that 5.1.1 is only a rough approximation to show the relationship between the proposed \\\\gamma quantity and the FIA, as the approximation is rough and the quality is not guaranteed, and the theoretical guarantee is given by 5.2. Actually, the authors may also consider approximate \\\\gamma based on Balasubramanian's book chapter \\\"MDL, Bayesian Inference and the Geometry of the Space of Probability Distributions\\\", where there is an explicit term of the log determinant of the observed Fisher information.\\n\\nRegarding the bound in section 6. It is better to have some remarks and/or numerical simulations (e.g. in the appendix) on the tightness of the bound so that the reader can have some intuitions. A potential extension is a variational bound (with free parameters).\\n\\nOverall, I don't think this paper in its current form has any major flaws and the contribution is valid with both a theorem and empirical results. It should be interesting to the ICLR community and could enlighten discussions.\"}",
"{\"title\": \"Thanks for the reply; still there are misunderstanding and unaddressed concerns\", \"comment\": \"It appears that there are still some misunderstanding (e.g. Q2 was never my question), as well as unaddressed concerns. Let me clarify.\\n\\n\\n(A) Relation between FIA and your regularized objective\\n=======================================================\", \"let_me_re_state_my_concern\": \"The FIA criterion is defined using the expected FIM; but at the end of Section 5.1.1, it is replaced with the observed FIM. This is only valid when the conditional distribution defined by the model, $p(y|x;w_0)$, coincides with the conditional data distribution $p_{data}(y|x)$ (or its smoothed version, in your setup).\\n\\nIt is true that expected and observed FIM do not always coincide, even at global minima. However it is reasonable to consider the asymptotic setup, which is also what you mentioned in the previous draft. In this case, the replacement can only be valid when $w_0$ is a global minima in the entire parameter space, not only in the neighborhood $\\\\mathcal{M}(w_0)$; in the finite sample case, it becomes even less clear when (or if) your replacement is valid or reasonable.\\n\\nIn the revised version you explicitly mentioned the replacement of $\\\\mathcal{I}_w$ with $\\\\mathcal{I}$ (which is not because of any \\\"asymptotic equivalence\\\"). It is certainly helpful for clarification, but the replacement itself is still not well-justified, outside the global optimas that could potentially be justified by an asymptotic argument; so is the *connection* of your work to FIA. \\n\\n\\n(B) On the connection between log|I|/W and log(tr(I)/W)\\n=======================================================\\n\\nUnfortunately I cannot follow your argument in this part. Most importantly, \\n\\n> By Theorem 3, the trace of the submatrix version is a ``sub-sampled'' version which is quite unlikely to have extremely large eigenvalues\\n\\nWhen the eigenvalues vary in a wide range, the bounds in Theorem 3 merely becomes vacuous. It does not indicate the eigenvalue of the principal submatrices are uniformly (in any sense) distributed in that large bound.\\n\\nFurthermore, from the first equation in Page 6 you appear to be randomly subsampling a Gram matrix (of the neural tangent kernel). In this case standard concentration results (e.g. Proposition 10, Rosasco et al, 2010) imply the eigenvalue of your submatrix converges to the *largest* eigenvalues of the Gram matrix. \\n\\n> We understand the reviewer\\u2019s concern that large eigenvalues have a larger impact on tr(I)\\n\\nThe equally important part is the tail that decays exponentially. It would dominate log(det(I)) and have vanishing impact on log(tr(I)). For this reason, one would expect the behavior of the two regularizers are sharply different.\\n\\nFinally, I can't follow the following sentence:\\n\\n> As most modern network architectures are over-parameterized, we believe that in practice the difference between using tr(I) in the proposed regularizer instead of using the intractable det(I) is not critical\\n\\nThis is actually exacerbated by over-parameterization, as theoretical results like (Karakida et al) only work in the over-parameterized scheme.\\n\\n\\nMinor Points\\n============\\n\\n* It is not clear from Section 4 that you focused on global optimas. 
The original words are\\n> Label smoothing enables us to assume a local minimum w0 (in this case, also a global minimum) of the training loss with [sum KL] = 0.\\nTo me, it only appears that you are using label smoothing to make sure useful local minimas exist. Please consider revising.\\n\\n* I fail to see the need to mention the efficiency of MLE. Efficiency is about the asymptotic variance, not recovery of the true conditional distribution in any finite-sample case. \\n\\n\\nReferences\\n==========\\n\\nRosasco, Belkin and De Vito, On Learning with Integral Operators, JMLR 11 (2010)\"}",
"{\"title\": \"Updated version submitted\", \"comment\": \"Thanks for the reviewers. We have updated our paper including some of our responses. The new sections being added have their titles marked red.\"}",
"{\"title\": \"Response to reviewer #2 regarding part (II) & (III)\", \"comment\": \"Thanks for the reply. We have updated the paper.\", \"q1\": \"The claims in (Pennington and Worah, 2018) that FIM is generally non-singular applies for expected FIM, not necessarily for observed FIM.\\n\\nThanks for pointing this out. We have removed this in the updated version and support the non-singularity argument (i.e., our Assumption 1 is reasonable) in Section 5.1.\", \"q2\": \"FIA criterion only works for global minima and thus assuming global minima in the paper is a must.\\n\\nThe FIA criterion does not require the local minimum to be globally optimal for the entire parameter space. When we introduce FIA criterion in Section 5.1, we state that it is for the model class of the local minimum (i.e., a statistical model incorporating all neural networks in the local minimum\\u2019s well-defined neighborhood), not for the entire parameter space. Otherwise, there is no comparison in the first place. By Assumption 1, a local minimum at our interest is indeed a unique global minimum in its model class. \\n\\nFurthermore, in the previous review, we stated that the local minima we care about are also assumed to be global minima. This assumption is not a requirement. We considered comparing global minima because this scenario is well-motivated, as explained in the beginning of Section 1.\", \"q3\": \"\\u201cTheorem 1 must apply to any local minima. Restricting it to the global minima would cut its link to the regularization method you developed below.\\u201d\\n\\nIndeed Theorem 1 applies to any local minima satisfying Assumption 1 & 2, as stated precisely in Theorem 1. We have mentioned in the answer above that global optimality is not a requirement.\", \"q4\": \"\\u201c \\u2018the last term [of FIA] becomes\\u2019 is misleading\\u201d\\n\\nWe believe this is a misinterpretation. We have clarified this in the updated version by changing the order we introduce our proposed metric and FIA. We meant to show the relationship between FIA and our metric, by no means to claim the terms are equivalent. As we have mentioned in the previous review, the FIA criterion is not used to derive or describe our proposed generalization bound.\", \"q5\": \"\\u201cIt is only true for the global minimum where observed and expected FIM coincide.\\u201d\\n\\nThough irrelevant to our response above, we would like to kindly point out that this is factually inaccurate. The global optimality is neither a sufficient nor a necessary condition of \\u201cobserved FIM coinciding expected FIM\\u201d unless the amount of training data goes to infinity. The unbiasedness of the maximum likelihood estimator does not indicate its efficiency.\", \"q6\": \"\\u201cthe eigenvalues decay very quickly, thus the average is dominated by the first few ones\\u201d\\n\\nThanks for clarifying this. As most modern network architectures are over-parameterized, we believe that in practice the difference between using tr(I) in the proposed regularizer instead of using the intractable det(I) is not critical. We understand the reviewer\\u2019s concern that large eigenvalues have a larger impact on tr(I) than the impact of that on det(I). However, as the reviewer has pointed out, Karakida et al, AISTATS 2019 demonstrates that the majority of the eigenvalue of the FIM is small with only a very few ones that can be large. 
For over-parameterized network where there are more parameters than training samples, whenever you compute the trace of FIM you actually compute the trace of FIM\\u2019s principal submatrices (see details in Section 5.3). By Theorem 3, the trace of the submatrix version is a \\u201csub-sampled\\u201d version which is quite unlikely to have extremely large eigenvalues being \\u201csampled\\u201d given that only a very small amount of the eigenvalues are large. The specific probability of picking such a large eigenvalue requires the study of extreme value theory of spectral density of the submatrices, which is beyond the scope of this paper. Furthermore, we apply gradient clipping in Algorithm 1 to restrict the effect, if any, of the extreme eigenvalues encountered in optimizing tr(I), \\n\\nAgain, as we have shown in the previous review, the numerical results suggest that the generalization boost obtained from our regularizer can be attributed to what we expected -- the better local minima characterized by our proposed metric.\", \"q7\": \"\\u201cjustification about the validity of regularizer not at a global optima\\u201d\\n\\nAs answered previously, the global optimality is not a requirement in both theory and practice of our work.\", \"q8\": \"\\u201cThere are ambiguity in the notations chosen in Sec 5.1\\u201d\\n\\nWe have clarified the potential ambiguities in the updated version.\"}",
"{\"title\": \"Quick question; just to confirm we understand the review correctly.\", \"comment\": \"Thanks for the response. We are working on and will post our response and the revision of our paper.\\n\\nIn the meantime, we are a little bit confused about the following request. For \\\"I strongly believe the numerical experiment, as well as additional references justifying the use of $\\\\log|\\\\mathcal{I}|$ instead of $\\\\log|\\\\mathcal{I}_w|$, are needed\\\", what does $\\\\log|\\\\mathcal{I}|$ refer to? I assume the reviewer asks for the justification of the use of [X] instead of [Y] for the regularizer.\"}",
"{\"title\": \"Current Response Not Convincing\", \"comment\": \"Thanks for your reply. I agree there is a discrepancy about the definition of FIM initially; however, I do not find part (II) and (III) of your response convincing for the reasons below:\\n\\n=============================================================\\n\\n(A) The role of Information Theory, and confusions in Section 5.1\\n\\nIn part (II) you did not address my concern, namely the FIA is defined using the expected FIM instead of the observed FIM, and it would be \\\"confusing, irrelavent to the rest of the paper, and cannot be used to justify the work.\\\" To be specific, here is one place where the introduction of FIA created confusion:\\n\\n1. Just above Sec 5.2, you defined $\\\\gamma(w_0)$ as $\\\\log|\\\\mathcal{I}(w_0)|$, which is \\\"undefined unless $|I(w_0)|\\\\ne 0$\\\". You then claims (Pennington and Worah, 2018) showed $I(w_0)$ is generally non-singular. However, at the second line on Page 3, Pennington and Worah stated that the matrix they studied \\\"is equal to the Fisher information matrix of the model distribution with respect to its parameters\\\". So it is clear they are studying the expected FIM $I_w$, not the observed FIM $I$ you claims.\\n\\n2. You might argue that $w_0$ in Sec 5.1 only refers to the global minimum, as it is defined in Sec 4.1. However, you mentioned Theorem 1 applies to \\\"*a* local minimum $w_0$\\\", i.e. *any* local minimum; yet in the bound $\\\\gamma(w_0)$ appeared. As $\\\\gamma$ is only defined in Sec 5.1, to readers it would appear the discussions about the validity of $\\\\gamma$ applies here, which cannot be true, as I've argued in point (1). \\nBesides, Theorem 1 must apply to any local minima, since it is about model parameter selection, not hyperparameter selection. Restricting it to the global minima would cut its link to the regularization method you developed below.\\n\\n3. Now that $w_0$ may refer to any local minima, the statement in Sec 5.1 that \\\"the last term [of FIA] becomes $ln V + ln \\\\sqrt{|I(w_0)|}$\\\" becomes misleading: It is only true for the global minimum where observed and expected FIM coincide. \\n\\nFor this reasons I believe this section, as well as Sec 5.2, must be revised.\\n\\n=============================================================\\n\\n(B) Whether the underlying behavior of the proposed regularizer is as expected\\n\\nYou claim\\n> Secondly, the reviewer argues that \\u201cdet(I) focusing on the average eigenvalue yet tr(I) focusing on the few largest ones\\u201d. This is not the case since at a local minimum tr(I) is the L1 norm of the eigenvalues of I,\\n\\nIndeed tr(I) is the L1 norm of the eigenvalues. But my point has been that as the eigenvalues decay very quickly, thus the average is dominated by the first few ones: consider the global optima $w_0$, where the observed FIM and the expected coincide. As empirically shown in Fig 1 in (Karakida et al, AISTATS 2019), eigenvalue of the FIM decays exponentially. In this case, log(tr(I)) will be determined by the largest few values (a small and constant number of them, to be precise), while log(det(I)) would be dominated by the large number of small values.\\n\\nAnother issue is that, as I've pointed out in (A), it appears that you do not have any justification about the validity of regularizer $log|\\\\mathcal{I}(w_0)|$, at any $w_0$ that is not a global optima; your justification in Sec 5.1 only applies to $\\\\log|\\\\mathcal{I}_w(w_0)|$. 
\\n\\nFor these reasons, I strongly believe the numerical experiment, as well as additional references justifying the use of $\\\\log|\\\\mathcal{I}|$ instead of $\\\\log|\\\\mathcal{I}_w|$, are needed.\\n\\n=============================================================\\n\\n(C)\\n\\nFinally, you claim there is no ambiguity in the notations chosen in Sec 5.1. In (A) I've given an example where ambiguity appears.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for your appreciation. We would definitely consider mentioning the relationship with this work in the next version of our paper.\"}",
"{\"title\": \"A final summary of the review #2 and the corresponding responses from the authors\", \"comment\": \"The reviewer #2 has raised three concerns, of which all are minor and resolved (completely or partially).\\n\\nCONCERN (I): the Fisher information is given incorrectly in the paper\\n\\nBoth the reviewer and the authors now agree that this concern is due to the initial confusion of the reviewer. The Fisher information used in the paper is given precisely and correctly. Most previous discussions between the reviewer and the authors start because of the different terminologies being used. The reviewer argues that their definition of \\u201cmodel distribution\\u201d is not \\u201cnonstandard\\u201d by pointing out two papers on ArXiv, of which one considers the model distribution as p(x, y; \\u03b8) and the other defines it as p(y | x; \\u03b8), already an ambiguity.\\n\\nCONCERN (II): the relation and role of Information Theory in the lens of the FIA criterion in our paper\\n\\nWe have updated the Section 5.1 according to the request to clarify the relation of the FIA criterion and our propose metric. We would like to reiterate that the FIA criterion is NOT used in the paper to derive any theory or practice. The entire paper does NOT require any information-theoretic interpretation to be correct. The FIA criterion is stated as an important related work of our approach.\\n\\nCONCERN (III): the proposed regularizer that optimizes the upper bound tr(I), although works great in practice, might have underlying behavior quite different from directly optimizing det(I), which is intractable though.\\n\\nThe reviewer refers to [1] for the spectral density of the observed FIM, denoted I, and [2] for the following argument: the Proposition 10 in [2] implies that the sub-sampled eigenvalues in our proposed regularizer would be much closer to the largest eigenvalues of the underlying FIM rather than the smallest ones, so that optimizing the upper bound tr(I) would behave quite differently than directly optimizing det(I). The argument is evidenced by a concentration bound. \\n\\nHowever, in practice, the bound is vacuously loose. Let us focus on the practical scenarios to do the analysis since otherwise, one would simply optimize det(I) directly instead of figuring out a tractable and effective proxy. The concentration bound shown as Proposition 10 has the $\\\\kappa$ in the numerator and the $\\\\sqrt{n}$ in the denominator. The former is the largest eigenvalues of the full FIM, and the latter is the square root of the batch size, making the bound already quite loose. Note that when we compute gradients in Algorithm 1, we do not compute them individually for each data point in the batch. Instead, we split the mini-batch into several sub-batches and compute the averaged gradients of the sub-batch. This has its own practical reason described at the end of page 6. And accordingly, the number of effective \\u201cbatch size\\u201d used in approximating the tr(I) is reduced to a number normally smaller than 10. Together with the $2 \\\\sqrt{2}$ in the numerator, the bound is indeed vacuous. \\n\\nLet us clarify and give a quick intuition for why our sub-sampled eigenvalues are not likely to be large. Since each gradient computed in approximating the tr(I) is the averaged gradient across a sub-batch, roughly speaking, the resulting \\u201csub-sampled\\u201d Gram matrix over the averaged gradients has its spectral norm effectively reduced, thus alleviating the issue raised by the reviewer. 
\\n\\nFurthermore, in order to demonstrate that our regularizer indeed induces local minima which have smaller values according to our proposed metric (not merely smaller values by its upper bound), we compute our metric on local minima of similar training loss obtained with or without the regularizer. The numerical results are in Section 7.2.2.\\n\\n[1] Karakida, Ryo, Shotaro Akaho, and Shun-ichi Amari. \\\"Universal statistics of fisher information in deep neural networks: mean field approach.\\\" arXiv preprint arXiv:1806.01316 (2018).\\n\\n[2] Rosasco, Lorenzo, Mikhail Belkin, and Ernesto De Vito. \\\"On learning with integral operators.\\\" Journal of Machine Learning Research 11.Feb (2010): 905-934\"}",
"{\"title\": \"Another quick response; full response will follow later\", \"comment\": \"Q7: \\u201cRegarding the approximation to the bound, approximating log det(I) with log trace(I) is not a good idea.\\u201d\\n\\nI was referring to the process of *bounding* log det(I) with log trace(I), after appropriate re-scalings, since following my argument, it may not lead to informative gradients. Of course this is a upper bound following Jensen's inequality, but this mere fact would not be a sufficient justification - otherwise we would be training variational autoencoders without using encoders at all, instead relying on the variational bound using the prior p(z). The fact that your proposed regularization works could very possibly be attributed to other factors; it might even be superior to the true regularizer (log det(I)) since they have such different behaviors, with log det(I) focusing on the average eigenvalue and log tr(I) focusing on the a few largest ones. This is particularly problematic, since, as I've mentioned in the original comment, the eigenvalues of the FIM varies in a wide range.\\n\\nWe will need a link between them since your theory is about the original one. If a convincing argument is to be made, I would recommend to look at toy networks where calculating log det(I) is possible, as well as toy, potentially 1D, datasets, and compare the behavior of the two regularizers.\"}",
"{\"title\": \"I see there is a confusion about notion, but my points still hold and please revise Sec 5.1\", \"comment\": \"Indeed by model distribution I meant the conditional data distribution p(y|x;\\\\theta)p(x), not any distribution over the model parameters, which we don't have any to begin with. Still, I'm not completely certain this is a \\\"nonstandard notation\\\" - a Google search produces the following paper using the same notion: arxiv:1905.12558, arxiv:1906.07774; and in any case, I've made it clear from my first comment that I was referring to this distribution (\\\"the model distribution p(c_x|x;w)\\\", which, to be fully clear, refers to the joint distribution p(c_x|x;w)p(x)).\\n\\nAlso, note that in Section 5.1 you used the expected Fisher information and referred to it using the same notation, which is confusing, irrelavent to the rest of the paper, and cannot be used to justify the work. You also stated that you are approximating it in the end of Section 5.1. By \\\"lose the information theoretic interpretation\\\" I was also referring to this part, since this is the only place where \\\"information theory\\\" appeared in the entire paper. In this regard, I believe the \\\"information theoretic\\\" part in the title as well as this section should be revised / removed to clarify this.\"}",
"{\"title\": \"The reviewer #2 might be confused about some basic concepts\", \"comment\": \"Thanks for the quick reply. After reading the response, we believe the reviewer might have notions of basic concepts different from those of most researchers in this field. There are a lot of textbooks out there as references. For the convenience of the reviewer, we will use the one he/she provided, referred to as [7].\", \"q\": \"\\u201c... model distribution ...\\u201d\\n\\nThe reviewer here confuses the concept of probability mass (density) function of the data with that of the model. p(x; \\u03b8) is called the probability mass function (of the data), not the \\u201cmodel distribution\\u201d; similarly, f(x; \\u03b8) is called the probability density function (of the data), not the \\u201cmodel density\\u201d. The data distribution refers to the distribution of the data; similarly, the model distribution refers to the distribution of the model. The model here refers to a statistical model, which, as given in Section 7.2 of [7], is a set of parameters \\u0398 or a set of densities. Each density here refers to a specific probability density function of the data, denoted f(x; \\u03b8) and attained by specific model parameters \\u03b8 in the parameter set \\u0398. The model distribution here can refer to either the prior p(\\u03b8) or the posterior p(\\u03b8|x).\", \"q2\": \"\\u201c[our] response regarding the definition of Fisher information is simply incorrect.\\u201d\\n\\nAccording to the previous answer, our response is correct.\", \"q3\": \"\\u201cUsing the observed FIM will lose the information-theoretic interpretation\\u2026 and would be hugely misleading.\\u201d\\n\\nWe wonder what exactly would be misleading. The proposed generalization bound does not require any information-theoretic interpretation to be correct. It is derived solely based on the theory of PAC-Bayes. \\n\\n[7] \\\"All of Statistics: a Concise Course in Statistical Inference\\\", electronic version from https://www.ic.unicamp.br/~wainer/cursos/1s2013/ml/livro.pdf\"}",
"{\"title\": \"Quick Response regarding FIM Definition; Complete Response will Follow\", \"comment\": \"Thank you for your response. I will upload a detailed response in 1-2 days regarding all aspects in your rebuttal, but I'd like to point out your response regarding the definition of Fisher information is simply incorrect.\\n\\n> the Fisher information, no matter the observed *or the expected one*, has nothing to do with expectation w.r.t. the model distribution\\n\\nSee e.g.\\n* (Karakida et al, 2018), below Eqn (1):\\n\\\"The expectation E[\\u00b7] is taken over the input-output pairs (x, y) of the joint distribution p(x, y; \\u03b8).\\\"\\n* (Gr\\u00fcnwald, 2007), Eqn (4.18) (regarding the expected FIM), where there is a subscript of theta in the expectation; this is further clarified in Eq 18.48 (expected FIM, equivalent to Eq 4.18 assuming suitable differentiability), where you can see the model density appears in the integral.\\n* Eq 10.10 and 10.11, \\\"All of Statistics: a Concise Course in Statistical Inference\\\", electronic version from https://www.ic.unicamp.br/~wainer/cursos/1s2013/ml/livro.pdf , where, again, the model density appears in the integral.\\n\\nWhile it may be valid to formally use the observed FIM as a model selection criteria, it will lose the information theoretic interpretation as I've argued, and would be hugely misleading in my opinion. I would also need to re-check the proof of your results.\", \"references\": \"\", \"karakida_et_al\": \"Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Dear reviewer1,\\n\\nThanks for your appreciation.\", \"q1\": \"\\u201cSection 5.1, explain the abbreviation FIA\\u201d\\n\\nFIA in Section 5.1 stands for Fisher information approximation, originally coined for Normalized Maximum Likelihood Estimation in [1].\", \"q2\": \"\\u201cRegarding the choice of the neighborhood \\\\mathcal{M}(w_0), what is the reason to define the model (neighbourhood of w_0) based on the loss? Why not simply take a coordinate neighborhood?\\u201d\\n\\nGiven the local minimum at w_0, we find it natural to define its neighborhood by a sublevel set w.r.t. the training loss. The issue of using the local coordinate (e.g., using an \\u0190-ball to define the neighborhood) is that the amount of change of the underlying model measured by training loss varies for different dimensions of the parameter space. For instance, moving in one direction might change the model a lot while moving in the other might change little.\", \"q3\": \"\\u201cIn section 5.1, there have to be some remarks on the intuition and related works on the flatness of the local minimum that is related to generalization.\\u201d\\n\\nWe will update the paper to add a discussion in Section 5.1. regarding the intuition and related works on \\u201cflatness/sharpness\\u201d, some of which are briefly discussed in Section 2.\", \"q4\": \"\\u201cthe reviewer points the authors to [2] and [3], which have similar MDL formulations expressed in terms of the spectrum of the Fisher information matrix\\u201d\\n\\nWe will definitely consider mentioning the relation with these two papers in our next version.\\n\\n[1] Rissanen, Jorma J. \\\"Fisher information and stochastic complexity.\\\" IEEE transactions on information theory 42.1 (1996): 40-47.\\n\\n[2] Karakida, Ryo, Shotaro Akaho, and Shun-ichi Amari. \\\"Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach.\\\" The 22nd International Conference on Artificial Intelligence and Statistics. 2019.\\n\\n[3] Sun, Ke, and Frank Nielsen. \\\"Lightlike Neuromanifolds, Occam's Razor and Deep Learning.\\\" arXiv preprint arXiv:1905.11027 (2019).\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Dear reviewer2,\\n\\nThanks for your time and effort. There seems to be quite a lot of misunderstandings and confusions from the reviewer.\", \"q1\": \"\\u201cThe definition of Fisher information is incorrect \\u2026 The expectation should be taken w.r.t the model distribution\\u201d.\\n\\nThe statement in this question is factually wrong. First of all, what we use in the paper is the observed Fisher information, not the (expected) Fisher information. Secondly, by definition, the Fisher information, no matter the observed or the expected one, has nothing to do with expectation w.r.t. the model distribution.\", \"q2\": \"\\u201cThere is a quantity called observed Fisher information that coincide with Eq (1) in the paper.\\u201d\\n\\nThis is by no means a coincidence. We clearly and precisely describe the term as observed Fisher information in the very first place when we introduce Eq (1).\", \"q3\": \"\\u201cThe observed Fisher information is a function of the dataset instead of the model parameter, as in Gr\\u00fcnwald (2007)\\u201d\\n\\nThis is factually wrong. Professor Peter Gr\\u00fcnwald has never said so. The Fisher information, no matter the observed one or the expected version, is a quantity involving both the model parameters and the input data. In Gr\\u00fcnwald (2007), simply omitting model parameter \\u03b8 in the notation I(X) does not mean I(X) is not a function of \\u03b8.\", \"q4\": \"\\u201cThe observed Fisher information can only used to study model parameters near the global optima; it cannot help with choosing between different local optima as the work claims\\u201d\\n\\nIt is clearly stated in Section 4 that the different local minima we focus on to compare are also global minima. The comparison between these local minima is well-motivated, as pointed out at the beginning of Section 1, that learning algorithms such as SGD tend to end up in one of the many local (global) minima that are not distinguishable from their similar close-to-zero training loss [2, 3, 4, 5].\", \"q5\": \"\\u201cThe FIA criterion, which is used in this paper to develop the generalization bound, is defined using the expected Fisher information rather than the observed one.\\u201d\\n\\nWe give the FIA criterion precisely in Section 5.1, only to illustrate the connection between our approach and Rissanen's formulation of the MDL principle. In fact, we do not use the FIA criterion to derive or describe the generalization bound.\", \"q6\": \"\\u201clocal optima will not be unique in their neighborhoods, as in [1]\\\"\\n\\nIn our paper, we assume that local minima we care about are well isolated, mentioned at the end of Section 5.1. For state-of-the-art network architectures used in practice, this isolation assumption is often the fact. The reviewer pointed out that, introduced in [1], two kinds of singularity in neural networks prevent the local minima from being unique, namely the eliminating singularity and the overlapping singularity. As well demonstrated in [6], network with skip connections (such as ResNet, WRN, and DenseNet used in our experiments) can effectively eliminate both. We would like to add a discussion paragraph about the isolation assumption in our paper.\", \"q7\": \"\\u201cRegarding the approximation to the bound, approximating log det(I) with log trace(I) is not a good idea.\\u201d\\n\\nWe do not use log trace(I) as an approximation to measure local minima as indeed it can be inaccurate. 
Instead, we use it as an upper bound of what we intend to optimize during training. Optimizing such upper bound, in return, enables us to develop a tractable regularization technique in search of the good local minima. Our experiments in Section 7.2 well demonstrate the effectiveness of our proposed regularizer in finding better local minima of greater generalizability.\\n\\n\\n[1] Amari, Shun-ichi. Information geometry and its applications. Vol. 194. Berlin: Springer, 2016.\\n\\n[2] Dauphin, Yann N., et al. \\\"Identifying and attacking the saddle point problem in high-dimensional non-convex optimization.\\\" Advances in neural information processing systems. 2014.\\n\\n[3] Kawaguchi, Kenji. \\\"Deep learning without poor local minima.\\\" Advances in neural information processing systems. 2016.\\n\\n[4] Nguyen, Quynh, and Matthias Hein. \\\"Optimization Landscape and Expressivity of Deep CNNs.\\\" International Conference on Machine Learning. 2018.\\n\\n[5] Du, Simon S., et al. \\\"Gradient Descent Finds Global Minima of Deep Neural Networks.\\\" International Conference on Machine Learning, 2019.\\n\\n[6] Orhan, A. Emin, and Xaq Pitkow. \\\"Skip connections eliminate singularities.\\\" International Conference on Learning Representations, 2018.\"}",
"{\"title\": \"Response to reviewer #3 (cont.)\", \"comment\": \"Q5: Whether the regularization \\u201cconverges to flatter minima characterized by the proposed flatness measure\\u201d?\\n\\nOur regularizer essentially optimizes an upper bound of the proposed metric during training. As requested, for the following neural network architecture trained on CIFAR-10, we compute our metric on local minima of similar training loss obtained with or without the proposed regularizer. The following numerical results (each entry represents mean \\u00b1 std among 5 runs) show that the resulting generalization boost indeed can be attributed to the \\u201cflatter\\u201d minima measured by our metric:\\n\\n=========================================================\\n \\t ResNet \\t| \\t WRN | DenseNet\\nw\\\\o reg -979.3 \\u00b1 22.3 -737.6 \\u00b1 20.3 -850.3 \\u00b1 23.5\\nwith reg -1138.1 \\u00b1 11.0 -804.8 \\u00b1 18.7 -886.2 \\u00b1 20.5\\n========================================================= \\n\\n\\n[1] Dinh, Laurent, et al. \\\"Sharp minima can generalize for deep nets.\\\" International Conference on Machine Learning, 2017.\\n\\n[2] Draxler, Felix, et al. \\\"Essentially No Barriers in Neural Network Energy Landscape.\\\" International Conference on Machine Learning, 2018.\\n\\n[3] Orhan, A. Emin, and Xaq Pitkow. \\\"Skip connections eliminate singularities.\\\" International Conference on Learning Representations, 2018.\\n\\n[4] Wilson, Ashia C., et al. \\\"The marginal value of adaptive gradient methods in machine learning.\\\" Advances in Neural Information Processing Systems, 2017.\\n\\n[5] Keskar, Nitish Shirish, and Richard Socher. \\\"Improving generalization performance by switching from adam to sgd.\\\" arXiv preprint arXiv:1712.07628 (2017).\\n\\n[6] Chaudhari, Pratik, et al. \\\"Entropy-SGD: Biasing Gradient Descent Into Wide Valleys.\\\" International Conference on Learning Representations, 2017.\\n\\n[7] Clevert, Djork-Arn\\u00e9, Thomas Unterthiner, and Sepp Hochreiter. \\\"Fast and accurate deep network learning by exponential linear units (elus).\\\" International Conference on Learning Representations, 2016.\\n\\n[8] Ioffe, Sergey, and Christian Szegedy. \\\"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.\\\" International Conference on Machine Learning. 2015.\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"Dear reviewer3,\\n\\nThank you for the constructive review.\", \"q1\": \"\\u201cThere are degenerate eigendirections\\u201d or otherwise \\u201clocal minima are all isolated\\u201d, which is \\u201ccontrary to recent work [2] ...\\u201d\\n\\nThroughout our paper, we make the assumption that the local minima we care about are isolated, mentioned at the end of Section 5.1. We would like to update our paper to add a discussion on this subject, including the answer to this and the next question. This assumption is not necessarily contradictory to the argument that local minima are connected, as suggested by [2] that a relatively flat path exists between any pair of local minima of low training loss. We point out that the claim in [2] is not conclusive, that all the local minima are \\u201cperhaps best seen as\\u201d one connected component. In other words, the local minima can still be isolated. \\n\\nFor state-of-the-art network architectures used in practice, the isolation assumption is often the fact. To be precise, this assumption is violated when the Hessian matrix at a local minimum is singular. Specifically, [3] summarizes three sources of the singularity: (i) due to a dead neuron, (ii) due to identical neurons, and (iii) linear dependence of the neurons. As well demonstrated in [3], network with skip connection (such as ResNet, WRN, and DenseNet used in our experiments) can effectively eliminate all the aforementioned singularity. There is another kind of singularity specifically for ReLU networks, which we will discuss next.\", \"q2\": \"In practice how can the proposed metric \\u201cdeal with rescaling layer parameters in deep networks\\u201d, i.e., the rescaling issue described in [1]?\\n\\nIn practice, the rescaling issue is not critical. There are three reasons:\\n(I) This issue can only happen in neural networks equipped with scale-invariant activation functions, such as ReLU. Many state-of-the-art models use other activation functions such as ELU [7] that is not scale-invariant.\\n(II) Even for ReLU networks, most modern DNNs are free of this issue, since they have normalization layers such as BatchNorm [8] applied before the activation. BatchNorm shifts all the inputs to the ReLU function, which is equivalent to shifting the ReLU function horizontally. The shifted ReLU is no longer scale-invariant. The ResNet, WRN, and DenseNet used in our experiments all fall into this category.\\n(III) Due to the ubiquitous use of normal distribution based weights initialization scheme and the L2 regularization / weight decay, most of the local minima obtained by gradient-based learning algorithms have weights of a relatively small norm. Consequently, in practice, we will not compare two local minima essentially the same but have one as the rescaled version of the other with a much larger norm of the weights. \\n\\nIn summary, the rescaling issue is another source of the singularity but only for networks equipped with scale-invariant activation functions. And in practice, it is effectively eliminated.\", \"q3\": \"\\u201cHow the authors decided that training had converged to a local minimum\\u201d in Section 7.1?\\n\\nFor the experiments of local minima characterization in Section 7.1, in all scenarios, we train the model for 200 epochs with an initial learning rate 0.1, divided by 10 when the training loss plateaus. 
Within each scenario, we find the final training loss very small and very similar across different models and the training accuracy essentially equal to 1, indicating the convergence.\", \"q4\": \"How is the proposed regularization method compared to \\u201cAdaGrad/Adam and other techniques that purport to condition the gradient based on local curvature\\u201d?\\n\\nOur regularizer aims to find better \\u201cflatter\\u201d minima to improve generalization whereas adaptive optimization methods such as AdaGrad and Adam try to boost up convergence, yet at the cost of generalizability. Recent works such as [4] and [5] show that adaptive methods generalize worse than SGD+Momentum. In specific, very similar to our setup, [5] demonstrates that SGD+Momentum consistently outperforms the others on ResNet and DenseNet for CIFAR-10 and CIFAR-100. Other approaches that also utilize local curvature, such as the Entropy-SGD [6] mentioned in Section 2, have empirical results rather preliminary compared to ours. Furthermore, as described in Algorithm 1, our proposed regularizer is not specific to a certain optimizer. We perform experiments with SGD+Momentum because it is chosen to be used in ResNet, WRN, and DenseNet, helping all of them achieve current or previous state-of-the-art results.\"}",
"{\"title\": \"An Interesting Connection\", \"comment\": \"Hi Authors,\\nThank you for your interesting paper. I noticed that your work concerning generalization is related to our paper which visualizes the sharp-flat phenomenon, including experiments on realistic architectures, as an explanation for generalization.[1] Please consider mentioning the relationship with our work in your next version.\\n\\n[1] Huang, W. Ronny, et al. \\\"Understanding Generalization through Visualizations.\\\" arXiv preprint arXiv:1906.03291 (2019).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides a metric to characterize local minima of deep network loss landscapes based on the Fisher information matrix of the model parameterized by the deep network. The authors connect the Fisher information to the curvature of the loss landscape (the loss considered is the negative loss likelihood) and obtain generalization bounds through PAC Bayes analysis. They further propose regularizing the training of deep networks using the local curvature of the loss as a regularizer. In the final experimental section of the paper, the relationship between the empirical measures and generalization is shown on a variety of networks.\\n\\nThis is an interesting paper, but I have a few concerns.\\n\\n1. The information-theoretic measure that is proposed is essentially the (log) determinant of the hessian of the loss function. If there are degenerate eigendirections (zero eigenvalues) then the proposed measure would not be able to distinguish between minima with different numbers of degenerate directions / same number of degenerate directions but different spectral norms of the hessians. If the authors contention is that there will be no zero eigenvalues, that suggests that local minima of deep networks are all strict, isolated minima, contrary to recent work on connected solutions (See Draxler et. al. 2018, Essentially No Barriers in Neural Network Energy Landscapes, ICML 2018).\\n\\n2. I would like to see how the authors believe their measure deals with rescalings layer parameters in deep networks, ie the issue brought up by Dinh et. al. in \\\"Sharp Minima can Generalize for Deep Networks\\\" ICML 2017. While I can see that the log determinant is invariant, it is not clear that the proposed approximation will be invariant to rescaling of deep network layer parameters. If the parameters corresponding to the eigenvalues sampled in the approximation are rescaled, I believe the proposed measure will not be invariant.\\n\\n3. The experiments regarding the local minima characterization are well constructed, though some details are missing such as how the authors decided that training had converged to a local minimum. As far as regularization based on the local curvature is concerned, I would like to see some more experiments that compare the proposed technique to adagrad/adam and other techniques that purport to condition the gradient based on local curvature. It would also be interesting to see whether the regularization indeed converges to flatter minima characterized by the proposed flatness measure. Since the claim is that the regularizer gets you flatter solutions, that information is important to decide whether the proposed technique is performing as advertised.\\n\\nI am willing to update my score based on responses to these concerns.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Post-rebuttal update: I have just noticed the authors modified their summary post below and claimed \\\"[my concerns] are all minor or resolved\\\". This is not true. Here is my summary of unresolved concerns written after the discussion period.\\n\\nThis work has been substantially improved during the rebuttal process, and some of my concerns are addressed. But there are still major issues, as raised in my [last comment]( https://openreview.net/forum?id=BJlXgkHYvS¬eId=r1xAnokijS ), that remains unanswered. Specifically,\\n\\n(A) the relation between this work and information theory\\n\\nIn the revision, the authors have make it very clear that the relation between FIA and their proposed regularized objective is very vague, relying on the crude approximation of expected Fisher information with observed Fisher information. Therefore the \\\"information-theoretic\\\" part in the title seems awkward and to some extent, misleading.\\n\\nAs Reviewer 1 has pointed out, it would have been better if the authors relate their theory and method to the observed FIM, instead of information theory, from the beginning. Since the observed FIM and the neural tangent kernel (NTK) share the same eigenspectrum, it would also be interesting to relate this work to the NTK.\\n\\n(B) the different behavior of the proposed regularization (log det(I)) and its bound that is actually implemented (log tr(I))\\n\\nThis is the more important issue. My concern is that the observed FIM (or the NTK) is known to have fast decaying spectrum; (Karakida et al) has shown empirically that the decay can be exponential. Thus log det(I) would be dominated by the long tail (since after taking logarithm it is the sum (or average) of an arithmetic sequence), while log tr(I) would be dominated by the first largest few values. \\n\\nThe authors claim that this is not an issue since they replaced the observed FIM with a subsampled, low-rank (<=10), version. It corresponds to consider a small submatrix of (the gram matrix of) the NTK. Denote this matrix as .\\n(a) This does not help with the problem, since we now have no chance of recovering the smaller eigenvalues that would have dominated log(det(I)), and it is impossible that the proposed regularizer has a similar behavior to log(det(I)).\\n(b) One could verify easily, using small feed-forward networks (or even simpler, computing a gram matrix using RBF kernels, since the FIM shares its eigenspectrum with NTK which is a p.d. kernel), that the new matrix still has a fast-decaying eigenspectrum, so the behavior of and \\\\tilde{I} are still significantly different, even though this cannot be established by concentration bounds as the authors argue. While FFN and modern deep architectures can have different behaviors, I believe the above evidence suggests that a numerical experiment comparing the behavior of the two bounds is a must.\\n\\nFollowing this argument we can see *another issue* of this work, namely the proposed generalization bound will be vacuous given the fast-decaying spectrum of the FIM, since it contains gamma=log(det(I)).\\n\\nReviewer 1 mentioned this work could enlighten future discussions on this subject. 
While I agree this paper presents interesting empirical observations (namely its final algorithm, which is vaguely connected to the proposed objective, leads to improved performance on CV tasks), I think this submission in its current form is a bit too misleading to serve this purpose well, and overall I believe it would be better to go through another round of revision.\\n\\n\\nOriginal Review\\n============================================\\n\\nThis paper presents a generalization bound based on Fisher information at a local optima, and proposes to optimize (an approximation to) it to get better generalization guarantees. There are issues in both parts, and I don't think it should be accepted. Specifically,\\n\\n1. The definition of Fisher information is incorrect (for almost every parameter). The expectation should be taken w.r.t the model distribution p(c_x|x;w), instead of the data distribution S. \\n2. Assumption (1) (loss locally quadratic) is not reasonable for DNNs, since local optimas will not be unique in their neighborhoods. See e.g. Section 12.2.2, \\\"Information Geometry and Its Applications\\\".\\n3. Regarding the approximation to the bound, approximating log det(I) with log trace(I) is not a good idea: adding a very small eigenvalue will lead to noticeable change in the former, but negligible change in the latter. This is particularly problematic for DNNs, since the spectrum of their Fisher information matrix varies in a wide range: see \\\"Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach\\\". \\n\\n(Edit 11.8:\\n* regarding point (1), there is a quantity called observed Fisher information in e.g. Grunwald (2007) that coincide with Eq (1) in the paper, but it is a function of the dataset instead of the model parameter, and can only used to study model parameters near the *global optima* (as it is applied in Grunwald (2007)); it cannot help with choosing between different local optimas as this work claims. Additionally, the FIA criterion, which is used in this paper to devleop the generalization bound, is defined using the standard form of Fisher information (i.e. taking expectation w.r.t model distribution), see Rissanen (1996). These facts lead me to believe this is a confusion on the authors' part.\\n* in point (2) I was referring to the authors' argument \\\" Since L(S,w) is analytic and w_0 is the *only local minimum* of L(S,w) in M(w_0)\\\", which is incorrect.)\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper contributes to the deep learning generalization theory, mainly from the theoretical perspective with experimental verifications. The key proposition is given by the unnumbered simple equation in the middle of page 4 (please number it), where \\\\mathcal{I} is the Fisher information matrix. According to the authors, this simple metric, which is the log-determinant of the Fisher information matrix, can characterize the generalization of a DNN.\\n\\nRemarkably, this piece of work is well written in terms of English and formulations, and complete, with a rigorous theoretical analysis (section 5.1, 5.2), practical approximations (section 5.3) and empirical verifications (section 6).\\n\\nOn the theoretical side, this work builds upon Rissanen's formulation of the MDL principle, which has two parts (describing data given the model as well as the model complexity). Under rough approximations, the complexity term becomes the log-determinant of the Fisher information matrix evaluated at the local (global) optimum. This simple approximation is further proved to upper-bounds the generalization error as stated in theorem 1.\\n\\nTo make the criterion to be practically useful, the author used the Jensen inequality so that the metric simply depends on the trace of the Fisher information matrix.\\n\\nThe empirical study showed the usefulness of the proposed metric which can well approximate the testing error and a regularization term (based on the trace of the Fisher information matrix) that can improve generalization on real DNN experiments.\", \"the_reviewer_has_the_following_minor_comments_to_further_improve_this_contribution\": \"section 5.1, explain the abbreviation FIA\\n\\nRegarding the choice of the neighborhood \\\\mathcal{M}(w_0), what is the reason to define the model (neighbourhood of w_0) based on the loss? Why not simply take a coordinate neighborhood?\\n\\nAccording to your metric, the smaller the scale of the Fisher information matrix, the better the generalization. In section 5.1, there has to be some remarks on the intuition and related works on the flatness of the local minimum that is related to generalization.\\n\\nAs this contribution is related to the spectral properties of the Fisher information matrix, the reviewer points the authors to \\\"Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach. Karakida et al. 2018.\\\" and \\\"Lightlike Neuromanifolds, Occam's Razor and Deep Learning. Sun and Nielsen. 2019\\\", which deals with asymptotic cases and have similar MDL formulations expressed in terms of the spectrum of the Fisher information matrix.\"}"
]
} |
BJg7x1HFvB | Well-Read Students Learn Better: On the Importance of Pre-training Compact Models | [
"Iulia Turc",
"Ming-Wei Chang",
"Kenton Lee",
"Kristina Toutanova"
] | Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available. | [
"NLP",
"self-supervised learning",
"language model pre-training",
"knowledge distillation",
"BERT",
"compact models"
] | Reject | https://openreview.net/pdf?id=BJg7x1HFvB | https://openreview.net/forum?id=BJg7x1HFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ySebnAYVww",
"BJxP-MjjiH",
"SyeWIxtuiH",
"HJlMayFujB",
"B1lrwkYuoB",
"SJe6IsB6KH",
"rkgyLdyntS",
"H1xlKlknFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724858,
1573790207355,
1573584969374,
1573584825995,
1573584732700,
1571801941457,
1571711047076,
1571709048293
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1499/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1499/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1499/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1499/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Though the reviewers thought the ideas in this paper were interesting, they questioned the importance and magnitude of the contribution. Though it is important to share empirical results, the reviewers were not sure that there was enough for this paper to be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"I read the authors' responses and am not satisfied.\\n\\nKD does not require that the teacher and student have the same hidden dimension size. This can be done following [FitNets: Hints for Thin Deep Nets](https://arxiv.org/pdf/1412.6550).\"}",
"{\"title\": \"Response to official blind review #1\", \"comment\": \"We believe the reviewer has misunderstood the contribution of the paper: our work does not present technical novelty, but an empirical demonstration that there has been significant overclaiming in the area where pre-training and distillation interact. In particular, multiple papers have advocated for highly restrictive yet complex strategies when more general, simpler baselines shown in our paper are just as effective.\\n\\nWe argue that the ubiquity of distillation is not a strong enough reason to reject a paper that merely uses it as a tool. We do not claim novelty for using distillation in the context of building compact models, but rather investigate its interaction with pre-training. It was not clear a priori that a student with access to a powerful pre-trained teacher can (somewhat redundantly) benefit from its own pre-training. Prior cited studies initialize their model by truncating taller models, without questioning whether pre-training is necessary, or whether truncation is the best strategy for pre-training. Our in-depth ablation studies fill this void in the literature. Indeed, the results are empirical, not unlike the majority of neural network research.\"}",
"{\"title\": \"Response to official blind review #2\", \"comment\": \"We thank the reviewer for taking the time to understand the subtleties of this problem, and reflect on how it can help the community.\", \"regarding_the_misc_comments\": \"Our statement that pre-training+fine-tuning has been overlooked in prior work is exemplified by the two instances of prior work that we compared against (DistilBert and Patient Knowledge Distillation), which propose more elaborate methods without showing results for the simpler approach of directly applying the Bert recipe to smaller models.\\nAcknowledged the confusion regarding which of the models is pre-trained; we are happy to clarify the wording in a future version.\"}",
"{\"title\": \"Response to official blind review #3\", \"comment\": \"We would like to reiterate the main take-aways of our paper, which are: 1) Pre-trained Distillation (PD) is an effective recipe in the specific setup where there is very little labeled data, but a more significant amount of task unlabeled data, and 2) PD is *just as good* as more elaborate techniques that make restrictive assumptions about the model architecture. We should have made it clearer that comparison against prior work in Table 3 is for completeness only, and it is not our goal to beat SoTA in the traditional setup; rather, we propose a solution for the case where there is unlabeled task data.\\n\\nComparison to Patient Knowledge Distillation (PKD): PKD requires initialization with a pre-trained Transformer which has the same hidden dimension size and same or larger depth; it also requires that the teacher and student have the same hidden dimension size. It is possible that transferring intermediate map values will bring improvements on top of our method pre-trained distillation; this would be a possibility in the restricted case that the student and teacher have the same hidden dimension size, limiting the accuracy and efficiency of the compact model. Our preliminary experiments (not reported in the paper) showed no gains from intermediate map matching objectives when combined with pre-trained distillation but further experiments will be interesting.\\n\\nBy eliminating the intermediate layer transfer between students and teachers, our method is more *general*, with no architectural restrictions. For instance, in Table 3, the comparison is limited to 6/768 models because of the extreme restrictions from the baselines. Also, we disagree that our method is more *elaborate* just because it requires a pre-training step; note that PKD also requires a (deeper) pre-trained Transformer. Once our pre-trained models are released, future efforts can simply reuse them, the same way PKD reuses a pre-trained Bert checkpoint. Exploring more flexible intermediate layer knowledge transfer following PKD but generalizing to mismatched dimensionality of student and teacher would be an interesting avenue for future work.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to pre-train a student before training with a teacher, which is easy to understand. Although the authors provide extensive empirical studies, I do not think they can justify the claims in this paper.\\n\\n\\n** Argument\\n\\nOne concern is that compared to other baselines such as \\\"Patient knowledge distillation\\\" [1], the proposed method is not consistently better. The authors argue that [1] is more sophisticated in that they distill task knowledge from intermediate teacher activations. However, the proposed method introduces other extra complexities, such as pre-training the student. I do not agree that the proposed method is less elaborate than previous methods. \\n\\n\\nAlthough the investigation on influence of model size and the amount/quality of unlabeled data is interesting in itself, this does not help justify the usefulness of pre-training a student. I hypothesize that when considering the intermediate feature maps as additional training signals, randomly initialized students can catch up with pre-trained students. \\n\\nFurthermore, the mixed results shown in Table 3 do not justify the proposed method well enough. \\n\\n[1] Patient Knowledge Distillation for BERT Model Compression, https://arxiv.org/abs/1908.09355\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This submission revisits the student-teacher paradigm and shows through extensive experiments that pre-training a student directly on masked language modeling is better than distillation (from scratch). It also shows that the best is to combine both and distill from that pre-trained student model.\\n\\nMy rating is Weak Accept. I think the submission highlights a very useful observation about knowledge distillation that I imagine is overlooked by many researchers and practitioners. The decision of Weak as opposed to a Strong accept is because the submission does not introduce anything truly novel, but simply points out observations and offers a recommended training strategy. However, I do argue for its acceptance, because it does a thorough job and presents many interesting findings that can benefit the community.\", \"comparison_with_prior_work\": \"The submission focuses on comparison with Sun et al. and Sanh. These comparisons are important, but not the most compelling part of the paper. Comparison with more prior work that show large benefits would make the paper even stronger.\", \"interesting_experiments\": \"The paper presents many interesting experiments useful for anyone trying to develop a compressed model. First, it shows that distillation (from scratch) by itself may be overrated, since simply repeating the pre-training+fine-tuning procedure on the small model directly is effective. However, distillation remains relevant since it also shows that pre-training the student, then distilling against a teacher, is a potent combination. In the case when the transfer set is the same size as the pre-training set, it surprisingly still has some benefits. This is not experimentally explained, but I suspect there are optimization benefits that are hard to pin down exactly. The paper hypothesizes that the two methods learn different \\u201clinguistic aspects,\\u201d but I think it is a bit too speculative to put it in such terms.\\n\\nThe experiments are thorough, with many student sizes, transfer set sizes, transfer set/task set correlation, etc. It also compares against the truncation technique, where the student is initialized with a truncated version of the teacher. There are no error bars in the plots, but there are so many plots with clear trends, that this is not a big concern. I can\\u2019t think of any experiments that are obviously missing.\", \"misc\": [\"The introduction says that the pre-training+fine-tuning baseline has been overlooked. It would be great to point out papers that has actually overlooked this baseline. Including this in the results would be even better.\", \"During my first read-through, I got confused because I didn\\u2019t realize \\u201cpre-training\\u201d in most of the paper refers to \\u201cstudent pre-training\\u201d (as opposed to simply training the teacher). Making this a bit more explicit here and there can avoid this confusion.\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors investigate the problem of training compact pre-trained language model via distillation. Their method consists of three steps:\\n1. pre-train the compact model LM\\n2. distill the compact model LM with a larger model (teacher)\\n3. fine-tune the compact model on target task \\n\\nThis idea is not significantly new since it is quite common to apply distillation to compress models, and the results are largely empirical. From Table 3 the results on test sets are better than previous works, but not by much. The authors spend quite a of space on ablation studies to investigate the contribution of different factors, and on cross-domain transfers. They do manage to show that using a teacher for distilling a compact student model does better than directly pre-training a compact model on the NLI* task in section 6.3. It would be better if they could show it for other tasks on the benchmark as well. \\n\\nOverall I think this work is somewhat incremental, and falls below the acceptance threshold.\"}"
]
} |
BJeGlJStPr | IMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks | [
"Michael Luo",
"Jiahao Yao",
"Richard Liaw",
"Eric Liang",
"Ion Stoica"
] | The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often trade off between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to a 30% decrease in training wall-time compared to IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO. | [
"Reinforcement Learning",
"Artificial Intelligence",
"Distributed Computing",
"Neural Networks"
] | Accept (Poster) | https://openreview.net/pdf?id=BJeGlJStPr | https://openreview.net/forum?id=BJeGlJStPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"FWywWnOTIf",
"ByeZ6yrjor",
"B1x4iit9iS",
"rygAOsKcsH",
"B1eKVot9sB",
"ryx_H5FcoS",
"Byl-Rcma9B",
"HJx4ydyAKr",
"SJlsXw06tr",
"H1ePESipKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724829,
1573765048890,
1573718940474,
1573718901974,
1573718833072,
1573718591741,
1572842185011,
1571842012348,
1571837731395,
1571824943421
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1498/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1498/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1498/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1498/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1498/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1498/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1498/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1498/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1498/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors propose a novel distributed reinforcement learning algorithm that includes 3 new components: a target network for the policy for stability, a circular buffer, and truncated importance sampling. The authors demonstrate that this improves performance while decreasing wall clock training time.\\n\\nInitially, reviewers were concerned about the fairness of hyper parameter tuning, the baseline implementation of algorithms, and the limited set of experiments done on the Atari games. After the author response, reviewers were satisfied with all 3 of those issues.\\n\\nI may have missed it, but I did not see that code was being released with this paper. I think it would greatly increase the impact of the paper at the authors release source code, so I strongly encourage them to do so.\\n\\nGenerally, all the reviewers were in consensus that this is an interesting paper and I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updated Rating\", \"comment\": \"Thanks for your careful analysis of IMPACT and the additional details in the Appendix. I have updated my rating accordingly.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review!\\n\\n>> The experimental section could definitely be improved--I was hoping to see more results on Atari or DMLab.\\n\\nAdditional results are in Appendix Section A; we randomly selected three additional environments from Atari-57 (Qbert, BeamRider, Gravitar). \\n\\n>>> It would be great to see experiments showing how learning curves scale with the number of workers.\\n\\nSimilarly, we have attached scalability studies as an additional ablation study (Section 4.4). In short, performance increases as number of workers increases.\\n\\n>>> The V-trace equations (page 3 and 5) don't mention any clipping of the importance weights c and rho--can you clarify if this is a typo or if you don't use clipping.\\n\\nThis is fixed. We used clipping in our code and there was a typo in our paper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \">>> One question: you mentioned in section 4.3: \\\"For fairness, same network hyperparameters were used across PPO, IMPALA, and IMPACT.\\\" I suppose it would be a fair comparison if you choose the hyperparameters for each algorithm separately ( according to the highest value they achieve on the measured metric.)\\n\\nFor network hyperparameters, we meant that we used the same policy architecture across all agents. We edited the paper to more clear about this, thanks for pointing this out! We did do a sweep across other hyperparameters to select the optimal one for each algorithm. In Appendix B, we show the hyperparameter search space used and the final hyperparameters chosen.\\n\\n>>>How did you end up choosing the hyperparameters for your own experiments? Are they fine-tuned for IMPACT?\\n\\nWe performed coordinate descent on the search spaces show in Appendix B, searching across several choices of learning rate, batch size, gradient clipping, etc. for each algorithm.\\n\\n>>> It seems like IMPACT is not always doing better than PPO in the discrete control domain as shown in Figure 6. Specifically, in part (a) it looks like PPO is beating both IMPALA and IMPACT for BreakoutNoFrameskip and PongNoFrameskip. It would be nice if authors could do an analysis of these cases and add a discussion as to why this is happening.\\n\\nIn Figure 6, IMPACT (orange) beats both IMPALA (Blue) and PPO (Green) in terms of time. Breakout is close, where IMPALA is first beating IMPACT, but IMPACT eventually attains higher performance. The bottom charts show that IMPACT has comparable timestep-efficiency performance relative to PPO.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your response! We believe our results are significant for the following reasons:\\n\\n1. Our method is the first policy-gradient based agent to perform well across both continuous control and discrete environments, in both real-time and wall-time performance. Our results show improvements over IMPALA in continuous control tasks and PPO in real-time efficiency for discrete tasks.\\n\\n2. While IMPACT does incorporate well known techniques such as replay to improve its performance, we also introduce novel algorithmic improvements such as the stabilized surrogate objective, which our ablation studies show are critical for the best performance. These algorithmic innovations are critical for \\\"allowing\\\" the more standard techniques such as replay and asynchrony to be used effectively.\\n\\nIn addition, we also address your specific comments on the paper:\\n\\n>> Further, the trade-offs / adaption of update frequency etc. are standard ways to improve performance in distributed training. \\n\\nWe studied the tradeoffs between N and K for circular buffer and update frequency for the target network as ablation studies, not a way to improve performance. The techniques we used to improve performance are what you listed in the beginning of the review: target network, circular buffer, and IS-clipping. To make the role of these components more clear, in Appendix C we introduce a further study interpolating between the IMPALA and IMPACT algorithm, showing clear improvements for each component added.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for this detailed review! We found your suggestions very helpful and have implemented all your suggestions in our revision.\", \"one_note\": \"the IMPALA curves for the requested new experiments are still WIP in Appendix A. We will update them again shortly once runs complete (hopefully in a few hours).\\n\\n>> How were the discrete control games selected?\\n\\nOur three original discrete control games were chosen based on popular environments in existing distributed Reinforcement libraries such as Intel Coach. Based on your suggestions, we also selected three more games: two from Mnih et al. (2016), and also Gravitar, and have added results in Appendix A. All these games were selected by us without knowing their performance on any particular algorithm beforehand.\\n\\n>> What was the hyperparameter tuning budget for IMPACT versus PPO or IMPALA?\", \"the_budget_per_algorithm_was_similar\": \"(19, 17, 21) and (21, 17, 20) distinct trials for (IMPACT, IMPALA, and PPO) respectively on the discrete and continuous environments. We show the search space, which we searched over with coordinate descent, in Appendix B.3 and B.4. We note that the optimal hyperparameters for IMPACT are quite close to IMPALA's and PPO's.\\n\\n>> If a fixed hyperparameter budget is allocated in advance and new environments are randomly selected, does IMPACT favorably compare to IMPALA and PPO?\\n\\nYes. We tested our final hyperparameter choice on Qbert, BeamRider, and Gravitar. In terms of real-time efficiency, IMPALA was the best for all three new environments. Timestep-wise, the agent did well on Qbert and BeamRider, but was beaten by PPO on Gravitar.\\n\\n>>I will increase my rating if the robustness and improvements of this algorithm can be validated in randomly chosen games/continuous control environments for a fixed hyperparameter budget for IMPACT and both baselines. \\n\\nWe have chosen three additional discrete environments at random (Qbert, BeamRider, Gravitar) from the Atari-57 and we included their performance in Appendix A. Furthermore, we have evaluated our environments with a fixed set of hyperparameters for all of these environments: the same as chosen for previous experiments.\\n\\nWe found IMPACT to attain very similar gains in performance when we used this universal hyperparameter baseline. The exact hyperparameters used are shown in Appendix B.1 and B.2.\\n\\n>>The IMPALA baseline should be validated for the continuous control tasks - it's surprising that this once SOTA-algorithm flounders in even simple tasks like Hopper-v2 or HalfCheetah-v2.\\n\\nWe investigated this further, and believe our baseline is reasonable. We support this as follows:\\n\\nIn Appendix C we added an ablation study interpolating between the IMPALA and IMPACT algorithms. This was done by incrementally adding the PPO objective, replay, and target network respectively on top of the baseline IMPALA implementation. It shows that each IMPACT component provides an improvement on top of IMPALA. We also note that, beyond these component changes, the code of our IMPACT implementation and IMPALA baseline are identical.\\n\\nIn Appendix D, we discuss prior work that support our result that VPG agents fail to attain good performance in non-trivial continuous tasks (Achiam, 2018). Our results with IMPALA reaches similar performance compared to other VPG-based algorithms.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Reinforcement learning (RL) training speed is broadly evaluated on two dimensions: sample efficiency (the number of environment interactions required) and wall-clock time. Improved wall-clock training time has been achieved through distributed actors and learners, but often at the expense of sample efficiency. IMPACT repurposes successful concepts from deep RL - the target network, importance sampling and a replay buffer to demonstrate improvements on both axes in on three continuous environments and three games from the Atari Learning Environment.\\n\\nPositives\\nThis was a well-written paper proposing to address the sample efficiency of distributed RL algorithms. The diagrams of the algorithm were also well-done. Improving the sample efficiency of algorithms is an important objective and the approaches followed here are sensible.\\n\\nAdditionally, the ablations and examination of the sensitive hyperparameters of the algorithm are useful analyses. These indicate relative insensitivity to the target network update frequency, but both the importance sampling equation and the circular buffer hyperparameters are described.\\n\\n\\nNegatives\\nIMPACT introduces additional hyperparameters which are tuned for each continuous control task and discrete control task. However, there is no description of the hyperparameter tuning budget allocated to IMPACT, PPO, IMPALA. \\n\\nRegarding the discrete environment, the game selection should be elaborated upon and if sufficient compute is available, the algorithm should be tested elsewhere. Specifically, it is atypical (though not necessarily incorrect) to tune specific hyperparameters for each game in the Atari Learning Environment. Traditionally, algorithms have been justified as robust and useful by the lack of need to tune per game. Table 4 demonstrates a high degree of tuning for IMPACT due to game-specific changes for clip param, grad clip, lambda, num sgd iter, train_batch_size, value function loss coeff, kl coeff. However, Table 5 (IMPALA) and Table 6 (PPO) have fewer noted changes.\", \"small_nits\": [\"Define advantage in the policy gradient equation\", \"Figure 4 is ahead of Figure 3 in the compiled LaTeX\", \"Questions\", \"How were the discrete control games selected?\", \"What was the hyperparameter tuning budget for IMPACT versus PPO or IMPALA?\", \"If a fixed hyperparameter budget is allocated in advance and new environments are randomly selected, does IMPACT favorably compare to IMPALA and PPO?\", \"IMPALA performs remarkably badly in the three continuous control tasks, even on wall-clock time. What validations have been done here to ensure the algorithm is operating as intended?\", \"I will increase my rating if the robustness and improvements of this algorithm can be validated in randomly chosen games/continuous control environments for a fixed hyperparameter budget for IMPACT and both baselines. 
Also, the IMPALA baseline should be validated for the continuous control tasks - it's surprising that this once SOTA-algorithm flounders in even simple tasks like Hopper-v2 or HalfCheetah-v2.\", \"-----------------\"], \"update\": \"The authors have addressed my initial concerns carefully through extra experiments and details in the Appendix and I have updated my rating accordingly. Thanks!\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a new distributed algorithm for reinforcement learning. The paper lists three main contributions: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling.\\n\\nI'm not that familiar with RL, however I'm very familiar with distributed training in other contexts. Therefore, the significance of the contributions in the RL domain is a bit unclear to me. However, the contributions in the area of distributed training is relatively fair. The introduction of a circular buffer is not very novel. Further, the trade-offs / adaption of update frequency etc. are standard ways to improve performance in distributed training. \\n\\nThe evaluation of the proposed algorithm is reasonably well done (considering the page limits), with a suitable set of benchmarks (although relatively few). The results are promising and could have a significance for practitioners.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces IMPACT which is a distributed RL algorithm that shortens training time of RL systems while maintaining/ improving the sample efficiency. It is built on top of the famous PPO algorithm (https://arxiv.org/abs/1707.06347). The authors break down the novel component of their model into three categories: target network, circular buffer, and importance sampling. They evaluate the effectiveness of each component through different experiments.\\n\\nOverall the paper is well-written and the ideas are communicated clearly. I like how the evaluation is done in different environments (discrete and continuous action-space) and improves the results independent of the task settings.\", \"one_question\": \"you mentioned in section 4.3: \\\"For fairness, same network hyperparameters were used across PPO, IMPALA, and IMPACT.\\\" I suppose it would be a fair comparison if you choose the hyperparameters for each algorithm separately ( according to the highest value they achieve on the measured metric.) How did you end up choosing the hyperparameters for your own experiments? Are they fine-tuned for IMPACT?\\n\\nIt seems like IMPACT is not always doing better than PPO in the discrete control domain as shown in Figure 6. Specifically, in part (a) it looks like PPO is beating both IMPALA and IMPACT for BreakoutNoFrameskip and PongNoFrameskip. It would be nice if authors could do an analysis of these cases and add a discussion as to why this is happening.\\n\\nOverall I think this is an interesting paper which can motivate more work in this area.\\n\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------\", \"updates\": \"I would like to thank the authors for their response. I have read the revised version and it looks good to me.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies a novel way for distributed RL training which combines the data reuse of PPO with the asynchronous updates of IMPALA. The main contribution is the observation that using a target network is necessary for achieving stable learning. I think this is an important result which seems to be validated by another ICLR submission (https://openreview.net/forum?id=SylOlp4FvH). The experimental section could definitely be improved--I was hoping to see more results on Atari or DMLab.\", \"two_comments\": \"The V-trace equations (page 3 and 5) don't mention any clipping of the importance weights c and rho--can you clarify if this is a typo or if you don't use clipping?\\nIt would be great to see experiments showing how learning curves scale with the number of workers.\\n\\n-----------------------------------------------------------------------------------\\nThanks for clarifying and for the extra experiments. I'm keeping my score as I still think it's appropriate.\"}"
]
} |
HkgMxkHtPH | UWGAN: UNDERWATER GAN FOR REAL-WORLD UNDERWATER COLOR RESTORATION AND DEHAZING | [
"Nan Wang",
"Yabin Zhou",
"Fenglei Han",
"Lichao Wan",
"Haitao Zhu",
"Yaojing Zheng"
] | In real-world underwater environments, exploration of seabed resources, underwater archaeology, and underwater fishing rely on a variety of sensors; the vision sensor is the most important one due to its high information content, non-intrusive, and passive nature. However, wavelength-dependent light attenuation and back-scattering result in color distortion and a haze effect, which degrade the visibility of images. To address this problem, firstly, we proposed an unsupervised generative adversarial network (GAN) for generating realistic underwater images (color distortion and haze effect simulation) from in-air image and depth map pairs. Secondly, U-Net, which is trained efficiently using a synthetic underwater dataset, is adopted for color restoration and de-hazing. Our model directly reconstructs clear underwater images using end-to-end autoencoder networks, while maintaining scene content structural similarity. The results obtained by our method were compared with existing methods qualitatively and quantitatively. Experimental results on open real-world underwater datasets demonstrate that the presented method performs well on different actual underwater scenes, and the processing speed can reach up to 125 FPS running on one NVIDIA 1060 GPU. | [
"underwater image",
"image restoration",
"image enhancement",
"GAN",
"CNNs"
] | Reject | https://openreview.net/pdf?id=HkgMxkHtPH | https://openreview.net/forum?id=HkgMxkHtPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ZG27KFOTGM",
"Bken178tjr",
"SJx8OG8Fsr",
"SkxGSfLtsH",
"rJl7F-UFjS",
"Skl-8kPxjr",
"HJei8hV35H",
"HyegdV42cB",
"SyeZlLA6Yr",
"S1gemwT6YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1576798724799,
1573638883787,
1573638766483,
1573638714412,
1573638523183,
1573052232785,
1572781138670,
1572779111525,
1571837417314,
1571833623786
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1497/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1497/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1497/Authors"
],
[
"~Chenxu_John_Wang1"
],
[
"ICLR.cc/2020/Conference/Paper1497/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1497/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposed to improve the quality of underwater images, specifically color distortion and haze effect, by an unsupervised generative adversarial network (GAN). An end-to-end autoencoder network is used to demonstrate its effectiveness in comparing to existing works, while maintaining scene content structural similarity. Three reviewers unanimously rated weak rejection. The major concerns include unclear difference with respect to the existing works, incremental contribution, low quality of figures, low quality of writing, etc. The authors respond to Reviewers\\u2019 concerns but did not change the rating. The ACs concur the concerns and the paper can not be accepted at its current state.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"Following reviewers' suggestions, we have updated the paper and uploaded a revision on Nov 13. Here we give a summary of the major changes.\\n\\n1. We have rewritten section 2.1 and section 2.2. Now we clearly state the improved underwater imaging model and our technical approaches.\\n2. In section 4, we add the results of underwater target detection, which demonstrates that our proposed model can help in underwater high-level computer vision tasks.\\n3. We have replaced all figures with higher resolution versions in this paper.\\n4. We add FLOPs in Table 5.\\n5. We fixed small errors and inappropriate sentence expression.\"}",
"{\"title\": \"Response\", \"comment\": \"We would like to thank the reviewer for pointing out some problems in our work. Please find our response to your questions below. We have updated the paper and uploaded a revision on Nov 13.\\n\\n**1) The literature is limited.** \\n\\n**Response:** We have cited and listed some new references in the background. In section 4, the method we chose to compare with ours can be roughly divided into three types: model-free algorithms, model-based algorithms, and deep-learning-based algorithms. These methods are classical and representative, we can't list too much due to paper length limit.\\n\\n> Anwar S, Li C, Porikli F. Deep underwater image enhancement[J]. arXiv preprint arXiv:1807.03528, 2018.\\n>\\n> Ancuti C, Ancuti C O, De Vleeschouwer C. D-hazy: A dataset to evaluate quantitatively dehazing algorithms[C]//2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016: 2226-2230.\\n>\\n> Uplavikar P, Wu Z, Wang Z. All-In-One Underwater Image Enhancement using Domain-Adversarial Learning[J]. arXiv preprint arXiv:1905.13342, 2019.\\n>\\n> Anwar S, Li C. Diving Deeper into Underwater Image Enhancement: A Survey[J]. arXiv preprint arXiv:1907.07863, 2019.\\n>\\n> Ding X, Wang Y, Yan Y, et al. Jointly Adversarial Network to Wavelength Compensation and Dehazing of Underwater Images[J]. arXiv preprint arXiv:1907.05595, 2019.\\n>\\n> Redmon J, Farhadi A. Yolov3: An incremental improvement[J]. arXiv preprint arXiv:1804.02767, 2018.\\n\\n**2) The underwater imaging model presented in this paper derives from the Jaffe-McGlamery model, which is a common sense in this field. The authors use a generator to produce underwater images that only implements the common model by a neural network. Moreover, the statement of section 2.2 is not clear. Please rewrite this section.**\\n\\n**Response:** We have rewritten section 2.1 and 2.2. Inspired by in-air images dehazing algorithms, we improved the underwater imaging model in this paper. Then, we employed GAN for generating more realistic underwater-style images based on the improved model (taken both light attenuation and haze effect in real-world underwater images into consideration), which can be found in Sections 2.1 and 2.2 in this paper.\\n\\n**3) The authors used U-Net without any improvement to enhance the results generated from UWGAN, which is the integration of existing models.**\\n\\n**Response:** U-Net is an efficient tool for the proposed pipeline. We employed U-Net as an enhancement network structure, but not only that, we studied the effect of different loss functions in U-Net (The detailed content can be found in Page 11, section APPENDIX), which could provide a new idea for further research about loss functions on underwater image enhancement. Considering the inference speed and Flops, U-Net is better than other networks and could run on real-time compared to other deep-learning-based methods mentioned in this paper.\\n\\n**4) The authors claimed that their model is better than others, while there is no evidence to indicates that.**\\n\\n**Response:** We have revised some imprecise sentences of the result analysis part in section 4, \\u201cTable 1 and Table 2 quantitatively show the scores of sample images in Figure 5 and Figure 6 respectively. It can be seen that our proposed method has achieved the highest scores in (a), (c) and (f). In addition, the average quantized scores evaluated on RealA, RealB, and RealC datasets are shown in Table 3. 
Our model achieves the best score in terms of color restoration.\\u201d Besides, we add FLOPs of deep-learning-based methods in Table 5.\\n\\n**5) Please carefully check the references.**\\n\\n**Response:** We have revised small errors in references. \\n\\n> \\u201cHummel R. Image enhancement by histogram transformation[J]. Computer Graphics and Image Processing, 1977, 6(2):184-195.\\u201d\\n\\n**6) High-resolution figures should be given in the manuscript.**\\n\\n**Response:** We have improved the resolution of images in this paper. It should support higher magnifications.\"}",
"{\"title\": \"Response\", \"comment\": \"We would like to thank the reviewer for pointing out some problems in our work. Please find our response to your questions below. We have updated the paper and uploaded a revision on Nov 13.\\n\\n**1) This paper points out that the previous work (i.e. WaterGAN) generates color noise and the camera model is not suitable, how does this proposed method overcome these points?**\\n\\n**Response:** The network structure of WaterGAN can be found here (https://github.com/kskin/WaterGAN). \\n\\nThe image synthesized by WaterGAN suffers color noise due to the input of noise vector z, which was observed when we tested WaterGAN.\\n\\n\\u201cOne limitation of our model is in the parameterization of the vignetting model, which assumes a centered vignetting pattern. This is not a valid assumption for the MHL dataset, so our restored images still show some vignetting though it is partially corrected.\\u201d, This sentence is mentioned in the original paper of WaterGAN (In page 6, above VI. Conclusion). It is our mistake to call vignetting model as camera model in Introduction part.\\n\\nIn order to solve the color noise problem, noise vector z is no longer necessary in our UWGAN model, UWGAN takes color image and its depth map as input, so our model can avoid color noise problem. And, we didn't use \\u201cvignetting model\\u201d in our model. Inspired by in-air images dehazing algorithms, we improved the underwater imaging model in this paper. Then, we employed GAN for generating more realistic underwater-style images based on the improved model (taken both light attenuation and haze effect in real-world underwater images into consideration), which can be found in Sections 2.1 and 2.2 in this paper. \\n\\n**2) The figures in this paper are too blurry to see them**\\n\\n**Response:** We have improved the resolution of images in this paper. It should support higher magnifications.\\n\\n**3) The technical contribution of the proposed method is not clear**\\n\\n**Response:** As mentioned in the first paragraph, inspired by in-air images dehazing algorithms, we improved the underwater imaging model in this paper. Then, we employed GAN for generating more realistic underwater-style images based on the improved model (taken both light attenuation and haze effect in real-world underwater images into consideration), which can be found in Sections 2.1 and 2.2 in this paper. We employed U-Net as an enhancement network structure, but not only that, we studied the effect of different loss functions in U-Net (The detailed content can be found in Page 11, section APPENDIX), which could provide a new idea for further research about loss functions on underwater image enhancement. Considering the inference speed and Flops, U-Net is better than other networks and could run on real-time compared to other deep-learning-based methods mentioned in this paper.\"}",
"{\"title\": \"Response\", \"comment\": \"We would like to thank the reviewer for pointing out some problems in our work. Please find our response to your questions below. We have updated the paper and uploaded a revision on Nov 13.\\n\\n**1) Many existing works used the physical model to represent the imaging principles and using deep network to learn prior knowledge**\\n\\n**Response:** Inspired by in-air images dehazing algorithms, we improved the underwater imaging model in this paper, but it has not been stated clearly before. Then, we employed GAN for generating more realistic underwater-style images based on the improved model (taken both light attenuation and haze effect in real-world underwater images into consideration), which can be found in Sections 2.1 and 2.2 in this paper. \\n\\nWe employed U-Net as an enhancement network structure, but not only that, we studied the effect of different loss functions in U-Net (The detailed content can be found in Page 11, section APPENDIX), which could provide a new idea for further research about loss functions on underwater image enhancement. Considering the inference speed and Flops, U-Net is better than other networks and could run in real-time compared to other deep-learning-based methods mentioned in this paper.\\n\\n**2) High-level vision tasks **\\n\\n**Response:** We applied YOLO v3 target detector on degraded underwater images and their enhanced versions generated by our model. The performance of underwater target detection is better on enhanced versions of degraded images\\uff0cwhich demonstrated our proposed method on high-level underwater computer-vision tasks. This part can be found in section 4, Page 8.\\n\\n**3) Small errors** \\n\\n**Response:** We have modified some sentence expressions and small errors in this paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this article, the authors propose a generative adversarial network named UWGAN to generate realistic underwater images from the pairs of in-air images and depth images. Then, a U-Net was leveraged to enhance the results.\\nHowever, the text suffers from too many language problems. The authors should consult professional proofreading services. As a courtesy towards referees, the quality of writing needs meticulous attention before a scientific paper should be submitted.\", \"other_comments\": \"1.\\tThe literature is limited. I found some novel works being done in the field that must be addressed and listed in the background and experiments.\\n2.\\tThe underwater imaging model presented in this paper derives from the Jaffe-McGlamery model, which is a common sense in this field. The authors use a generator to produce underwater images that only implements the common model by a neural network. Moreover, the statement of section 2.2 is not clear. Please rewrite this section.\\n3.\\tThe authors used U-Net without any improvement to enhance the results generated from UWGAN, which is the integration of existing models. \\n4.\\tThe authors claimed that their model is better than others, while there is no evidence to indicates that. For example, 1) in (page 5, line 4 from bottom), \\u201cIt can be seen that our proposed method has achieved a higher score.\\u201d, can we observe this from the Table 1 and 2? 2) \\u201cThe method we proposed has the fastest processing speed compared to other methods. Moreover, the method proposed in this paper has the fewest parameters compared to other deep-learning-based methods.\\u201d, it is suggested that a study about the parameters and FLOPs of the involved methods should be given.\\n5.\\tPlease carefully check the references. For example, \\u201cHummel R. Image enhancement by histogram transformation[J]. Unknown, 1975.\\u201d lacks the journal name.\\n6.\\tHigh-resolution figures should be given in the manuscript.\"}",
"{\"title\": \"Reply to \\\"author names are shown in paper\\\"\", \"comment\": \"This is our first-time submission to ICLR. I am very sorry to have made a mistake. Is there anything we can do to correct this mistake? Can we resubmit our paper with hiding the authors' names?\"}",
"{\"title\": \"Author names are shown in paper\", \"comment\": \"Ain't this a violation of double blind reviewing policy?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper uses U-net for underwater image restoration and enhancement. But, it is difficult to obtain realistic underwater images, thus this paper introduces a GAN-based method to generate realistic underwater images from in-air image and depth map pairs.\", \"Although this paper points out that the previous work (i.e. WaterGAN) generates color noise and the camera model is not suitable, how does this proposed method overcome these points? Please make it clear.\", \"The figures in this paper are too blurry to see them. To evaluate the effectiveness of the proposed method, the figures are important, thus, it would be better to make them clear.\", \"The technical contribution of the proposed method is not clear. The proposed method seems to be just using the existing techniques.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"[Update after rebuttal period]\\nIn response, the authors cannot clearly clarify the difference between this work with existing works integrating the physical model into the network. Thus I stay my original score.\\n\\n\\n[Original reviews]\\nThis paper proposed an unsupervised generative adversarial network for underwater generating realistic underwater images and haze removal, which can simultaneously deal with the color restoration and haze in the realistic underwater environment.\\n\\nFirstly, according to the widely used physical model in the image processing area, employed the UnderwaterGAN to trained parameters in advanced, and then use U-Net for color restoration and haze removal of underwater images. However, many existing works used the physical model to represent the imaging principles and using deep network to learn prior knowledge. Thus, I think the proposed idea is a little bit incremental.\\n\\nFor the experimental part, the experimental results fully demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods. Additionally, the ablation studies in the appendix also give us the intuition by using the different loss functions. Also, I suggest the authors demonstrate the proposed method on not only low-level, but also high-level vision tasks, e.g., underwater image target detection. \\n\\nFinally, the paper is well organized and sentence expression is also clear, but small errors that are correctable.\"}"
]
} |
r1lZgyBYwS | HiLLoC: lossless image compression with hierarchical latent variable models | [
"James Townsend",
"Thomas Bird",
"Julius Kunze",
"David Barber"
] | We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results. | [
"compression",
"variational inference",
"lossless compression",
"deep latent variable models"
] | Accept (Poster) | https://openreview.net/pdf?id=r1lZgyBYwS | https://openreview.net/forum?id=r1lZgyBYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"P-F2reYdG",
"B1lCFME2iH",
"r1lq-uOFiB",
"SJx8SmutiS",
"HJl-XGOYir",
"BJeuOnp19S",
"BygJ9qoy9S",
"BygIsY5aKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724765,
1573827206450,
1573648386251,
1573647166401,
1573646872918,
1571966064069,
1571957383243,
1571821981713
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1496/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1496/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1496/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1496/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a lossless image compression consisting of a hierarchical VAE and using a bits-back version of ANS. Compared to previous work, the paper (i) improves the compression rate performance by adapting the discretization of latent space required for the entropy coder ANS (ii) increases compression speed by implementing a vectorized version of ANS (iii) shows that a model trained on a low-resolution imagenet 32 dataset can generalize its compression capabilities to higher resolution.\\n\\nThe authors addressed properly reviewers' concerns. Main critics which remain are (i) the method is not practical yet (long compression time) (ii) results are not state of the art - but the contribution is nevertheless solid.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"I am satisfied with the author's rebuttal, and will keep my rating at \\\"Accept\\\".\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your review. To address the points you raised:\\n\\n>I would like the authors to revise their statement of state of the art compression performance on page 7 directly below table 2. ... It would be beneficial for the author to include the scores of [IDF generalization].\\n\\nWe have revised the statement below Table 2 to more accurately reflect the data in the table. We have also added the results for IDF generalization to the table.\\n\\n>Because of the buffer of initial bits required by bit-back coding, the compression/decompression of several data points has to be sequential if one wants to amortize this cost over several data points. Compression methods that don\\u2019t rely on bits-back coding, such as IDF [3], do not have this issue and can compress/decompress data points in parallel. Since this influences the practical usability of the model, it would be transparent to mention this.\\n\\nWe have added mention of these models to the last paragraph of the Discussion section, where we felt this point fitted.\\n\\n>My final main question is on the equivalence of evaluation methods of Bit-Swap and Hilloc on imagenet. The Bit-Swap paper states: \\u201cFor MNIST, CIFAR-10 and Imagenet (32 \\u00d7 32) we report the bitrates, shown in Table 5, as a result of compressing 100 datapoints in sequence (averaged over 100 experiments)...\\u201d. This means that Bit-Swap is not evaluated on the full test set of Imagenet 32 (as this contains 50000 images), as opposed to Hilloc. Do the authors think this is a problem?\\n\\nThis implies that the Bit-Swap results may be noisier than ours. They also give \\u2018average net bitrate\\u2019 values in tables 2-4, which are close to the values in their table 5. We presume that the error bounds that they give are 2 standard deviations, from the empirical distribution of the \\u2018100 experiments\\u2019 they ran. We think it\\u2019s likely that the figures they give are accurate enough to reasonably compare to ours.\\n\\n>Furthermore, in the case of \\u201cfull\\u201d Imagenet, Bit-swap uses a subset of 100 images for evaluation and crops them to a multiple of 32 pixels in height and width, so that bit-swap can compress patches and the result is the average of patches for on image. Hilloc appears to take 500 random images and does not state anything about cropping. Could the authors comment on this?\\n\\nWe have updated our results after benchmarking on a larger subset of 2000 (not 500) images, and have updated the paper to reflect this. The Bit-Swap result here may be affected by noise due to the smaller scale of their experiment, however again we have assumed that it is accurate enough for comparison. The BitSwap images are indeed cropped so that the side lengths are multiples of 32. For HiLLoC this was not necessary. We think the comparison still makes sense even with this slight difference, and we have added a footnote to explain the difference between our full size ImageNet experiment and the one in Bit-Swap.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your review. To address the points you raised:\\n\\n> ... put a full description of the neural network used\\n\\nWe have now added a detailed description of the VAE that we used, in Appendix E.\\n\\n> the authors also need to disclose how long it took to compress an average ImageNet image\\n\\nWe\\u2019ve found the encode/decode times are roughly linear in the number of pixels, and you can extrapolate from the graph. To demonstrate this point, we\\u2019ve timed compressing ImageNet images with dimension 500x374, which is slightly over the average size. The compression takes 29s. We agree that it's important to disclose this and we\\u2019ve added this information to the paper near the end of Section 4. We also agree that these times are slow, and mean that the method is not yet practical. However, we have improved significantly over existing work, and we see plenty of scope for further optimizing the runtime of the algorithm. In particular, quite a lot of code is still running in the Python interpreter, which could be written in another, compiled language. Also the hierarchical VAE that we used was mainly chosen to demonstrate the scalability of the method and to ensure an excellent compression rate, and not for its practicality. A smaller model would almost certainly be more appropriate in the long run, and distillation could be used to minimize runtime whilst maintaining similar compression performance.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your review. We address the following point:\\n\\n> However, I have to say that the part describing the vectorized implementation of their method was rather confusing and the paper could benefit a lot from clarifying this part.\\n\\nIt\\u2019s difficult to give a proper description of this without going into a lot more detail about ANS implementation. To aid readers who are confused and/or curious, we\\u2019ve added a recommendation, in the second paragraph of Section 3.2, to refer to our code and to Giesen (2015) for more detail.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a method for lossless image compression consisting of a VAE and using a bits-back version of ANS. The results are very impressive on a ImageNet (but maybe not so impressive on the other benchmarks). The authors also discuss how to speed up inference and present some frightening runtime numbers for the serial method, and some better numbers for the vectorized version, though they're nowhere close to being practical.\\n\\nI think this paper should be accepted. It has a better description of the BB ANS algorithm than I have read before, and it's a truly interesting direction for the field, despite the lack of immediate applicability.\\n\\nIf we are to accept this paper, I suggest the authors put a full description of the neural network used (it's barely mentioned). I think the authors also need to disclose how long it took to compress an average imagenet image (looking at the runtime numbers for 128x128 pixels is scary, but at least we'd get a better picture on the feasability).\\n\\nOverall, due to the fact that the authors pledge to open source the framework, I think some of the details will be found in the code, once released. I think this is an important step because there are so many details in this paper that one cannot reasonably reproduce the work by simply reading the text of this paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a method for lossless image compression based on using\\nfully convolutional VAE models. These models are shown to generalize well when\\nthey are trained on small images (e.g. 32x32 and 64x64) and then applied to\\nmuch larger images. The method is based on a fully vectorized implementation of\\nbits back with asymetric numeral systems coding which is much faster than\\nprevious non-vertorized implementations. An improvement with respect to similar\\nmethods is to use a dynamic discretization of the latent variables which avoids\\nhaving to callibrate a static discretization (as in previous methods).\\nFinally, the authors initialize the bis back process with information about a\\nfew initial images which are coded using a different codec. The experiments\\nperformed illustrate the gains of the method in terms of compression ratio and\\nspeed.\", \"clarity\": \"The paper is extremelly well writen and it is very easy to read. The athors\\nindicate that they will release open-source code to implement all their\\nresults, which is very wellcome to improve reproducibility. However, I have to\\nsay that the part describing the vectorized implementation of their method was\\nrather confusing and the paper could benefit a lot from clarifying this part.\", \"quality\": \"The experiments performed are sound and illustrate the gains produced by their\\nmethod (although they do not achieve state of the art results). In particular,\\nthe experiments show the speed up gain by the proposed vectorization and the gains\\nproduced by the dynamic discretization. The experiments also show how the methods\\ntrained on smaller images generalize well to larger images.\", \"novelty\": \"The proposed approach is novel up to my knowledge. Although the methodological\\ninnovations are not that advanced, the vectorization in the specific\\napplication considered is novel, as well as the dynamic discretization.\", \"significance\": \"The proposed contributions are significant in my opinion. The vectorization\\napproach can be very useful in practice and the dynamic discretization can also\\nbe useful as shown by the experiments. One criticism could be that the authors\\ndo not achieve state of the art results, but I consider this a minor thing.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper focuses on lossless source compression with bits back coding for hierarchical fully convolutional VAEs. The focus/contribution is three-fold: 1. Improve the compression rate performance by adapting the discretization of latent space required for the entropy coder ANS. The newly proposed discretization scheme allows for a dependency structure that is not restricted to a Markov chain structure in the encoder model q(z|x) and in the generative part of the model p(x,z). This is in contrast with bit-swap[1], which requires a markov chain structure. The dependency structure that is allowed in the proposed method is widely known to perform better than a markov chain structure, which can explain why it improves significantly over Bit-swap [1] (another hierarchical VAE compression algorithm that uses bits back coding.) 2. Increasing compression speed by implementing a vectorized version of ANS, and heaving an ANS head in the shape of a pair of arrays matching that of the latent variable and the observed variable. The latter allows for simultaneous encoding of the latent with the prior distribution and the image with the decoder distribution. 3. Showing that a model trained on a low-resolution imagenet 32 dataset can generalize its compression capabilities to higher resolution datasets with convincing results.\", \"decision\": \"Accept.\\nThis paper is clearly written, makes clear claims and supports these claims with convincing experiments. The contributions are of practical use and I expect future work to benefit from this paper.\", \"supporting_arguments_for_decision\": \"The paper is well motivated; off the shelf compression algorithms such as PNG are also not trained on every dataset separately, and cross-dataset generalization is important if this model should be used in practice for many different images from different datasets and of different resolutions. \\n\\nThe paper clearly supports the main claims. It improves upon the previous bits-back coding-based hierarchical VAE [1]. The only hypothesis that is not checked is the one that hypothesizes that the lower bpd for higher resolution images is due to the lower ratio of edge pixels versus non-edge pixels, but this is not a dealbreaker from my point of view. \\n\\nI would like the authors to revise their statement of state of the art compression performance on page 7 directly below table 2. \\u201cThe fact that HiLLoC achieves state of the art compression rates relative to the baselines even under a change of distribution is striking, and provides strong evidence of its efficacy as a general method for lossless compression of natural images\\u201d. This is sentence should be made more nuanced as the proposed model only improves on Bit-Swap, but is still significantly outperformed by Local bits back coding (LBB [2]), and in the case of cifar-10 also by integer discrete flows (IDF [3]). On the other hand, it would be useful to still state that LBB is trained on every dataset separately, as well as IDF. Note also that in [3], a model that is trained on Imagenet32 and evaluated on the other datasets is also reported (see table 1 in [3]). 
It would be beneficial for the author to include the scores of this model, as the proposed method seems to perform slightly better at generalizing to new datasets.\\n\\nBecause of the buffer of initial bits required by bit-back coding, the compression/decompression of several data points has to be sequential if one wants to amortize this cost over several data points. Compression methods that don\\u2019t rely on bits-back coding, such as IDF [3], do not have this issue and can compress/decompress data points in parallel. Since this influences the practical usability of the model, it would be transparent to mention this. \\n\\nMy final main question is on the equivalence of evaluation methods of Bit-Swap and Hilloc on imagenet. The Bit-Swap paper states: \\u201cFor MNIST, CIFAR-10 and Imagenet (32 \\u00d7 32) we report the bitrates, shown in Table 5, as a result of compressing 100 datapoints in sequence (averaged over 100 experiments)...\\u201d. This means that Bit-Swap is not evaluated on the full test set of Imagenet 32 (as this contains 50000 images), as opposed to Hilloc. Do the authors think this is a problem? \\nFurthermore, in the case of \\u201cfull\\u201d Imagenet, Bit-swap uses a subset of 100 images for evaluation and crops them to a multiple of 32 pixels in height and width, so that bit-swap can compress patches and the result is the average of patches for on image. Hilloc appears to take 500 random images and does not state anything about cropping. Could the authors comment on this?\\n\\n\\n\\nAdditional feedback to improve paper (not part of decision assessment):\\n- In the introduction, first paragraph: \\u201c the method can achieve an expected message length equal to the variational free energy, often referred to as the evidence lower bound (ELBO) of the model. \\u201c \\u2192 \\u201c the method can achieve an expected message length equal to the variational free energy, often referred to as the negative evidence lower bound (ELBO) of the model. \\u201c\\n- Section 3.2, last paragraph: It is not clear if in practice the latent and image are actually encoded in parallel as the author states that this is \\u201cin theory\\u201d possible. \\n- Page 4: \\u201c... we found that most of the compute time for our compression was spent in neural net inference, \\u2026\\u201d I assume you mean \\u201cinference\\u201d in any part of the encoder or decoder, and not specifically approximate inference of the encoder network. Perhaps clarify this to avoid confusion?\\n- Section 4: When referring to the ResnetVAE by Kingma et al, it would be appropriate to also cite [4], as this is very similar to resnetVAE\\u2019s and was released earlier.\\n\\n\\n\\n[1] F. H. Kingma, P. Abbeel, and J. Ho. Bit-Swap: recursive bits-back coding for lossless compression with hierarchical latent variables. In International Conference on Machine Learning (ICML), 2019.\\n[2] Jonathan Ho, Evan Lohn, and Pieter Abbeel. Compression with Flows via Local Bits-Back Coding. arXiv e-prints, 2019.\\n[3] Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, and Max Welling. Integer Discrete Flows and Lossless Compression. arXiv e-prints, 2019.\\n[4] C. K. S\\u00f8nderby, T. Raiko, L. Maal\\u00f8e, S. K. S\\u00f8nderby, and O. Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems (NIPS), 2016.\"}"
]
} |
rJebgkSFDB | Learning to Learn Kernels with Variational Random Features | [
"Haoliang Sun",
"Yingjun Du",
"Jun Xu",
"Yilong Yin",
"Xiantong Zhen",
"Ling Shao"
] | Meta-learning for few-shot learning involves a meta-learner that acquires shared knowledge from a set of prior tasks to improve the performance of a base-learner on new tasks with a small amount of data. Kernels are commonly used in machine learning due to their strong nonlinear learning capacity, but they have not yet been fully investigated in the meta-learning scenario for few-shot learning. In this work, we explore kernel approximation with random Fourier features in the meta-learning framework for few-shot learning. We propose learning adaptive kernels by meta variational random features (MetaVRF), which is formulated as a variational inference problem. To explore shared knowledge across diverse tasks, our MetaVRF deploys an LSTM inference network to generate informative features, which can establish kernels of high representational power with low spectral sampling rates, while also being able to quickly adapt to specific tasks for improved performance. We evaluate MetaVRF on a variety of few-shot learning tasks for both regression and classification. Experimental results demonstrate that our MetaVRF can deliver much better or competitive performance than recent meta-learning algorithms. | [
"Meta-learning",
"few-shot learning",
"Random Fourier Feature",
"Kernel learning"
] | Reject | https://openreview.net/pdf?id=rJebgkSFDB | https://openreview.net/forum?id=rJebgkSFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ixswxdHvoU",
"B1lEFrL9iB",
"S1xu541KiH",
"Bkg53sYroB",
"rJeFIsYHiH",
"rygBguFrjr",
"SygxY8KriH",
"HJlxcBYSiH",
"BkgAgpcVcr",
"rkx5Y4zMcr",
"BygdMZIoYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724732,
1573705083768,
1573610640064,
1573391281695,
1573391184825,
1573390317487,
1573389944290,
1573389703680,
1572281590282,
1572115585578,
1571672335645
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1495/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1495/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1495/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1495/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper looks at meta learning using random Fourier features for kernel approximations. The idea is to learn adaptive kernels by inferring Fourier bases from related tasks that can be used for the new task. A key insight of the paper is to use an LSTM to share knowledge across tasks.\\n\\nThe paper tackles an interesting problem, and the idea to use a meta learning setting for transfer learning within a kernel setting is quite interesting. It may be worthwhile relating this work to this paper by Titsias et al. (https://arxiv.org/abs/1901.11356), which looks at a slightly different setting (continual learning with Gaussian processes, where information is shared through inducing variables).\\n\\nHaving read the paper, I have some comments/questions:\\n1. log-likelihood should be called log-marginal likelihood (wherever the ELBO shows up)\\n2. The derivation of the ELBO confuses me (section 3.1). First, I don't know whether this ELBO is at training time or at test time. If it was at training time, then I agree with Reviewer #1 in the sense that $p(\\\\omega)$ should not depend on either $x$ or $\\\\mathcal {S}$. If it is at test time, the log-likelihood term should not depend on $\\\\mathcal{S}$ (which is the training set), because $\\\\mathcal S$ is taken care of by $p(\\\\omega|\\\\mathcal S)$. However, critically, $p(\\\\omega|\\\\mathcal S)$ should not depend on $x$. I agree with Reviewer #1 that this part is confusing, and the authors' response has not helped me to diffuse this confusion (e.g., priors should not be conditioned on any data).\\n3. The tasks are indirectly represented by a set of basis functions, which are represented by $\\\\omega^t$ for task $t$. In the paper, these tasks are then inferred using variational inference and an LSTM. It may be worthwhile relating this to the latent-variable approach by Saemundsson et al. (http://auai.org/uai2018/proceedings/papers/235.pdf) for meta learning. \\n4. The expression \\\"meta ELBO\\\" is inappropriate. This is a simple ELBO, nothing meta about it. If we think of the tasks as latent variables (which the paper also states), this ELBO in equation (9) is a vanilla ELBO that is used in variational inference.\\n5. For the LSTM, does it make a difference how the tasks are ordered?\\n6. Experiments: Figure 3 clearly needs error bars, and MSEs need to be reported with error bars as well; \\n6a) Figures 4 and 5 need error bars.\\n6b) Error bars should also be based on different random initializations of the learning procedure to evaluate the robustness of the methods (use at least 20 random seeds). I don't think any of the results is based on more than one random seed (at least I could not find any statement regarding this).\\n7. Table 1 and 2: The highlighting in bold is unclear. If it is supposed to highlight the best methods, then the highlighting is dishonest in the sense that methods, which perform similarly, are not highlighted. For example, in Table 1, VERSA or MetaVRF (w/o LSTM) could be highlighted for all tasks because the error bars are so huge (similar in Table 2).\\n8. One of the things I'm missing completely is a discussion about computational demand: How efficiently can we train the model, and how long does it take to make predictions? It would be great to have some discussion about this in the paper and relate this to other approaches. \\n9. The paper evaluates also the effect of having an LSTM that correlates tasks in the posterior. 
The analysis shows that there are some marginal gains, but none of the is statistically significant. I would have liked to see much more analysis of the effect/benefit of the LSTM.\", \"summary\": \"The paper addresses an interesting problem. However, I have reservations regarding some theoretical bits and regarding the quality of the evaluation. Given that this paper also exceeds the 8 pages (default) limit, we are supposed to ask for higher acceptance standards than for an 8-pages paper. Hence, putting everything together, I recommend to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your feedback again.\", \"comment\": \"We are very glad to hear that our responses resolve most of your questions.\\n\\nWe now would like to further explain the meta prior $p(\\\\omega| x, S)$. We show technically in the derivation of the meta ELBO (eqs. 14-17 in the appendix) how the meta prior is conditioned on the input $x$, from which we provide some intuitive explanation to hopefully make it clear to you. The derivation starts with the conditional predictive log-likelihood $\\\\log p(y|x,S)$. That is, we would like to estimate the probabilistic distribution of the target $y$ of the input $x$, and we need to condition on the input data and as well as the context that is the support set $S$. We introduce the latent variable $\\\\omega$ which is the random base in our case. $\\\\omega$ is used to generate the random features for $x$ and therefore should be dependent on $x$ and we further make it conditional on $S$ to leverage the support set under the meta-learning setting. This gives rise to the conditional distribution $p(\\\\omega|x,S)$, based on which the conditional predictive log-likelihood can be re-written as (14). By introducing the variational distribution $q(\\\\omega|S)$ and applying the Jensen's inequality, we achieve the meta ELBO in eq. (17). In analogy to the ELBO in conventional variational inference, we name $p(\\\\omega|x,S)$ as the meta prior. Note that in the variational distribution $q(\\\\omega|S)$ we also make $\\\\omega$ condition on the support set $S$ to leverage the meta-learning setting. Maximizing the meta ELBO is to minimize the KL between the variational distribution $q(\\\\omega|S)$ and the meta prior $p(\\\\omega|x,S)$, which encourages the model to extract information from the support set for the representation of $x$ in terms of random base $\\\\omega$. By optimizing over samples in the query set, the obtained variational distribution $q(\\\\omega|S)$ is able to infer from the support set the distribution over the base $\\\\omega$ that can generate informative random features for each sample $x$ in the query set. In addition, the meta prior is the conditional distribution on the input data $x$ can also be shown from the perspective of the minimization of the KL divergence between variational distribution $q(\\\\omega|S)$ and the posterior $p(\\\\omega|x,y,S)$ (eqs. (18)-(23)).\"}",
"{\"title\": \"Thanks for your detailed rebuttal\", \"comment\": \"Thanks for your detailed rebuttal, I think it resolves most of my questions. But I am still kind of skeptical with respect to the prior. Conditioning on $S^t$ is more similar to CVAE, while I don't see the intuitive explanation of conditioning on $x$.\"}",
"{\"title\": \"Thank you for your comments.\", \"comment\": \"Thanks for your insightful review and constructive comments. We especially thank you for your careful and detailed reviews on the notations, which helps make our presentation more precise.\\n\\nThank you for acknowledging that the idea of learning kernels in meta-learning is interesting. Yes, we totally agree with you on that learning a kernel is equivalent to learning a distance between objects. Indeed, our inference of spectral distributions depending on also previous episodes (leveraging shared knowledge across related tasks) is to learn such a powerful kernel that can provide a reasonable distance between objects. The promising performance on the versatility experiments with inconsistent settings between training and test also, to some extent, demonstrates the effectiveness of learning a power kernels by exploring related tasks for few shot learning.\\n\\nOur detailed responses to your questions are provided below.\"}",
"{\"title\": \"Thank you for your comments.\", \"comment\": \"1. Thank you for your careful review. We have updated the second term in eq. (2) with $\\\\alpha^t = \\\\Lambda\\\\left(\\\\Phi^{t}(X), Y\\\\right)$, where $S^t = \\\\{ X, Y \\\\}$, which is what we implemented. We also replaced $\\\\omega^{1:t-1}$ with $\\\\omega^{t}$ in Section 3.2 and Figure 2. The current base $\\\\omega^{t}$ is conditioned on $\\\\omega^{t-1}$ rather than $\\\\omega^{1:t-1}$. In our implementation, the current LSTM cell receives the previous state $c$ and combines it with the input to infer the adaptive spectral distribution, which is consistent with the updated notation now.\\n\\n\\n2. We now explain the likelihood $\\\\log p(y|x,S,\\\\omega)$, which is a predictive conditional log-likelihood. $y$ is the output random variable, whose distribution is a conditional distribution on the input variable $x$, the data from the support set $S$ and the random Fourier base $\\\\omega$. In our implementation, we obtain $\\\\alpha$ by solving the kernel ridge regression (KRR) in (eq. (3)) in which kernel is computed by using the random bases $\\\\omega$ and the support set $S$. Then we apply eq. (4) for the prediction of $y$ which is obtained by applying the softmax operation. In the optimization, we use the cross-entropy loss. \\n\\n\\n3. Thank you for this insightful comment. Yes, the meta-prior is slightly different from common practice in variational inference, e.g., variational auto-encoder (VAE). Since our task is supervised learning, the target $y$ to be predicted is conditioned on its input $x$ and the support set, we therefore makes the prior also conditioned on the input data and the support set, rather than using an uninformative prior $N(0,I)$. This is actually similar in spirit to the practice of conditional variational auto-encoder (CVAE).\\n\\nIn our implementation, we choose the permutation-invariant instance-pooling operation (Zaheer et al.,2017) to process the support set into a single vector. Given a data vector $x$, these two vectors are concatenated as the input of the prior network, which outputs the mean and variance for the prior distribution.\\n\\n4. Thank for your this insightful comment. The motivations of the modified LSTM are mainly in two folds: (i) to better explore the task dependence and (ii) to implement efficiently.\\n\\nWe're glad to re-clarify the last paragraph in Section 3.2 and all operations in eq .(13). \\nDuring inference, the cell state $c$ stores and accumulates the shared knowledge which is updated for each task throughout the course of learning. \\nWe simplify LSTM by removing the short-term memory and mainly rely on the \\\"forget-remember\\\" operations to accumulate the shared knowledge from a series of episodes. In eq. (13), the forget gate layer (the 1st line) and input gate layer (the 2nd line) in our LSTM can help refine the cell state (the 4-th line) and gain experience from a batch of tasks. After updating $c$, the task-specific information $e$ combined with the shared knowledge in $ c$ are used for inferring the adaptive spectral distribution (the 5-th line). We use $[ e]$ rather than $[ e, c^{t-1}]$ in the 3-rd line, because we consider the operation as a non-linear mapping of the input, which is unnecessary to use the previous state. In addition, removing the short-term memory can simplify LSTM structure and then promote efficiency during inference. We tried the regular LSTM initially and found our modified version performs better in our tasks.\\n\\n\\n5. 
Thank you for the suggestions on more details and results of baselines. For SNAIL, we use the number from Bertinetto et al., 2019) , in which the result of SNAIL is obtained using similar shallow networks as ours. In the original work of SNAIL, they use the deep ResNet-12 embedding (with larger scale filters from 64C to 256C) for miniImageNet and therefore their results are not directly comparable. Actually, we tried to implement SNAIL but we didn't find the official code from the authors, and we adopted the top rate code on Github (https://github.com/eambutu/snail-pytorch). With this code, we trained the SNAIL model with the same embedding network as ours but was not able to achieve the reasonable results (5-way 1-shot: 39.8$\\\\%$; 5-way 5-shot: 55.7$\\\\%$ on miniImageNet). Our finding is consistent with that in Bertinetto et al., 2019) (Here, we cite the sentence from Bertinetto et al. (2019): \\\"Moreover, it is paramount for SNAIL to make use of such deep embedding, as its performance drops significantly with a shallow one.\\\" ). \\n\\nWe have added the comparison with TADAM. To make relatively fair comparison, we compare TADAM with our result by using pre-trained embeddings since it uses the ResNet-12 as the backbone embedding. Detailed settings and comparison results are added in Appendix A.\"}",
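As a concrete illustration of point 2 above, the base-learner described there — KRR on random features built from the inferred bases, followed by a softmax over query predictions — can be sketched as follows. This is an assumption-laden toy rendering, not the authors' code: the shapes, the regularizer `lam`, and all function names are hypothetical.

```python
import numpy as np

def random_features(X, omega, b):
    # RFF map phi(x) = sqrt(2/D) * cos(x @ omega + b), with omega drawn
    # from the (task-adaptive) spectral distribution and b ~ U[0, 2*pi].
    D = omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

def krr_base_learner(X_s, Y_s, X_q, omega, b, lam=1e-3):
    # Closed-form KRR on the support set: alpha = (K + lam*I)^{-1} Y,
    # with K built from the random-feature map (cf. eq. (3) as described).
    Phi_s = random_features(X_s, omega, b)
    K = Phi_s @ Phi_s.T
    alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), Y_s)
    # Query predictions: kernel between query and support points, then a
    # softmax over classes (cf. eq. (4)); trained with cross-entropy.
    logits = random_features(X_q, omega, b) @ Phi_s.T @ alpha
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical 5-way 1-shot task with 16-dim embeddings and D = 64 bases.
rng = np.random.default_rng(0)
X_s, Y_s = rng.normal(size=(5, 16)), np.eye(5)
probs = krr_base_learner(X_s, Y_s, rng.normal(size=(3, 16)),
                         rng.normal(size=(16, 64)),
                         rng.uniform(0.0, 2 * np.pi, size=64))
print(probs.shape)  # (3, 5) class probabilities for the query points
```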
"{\"title\": \"Thank you for your comments.\", \"comment\": \"Thank you for your insightful review and constructive comments. Thank you for pinpointing the novel part of this work: using an LSTM based inference network for kernel learning with random features depending on previous tasks.\\n\\n1. Yes, we choose the meta prior distribution to be the spectral distribution of the basic Gaussian RBF family. Since the spectral distribution is Gaussian, we can conduct the inference efficiently by inferring $\\\\mu$ and $\\\\sigma$, while achieving highly representational kernels due to the strong nonlinear learning capability of the Gaussian kernel. Thank you for this comment. Yes, it is a great idea to use a mixture of Gaussian, which would be able to generate more informative kernels compared to a single Gaussian. We would like to explore it in our future work.\\n\\n2. Thank you for this comment. The improvements by using LSTM are statistically significant, which is based on a large number of runs (3,000) on each dataset. On the Omniglot dataset, the performance of most methods saturates, with the accuracy over $99\\\\%$, which would explain that the improvement using LSTM is slight on this dataset. On other datasets, such as miniImageNet, on the 5-way 1-shot task, the improvement is relatively large from 52.9$\\\\%$ to 54.3$\\\\%$.\\n\\n3. Thanks for this insightful comment. Indeed, kernel alignment offers a nice principle for kernel learning, which has been proven useful in conventional learning tasks [1][2]. However, it would not be directly applicable to few shot learning tasks by learning kernels using task-independent way due to that only a few training samples is available in each task (only one sample in one-shot learning) for training. This makes it hard, if not impossible, to learn a reasonable kernel for the task.\\n\\nActually, the our model can incorporate the kernel alignment principle into the objective for the base-learner, that is, we add a kernel alignment term in conjunction with the cross-entropy loss. We tried this in our original experiment, while we did not observe any performance gain by using kernel alignment in our experiments. We conjecture that it is because the cross-entropy loss is already powerful enough in this scenario.\\n\\nWe would also like to add that our method is fundamentally different from [1][2] though we are all kernel learning based on random features. \\n\\n(1) We learn to infer the spectral distributions from data of a specific task while exploring dependency of a set of related tasks. [1][2] learn an optimal configuration, i.e., weights of random bases, of random features, where the bases are drawn from the fixed spectral distribution.\\n\\n(2) Our method is a one-stage learning, for few shot recognition tasks, while [1][2] are in a two-stage learning way, for conventional learning tasks, where the kernel is learned in a separate prior stage.\\n\\n4. Thank you for this comment. The order of tasks does not matter in our model. We mainly leverage the \\\"remember-forget\\\" mechanism in LSTM to accumulate and refine the shared knowledge from a sequence of tasks. In our experimental implementation, we randomly sample a bunch of episodes from training data in each iteration, which also makes the order of tasks not matter.\"}",
"{\"title\": \"Thank you for your comments.\", \"comment\": \"Thank you for your insightful review and very supportive comments.\\n\\nWe're very glad to receive your comment that our work is well-motivated. Indeed, it was a gap to explore kernels that is proven a powerful tool in conventional learning scenarios for few shot learning under the meta-learning framework. Motivated to fill this gap, we propose learning adaptive kernels in the meta-learning setting for few shot learning, where we formulate it as a conditional variational inference problem. Moreover, our kernel learning based on random Fourier features is achieved by inferring task-specific spectral distributions while exploring task dependency by using an LSTM based inference network.\\n\\nThank you for your great suggestion. For your interest, we implement with pre-trained embeddings for comparison to see the difference in performance. Specifically, we use the pre-trained embedding extracted from widen residual networks (WRN-28-10) as used in (Rusu et al.,2019). As expected, the performance is improved over that trained from scratch. This is reasonable since features based on large-scale pre-trained embedding can be more informative. We have also made a head-to-head comparison with LEO (Rusu et al.,2019) using the same pre-trained embedding on the miniImageNet dataset. Our MetaVRF performs better than LEO, especially on the 5-way-1-shot task, which again shows the effectiveness of our MetaVRF. Detailed experimental settings and results have been added in Appendix A.\"}",
"{\"title\": \"Brief summary\", \"comment\": \"First of all, we would like to thank all reviewers for their insightful reviews, supportive comments and great suggestions.\", \"we_summarize_our_major_updates_as_follows\": \"1. We have added more experimental results and comparison with other methods, e.g., SNAIL and TADAM, which include the result using pre-trained embeddings and deep architectures. More comparison results are added in Table 3 associated with discussion in Appendix A.\\n2. We have updated the notations in eq. (2) and eq. (11) in Section 3.2 and in the caption of Figure 2.\\n3. We have added detailed settings for SNAIL in the penultimate paragraph of Section 5.2. The result of SNAIL on Omniglot has been added in Table 2.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper focuses on the topic of meta-learning for few-shot learning and explores kernel approximation with random fourier features for this problem. The authors propose to learn adaptive kernels by meta variational random features, and evaluate their approach on different few-shot learning tasks, comparing it against recent meta-learning algorithms.\\n\\nThe paper is well-motivated and well-written. On page 8, the authors mention related works that were not included for comparison because they rely on pre-trained embeddings or large-scale deep architectures. It would have been interesting to see the difference in performance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies meta-learning problem with few-shot learning settings. The author proposes a learn each task predictive function via the form of random Fourier features, where the kernel is jointly learned from all tasks. The novel part is the parametrization of inference network using LSTM such that the random feature samples of t-th task conditional depending on all previous task 1,...,t-1, which is an interesting way of modeling kernel spectral distribution. The experiment results show improvement of the proposed methods compared to SoTA meta learning algorithms.\\n\\nIn general, the writing of the paper is clear, and the proposed method is interesting and novel. However, there are parts missing in the experiment setting. I would love to increase my score if the author could address the following questions/comments:\\n(1) How do you choose the meta prior distribution? It should be a basic kernel family such as RBF Gaussian or mixture of RBF?\\n(2) In Table 1 and Table 2, the benefit of using LSTM only gives very marginal improvement over w/o LSTM. Are the results statistically significant? \\n(3) The experiment missed the simple kernel learning baseline, such as kernel alignment [1] and its variants [2]. If using these task-independent way to do kernel learning, what\\u2019s their performance compared to you proposed method? \\n(4) When learning the RFF spectral distribution using LSTM over a sequence of tasks, does the order of task matter? \\n\\n\\n[1] Learning kernels with random features, NIPS 2016.\\n[2] Implicit kernel learning, AISTATS 2019.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a meta-learning framework for learning adaptive kernels using a meta-learner. For representing kernels, the paper learns a variational posterior for the kernel features, by maximizing the Evidence lower Bound. Furthermore, to plug the kernel learning into the meta-learning framework, they let the variational feature posterior to condition on the current support set for adapting and to use a modified LSTM network for accumulating information. Empirically, they compare the proposed MetaVRF with multiple baselines in the standard fewshot classification benchmarks and demonstrate superior performance. They also illustrate that their adaptively-learnt Fourier feature outperforms the standard variational Fourier features.\\n\\nStrengths, \\n1, The idea of learning kernels in meta-learning is interesting. In fact, learning a kernel is equivalent to learning a distance between objects. If a reasonable distance between objects can be learnt, using the corresponding kernel should be able to achieve superior performance even if the kernel doesn't adapt in each episode. \\n2, The proposed method achieves competitive performances. In particular, Figure 5 shows how the performance changes when the test-shot and test-way are varied. It seems surprising that the MetaVRF achieves >90% accuracy for 100-way test when trained on only 5-way 5-shot.\\n\\nWeakness,\\n1, The notations in the paper are not well presented. (a) In eq(2), the formula $alpha^t=\\\\Lambda(\\\\Phi^t(x), y)$ is not exact, cuz $\\\\alpha^t$ should depend on the whole support set $S^t$ while $(x,y)$ is only one instance in $S^t$. (b) In eq(11), the variational posterior $q(w | S^t, w^{1:t-1})$ is not exact either. Because $w^{1:t-1}$ are random variables, they cannot be observed and cannot be conditioned on. Similar issues also exist in the caption of Figure 1.\\n2, The paper doesn't introduce what is the likelihood $log p(y| x, S, w)$. It is unclear how the kernel regression is adopted in classification. \\n3, The meta-prior $p(w| x, S)$ depends on the feature of the query point, which doesn't seem to be a common practice in variational inference. It would be beneficial if the authors could explain this and probably validate it empirically.\\n4, The motivations of the modified LSTM should be clarified more. Does the paper remove h_t in LSTM for removing short-term memory ? Why $\\\\hat{c}_t$ only depends on $e^t$ instead of $[e^t, c^{t-1}]$ ? \\n5, The paper compares with multiple competitive baselines. However, the settings for the baselines should be better presented. For example, it is strange that the numbers of SNAIL are different with the numbers in their paper. And SNAIL on Omniglot is not reported. Furthermore, another competitive method TADAM (Oreshkin et. al., 2019) should also be compared with.\"}"
]
} |
SkgWeJrYwr | Efficient Wrapper Feature Selection using Autoencoder and Model Based Elimination | [
"Sharan Ramjee",
"Aly El Gamal"
] | We propose a computationally efficient wrapper feature selection method - called Autoencoder and Model Based Elimination of features using Relevance and Redundancy scores (AMBER) - that uses a single ranker model along with autoencoders to perform greedy backward elimination of features. The ranker model is used to prioritize the removal of features that are not critical to the classification task, while the autoencoders are used to prioritize the elimination of correlated features. We demonstrate the superior feature selection ability of AMBER on 4 well-known datasets corresponding to different domain applications by comparing the accuracies with other computationally efficient state-of-the-art feature selection techniques. Interestingly, we find that the ranker model that is used for feature selection does not necessarily have to be the same as the final classifier that is trained on the selected features. Finally, we hypothesize that overfitting the ranker model on the training set facilitates the selection of more salient features. | [
"Wrapper Feature Selection",
"AMBER",
"Ranker Model",
"Generative Training",
"Wireless Subsampling"
] | Reject | https://openreview.net/pdf?id=SkgWeJrYwr | https://openreview.net/forum?id=SkgWeJrYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"p3FCnyfAX",
"BJxxp9v2oH",
"r1eJ2pRojS",
"SJl8rr9qsH",
"SJlJ-nrGir",
"H1gEqkE0qr",
"r1g9DMyP9B",
"BygvQJvJ5B",
"SylKDMA2FB",
"SklM7HIbdH",
"Syx0hfvpDB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798724699,
1573841591742,
1573805478634,
1573721405724,
1573178359252,
1572908939869,
1572430433765,
1571938078600,
1571770976825,
1569969434303,
1569710773634
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1494/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1494/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1494/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1494/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1494/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1494/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1494/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1494/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1494/Authors"
],
[
"~Ian_Connick_Covert1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"In this paper the authors propose a wrapper feature selection method that selects features based on 1) redundancy, i.e. the sensitivity of the downstream model to feature elimination, and 2) relevance, i.e. how the individual features impact the accuracy of the target task. The authors use a combination of the redundancy and relevance scores to eliminate the features.\\n\\nWhile acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns that were viewed by AC as critical issues:\\n(1) all reviewers agreed that the proposed approach lacks theoretical justification or convincing empirical evaluations in order to show its effectiveness and general applicability -- see R1\\u2019s and R2\\u2019s requests for evaluation with more datasets/diverse tasks to assess the applicability and generality of the proposed model; see R1\\u2019s, R4\\u2019s concerns regarding theoretical analysis; \\n(2) all reviewers expressed concerns regarding the technical issue of combining the redundancy and relevance scores -- see R4\\u2019s and R2\\u2019s concerns regarding the individual/disjoint calibration of scores; see R1\\u2019s suggestion to learn to reweigh the scores;\\n(3) experimental setup requires improvement both in terms of clarity of presentation and implementation -- see R1\\u2019s comment regarding the ranker model, see R4\\u2019s concern regarding comparison with a standard deep learning model that does feature learning for a downstream task; both reviewers also suggested to analyse how autoencoders with different capacity could impact the results.\\nAdditionally R1 raised a concern regarding relevant recent works that were overlooked. \\nThe authors have tried to address some of these concerns during rebuttal, but an insufficient empirical evidence still remains a critical issue of this work. To conclude, the reviewers and AC suggest that in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response addresses a few issues, but others remain\", \"comment\": \"I have read the other reviews and authors' responses. The response does clarify several important details about the experimental design, but my basic concerns still remain that there are many empirical choices which need further exploration to justify the described approach. Other reviewers also raised this issue, and the authors acknowledge that theoretical analysis is not in the scope of this paper.\\n\\nSo, the additional information about the experiments improve my view of the paper, I still do not believe it is ready for acceptance at this time.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": [\"Thank you for the review! We specially thank you for bringing our attention to important recent work on using Autoencoders for unsupervised feature selection. We have cited the mentioned work with brief description \\u2013 due to the page limit. We would like to note that while the goal of these works was unsupervised feature selection, here we have a classification task and our goal is to select the important features with respect to that task. It is possible, however, to replace our autoencoder with more complex designs that are detailed in these works, but since this was our first attempt at an AMBER-like framework, we chose the simplest architecture that delivers good results, as a proof of concept. We agree that these could be exciting venues of research for extending our and these works.\", \"Theoretical justification is out of scope of this work. We used 4 standard datasets belonging to 4 different applications to demonstrate the generality of the proposed algorithm.\", \"We believe that the Reuters dataset \\u2013 one of the 4 used for testing AMBER \\u2013 is based on categorical features. This may have been a mistake in the review.\", \"It is true that the focus of this work is on neural network ranker models. This is because we exploit the transferability property to lesser dimensions by \\u201cshutting down\\u201d features. We further clarified that in the revised submission.\", \"We agree with your comment: \\u201cIt would be interesting to explore more deeply how autoencoders with more capacity impact the results\\u201d. As illustrated in the first point of this response, since this is a first attempt at the proposed framework, we chose the simplest architecture that works as a proof of concept.\", \"The autoencoders are retrained in each step, because we use a different autoencoder architecture (different number of input and hidden layer units) for each step. This is unlike the fixed-architecture ranker model used for classification. It does not look straightforward to us how to use one autoencoder architecture for all steps (for example, what would be the number of units in the bottleneck layer?).\", \"In our trials, we could not find an advantage to unequal relevance and redundancy score weights, but we agree that this is an interesting topic of research.\", \"We believe that our method is suitable for backward feature selection, as the interactions between important features are maintained in every step, and hence always contribute to the ranking of features. On the other hand, with greedy forward feature selection, these interactions are no longer fully captured: Consider a simple forward feature selection example where we have already selected the most important feature a, and simulate the second selection of candidate features b and c; that simulation does not take into how selecting b would affect the relevance of c, and how selecting c would affect the relevance of b. This could be problematic since at this early stage, all these features could be very important to the task, and their combinations should be fully considered.\", \"We used K-fold cross validation, but we found it to lead to negligible results compared with fixing a validation set, so for simplicity of presentation, we replaced the term \\u201ccross-validation set\\u201d with \\u201cvalidation set\\u201d in the revised submission. 
Thanks for the careful observation!\", \"A method like RFE could take excessive amount of computation times to select features, particularly with a large number of features, and when it\\u2019s desired to select a small set (using backward elimination), as it requires retraining the model in every iteration. To capture the computational cost of retraining, we report in Table 2 of the revised submission the computation times needed to rank all the features for the different datasets with AMBER, and another version of AMBER that retrains the model in every iteration.\", \"Thanks for raising the point about the overlap of the selected feature sets. We are working on including these ratios \\u2013 and investigating potential insights \\u2013 in the final version.\", \"The hyperparameters used for training the models, and not mentioned in the main document, are now stated in a text file in the Github folder. Thanks for raising this important point!\", \"We fixed the references consistency issue in the revised submission.\", \"We fixed the Section 2 headers issue in the revised submission.\", \"For standard deviations, we performed the experiments 3 times, and provide the error bars and accuracy results for all 3 experiments in the Github folder. Because of the page limit, it was difficult to include them in the main document.\"]}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": [\"Thank you for the review! Please find below responses to each of the points raised:\", \"We believe that the implementation details were sufficiently described. As illustrated in the draft, the relevance and redundancy scores are normalized by their ranges. Further, the losses are computed based on the training rather than validation dataset for two reasons: a) It is typically larger, allowing for more accurate feature selection, as we found in our experiments. b) As illustrated in the discussion section, overfitting of the ranker model is not necessarily a bad phenomenon for our feature selection method, and that advantage can be best exploited by measuring losses based on the training dataset.\", \"We believe that the 4 datasets used demonstrate the generality of the algorithm, since they are well known standards and belong to 4 different applications. We are working on including results for the CIFAR-10 dataset as well. We would like to also note that AMBER can be easily modularized to avoid custom-tailoring for each application. Basically, all what is needed to \\u201ctailor\\u201d it for a new application is to provide the ranker model.\"]}",
"{\"title\": \"Response to Official Blind Review #4\", \"comment\": \"Hi,\\n\\nThanks much for your review! For the comments in your summary, we believe that simplicity is a good thing, if the proposed method is novel and useful. No justification or explanation for the comment about the work being incremental, so we choose not to respond to it.\\n\\nFor the first detailed comment, the main reason for choosing not to embed feature selection in the neural network classifier \\u2013 besides using the autoencoder and the explainability advantage of explicitly identifying important features - is that the ranker model can be different from the classifier, more specifically as overfitting the ranker model can be advantageous, since it is only used to distinguish between features, and not for final classification. Please check the discussion section for further illustration.\\n\\nFor the second detailed comment, theoretical analysis is beyond the scope of this work, and we are not claiming an optimality conclusion for this architecture, but it is rather used to affirm the usefulness of the autoencoder\\u2019s redundancy score in feature selection, and leads to sufficiently good results in our experimental setup. The intuition behind the d-1 dimension for the autoencoder\\u2019s bottleneck layer is that we are using it to assess how successful can the representation be reconstructed in the absence of one input feature, so it first tries to find the best d-1 dimensional representation, and then reconstruct from it. Our empirical trials also suggested this option for the architecture over few other alternatives.\\n\\nFor the third detailed comment, it is true that a feature can be both relevant and redundant, and in this case, its relevance score will suggest its saliency but its redundancy score will not. These two suggestions would then be considered, and depending on the final rank of the feature, it will be determined whether to include it. Having separate relevance and redundancy scores was a simple intuitive option that turned out to deliver impressive results, but we are investigating other more intricate scoring strategies as follow-ups to this work.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a wrapper feature selection method AMBER to use a single ranker model along with autoencoders to perform greedy backward elimination of features. Experimental results on various datasets show that their criterion outperforms other baseline methods. Generally, the paper is well written and easy to follow. However, the idea is simple and the originality seems incremental.\\n\\nFirst, although taking advantage of the power of neural network to help select better features sounds interesting, it is important for the author to discuss the benefit of doing it. As mentioned by the author, neural network essentially has the ability of selecting features. Instead of selecting features with AMBER explicitly, a more straightforward way is using all features as input and solving the downstream task with deep learning model. Feature selection will be automatically conducted during the learning process. It will be better if the author can explain more about the benefit and motivation of introducing feature selection explicitly in this scenario.\\n\\nSecond, the author proposed to use an autoencoder with one hidden layer consisting of d\\u22121 hidden neurons to calculate feature\\u2019s redundancy score. Is there any specific reason for including d-1 hidden neurons. It will be better if the author can give some theoretical analysis of it.\\n\\nThird, the author calculates the redundancy score and relevance score independently and combine them together to obtain the saliency score. However, it seems unreasonable to regard relevance and redundancy as two independent factors and stiffly combine them. For example, a feature can be both relevant and redundant. Should we eliminate it? How the proposed method solve this case?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The article \\\"Efficient Wrapper Feature Selection using Autoencoder and Model Based Elimination\\\" considers the problem of feature selection for a broad class of machine learning models. The authors argue that it is important to consider the relevance of features for the considered supervised ML problem and redundancy of features. They propose the wrapper feature selection method based on this paradigm and report the results of the experimental comparison of the method with some approaches from the literature.\", \"the_proposed_amber_approach_consists_of_2_parts\": \"1. The ranking of features based on the sensitivity of some supervised ML model with respect to the particular feature.\\n2. The ranking of features with respect to their individual impact on the accuracy of the autoencoder trained on the features of the training data set.\\n\\nThe scores obtained on these 2 steps are added and the algorithm iteratively removes features with the lowest total score.\\n\\nI should note that the proposed approach is very general, but the paper gives very few details on the actual implementation. For example, it seems important to properly normalize relevance and redundancy scores before computing the final score but the paper doesn't discuss this issue. Also, there are many possible ways to compute losses. For example, one can use training or validation sets for that but the authors choose the training set without motivation. \\n\\nMost importantly, the experimental part of the paper considers just 4 datasets. I believe that the algorithms of such generality should be evaluated on a much broader selection of problems. The most important thing is whether it is possible to select hyperparameters of the method in a way that few human interventions are needed to achieve high-quality results.\\n\\nOverall, I am very concerned with making the particular instance of the proposed approach working on a vast selection of applied problems. The provided repository with code confirms my concerns as it doesn't provide the single algorithm but rather the collection of scripts tailored for particular problems considered.\\n\\nTo sum up, I think that while the motivation behind the paper is very natural, I am not convinced with experimental results and the overall applicability of the approach.\"}",
"{\"title\": \"Experiments Done\", \"comment\": \"We have conducted the aforementioned experiments, and will update the draft as soon as the rebuttal period begins. For AMBER with only relevance score (no autoencoder), the accuracy seems to be quite lower, which validates our intuition about the benefit of using the Autoencoder to capture correlations to reduce the generalization error. The entries in the new corresponding row in Table I would be (Fisher-25%: 97.38, Reuters-25%: 76.60, Cancer-25%: 94.04, RadioML-25%: 95.21, Fisher-10%: 93.29, Reuters-10%: 73.45, Cancer-10%: 89.65, RadioML-10%: 89.54).\\n\\nWith retraining the ranker model in every iteration, only a slight increase in accuracy is observed. The entries in the new corresponding row in Table I would be (Fisher-25%: 98.37, Reuters-25%: 81.25, Cancer-25%: 98.25, RadioML-25%: 99.73, Fisher-10%: 97.21, Reuters-10%: 78.11, Cancer-10%: 97.37, RadioML-10%: 97.49). However, retraining in every iteration incurs significant computation cost. In our experimental setup described in the draft, the training time increases from 10552 to 24202 seconds for MNIST, from 21710 to 29005 seconds for Reuters, from 40 to 739 seconds for Cancer, and from 26417 to 42533 seconds for RadioML , which validates our intuition about simulating the removal of features without retraining for computational efficiency while maintaining good performance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors present an iterative approach for feature selection which selects features based both on the relevance and redundancy of each feature. The relevance of each feature is determined using a mild variant of the Feature Quality Index; essentially, the relevance is computed as the loss in model performance when setting each feature value to the mean and measuring the change in performance. Similarly, the redundancy of each feature is determined by comparing the reconstruction loss of an autoencoder when setting the feature value to its mean for all training samples. These two values are combined to give a single score for each feature at each iteration. The feature with the worst value is removed. A limited set of experiments suggests the proposed approach mildly outperforms other efficient feature selection methods.\\n\\nMajor comments\\n\\nThe paper does not include relevant, recent work on using autoencoders for feature selection, such as [Han et al., ICASSP 2018; Bal\\u0131n et al., ICML 2019], among others. Thus, it is difficult to discern how this paper either theoretically or empirically advances the state of the art.\\n\\nI found the proposed approach to efficient feature selection reasonable. However, there is no theoretical justification for the approach. Thus, I would expect a thorough empirical analysis. Only a few limited experiments on toy datasets (and one slightly more challenging one) are given.\\n\\nThe paper is not well-written. For example, it seems as though the proposed approach is not applicable to datasets with categorical features. It is not obvious (and, presumably, would need to be shown empirically) if the mode could be used to replace categorical values analogously to how the mean is used for real-valued features. Alternatively, one could imagine one-hot encoding the categorical variables and grouping them in some manner similar to that used for the RadioML pairs (since the one-hot values are obviously highly correlated). However, the authors do not address these issues.\\n\\nSimilarly, the entire discussion in Section 3 seems to assume the ranker model will be some sort of neural network. However, as far as I can tell, the ranker model is treated as a black box, so it could easily be some random forest model, etc. If there are some implicit assumptions that the ranker model is a neural network, this should be made explicit; if not, the discussion should be revised (and, of course, non-neural models should be used in the experiments).\\n\\nThe approach seems to heavily depend on the ability of the autoencoder to reconstruct the input; however, it is unclear how the structure/capacity of the autoencoder affects the performance of the algorithm. For example, the authors propose a relatively simple structure, presumably to maintain computational efficiency. It would be interesting to explore more deeply how autoencoders with more capacity impact the results.\\n\\nIt is unclear why the autoencoder is retrained at each step compared to just setting the removed feature values to the respective means, as is done with the ranker model.\\n\\nClearly, the relevance and redundancy scores could be weighted unequally when selecting the feature to remove. 
It would be interesting to explore how different combinations affect the results.\\n\\nIt seems that the experiments only consider backward feature selection approaches. Including forward feature selection approaches would add useful context for how the proposed approach compares to other strategies.\\n\\nMinor comments\\n\\nThe cross-validation scheme used is not clear. While the authors mention that three runs are used to estimate performance variance, they do not describe if this is 3-fold cross validation, some Monte Carlo cross validation, or if the same splits are used all three times and just the random seeds are different.\\n\\nWhile methods like RFE have significantly higher computational cost than the methods considered here, it would be helpful to include it for at least one of the datasets to provide context on how much the less costly methods \\u201close\\u201d.\\n\\nWhat is the overlap in the selected features? both among the different methods and among the different folds for the same method.\\n\\nHow were the hyperparameters for the various models chosen?\\n\\nTypos, etc.\\n\\nThe references are not consistently formatted.\\n\\nThe Section 2 headers all have an unnecessary \\u201c0\\u201d in them (e.g., \\u201c2.0.1\\u201d).\\n\\nTable 1 should include the standard deviations.\"}",
"{\"comment\": \"Hi Ian,\\n\\nThanks for your valuable suggestions. We agree that these are important points, and are currently working on incorporating them by adding:\\n\\n- Accuracy and Runtime comparisons when the ranker model is retrained in every iteration\\n\\n- Accuracy comparisons when only the relevance score is used\", \"title\": \"Excellent Suggestions\"}",
"{\"comment\": \"I have two questions about your method.\\n\\n1) What is the impact of not retraining your \\\"ranker model\\\" after each set of feature eliminations? It seems like your method would make suboptimal selections, because the ranker model isn't able to adapt to the removed features.\\n\\nThe basic idea of measuring the loss when a feature is imputed by its mean, which you call a \\\"relevance score,\\\" is discussed in a highly cited paper from the 90's, see \\\"Neural Network Feature Selector\\\" (Setiono & Liu, 1997). It's like FQI, but it measures the change in loss instead of the change in the output. However, one difference with this work is that they retrain the model after each elimination, which seems like it would work better.\\n\\nYou could easily adapt your method to include model retraining. It shouldn't change the runtime significantly, because your method already requires retraining the autoencoder at every iteration. If you don't adapt your method, you should at least provide a comparison so you can convincingly claim that retraining isn't necessary.\\n\\n2) Why is the autoencoder aspect of your method necessary? If a feature is redundant, that should already be apparent from the \\\"relevance score,\\\" because the model should be able to make accurate predictions without it.\\n\\nTo make a compelling case for the necessity of the autoencoder, you may consider performing an ablation experiment.\", \"title\": \"Model retraining, and importance of autoencoder\"}"
]
} |
r1gelyrtwH | Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics | [
"Sungyong Seo*",
"Chuizheng Meng*",
"Yan Liu"
] | Sparsely available data points cause numerical error on finite differences which hinders us from modeling the dynamics of physical systems. The discretization error becomes even larger when the sparse data are irregularly distributed or defined on an unstructured grid, making it hard to build deep learning models to handle physics-governing observations on the unstructured grid. In this paper, we propose a novel architecture, Physics-aware Difference Graph Networks (PA-DGN), which exploits neighboring information to learn finite differences inspired by physics equations. PA-DGN leverages data-driven end-to-end learning to discover underlying dynamical relations between the spatial and temporal differences in given sequential observations. We demonstrate the superiority of PA-DGN in the approximation of directional derivatives and the prediction of graph signals on the synthetic data and the real-world climate observations from weather stations. | [
"physics-aware learning",
"spatial difference operators",
"sparsely-observed dynamics"
] | Accept (Poster) | https://openreview.net/pdf?id=r1gelyrtwH | https://openreview.net/forum?id=r1gelyrtwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CgcRRDdKTd",
"S1x-cmNsiB",
"HJlGM8Fwsr",
"SJg0q7YDjH",
"H1lB2JFwiS",
"ByetFVF0tB",
"Bkgw7nNRKH",
"S1gtmEPptS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724671,
1573761929229,
1573520905893,
1573520277688,
1573519276520,
1571882113171,
1571863583209,
1571808288829
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1493/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1493/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1493/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"All reviewers agree that this research is novel and well carried out, so this is a clear accept. Please ensure that the final version reflect the reviewer comments and the new information provided during the rebuttal\", \"title\": \"Paper Decision\"}",
"{\"title\": \"\\u201c1. Evaluation on More datasets\\u201d\", \"comment\": \"We tested our proposed method and baselines on the NEMO sea surface temperature (SST) dataset (available at http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024)\\nWe first download the data in the area between 50N-65N and 75W-10W starting from 2016-01-01 to 2017-12-31, then we crop the [0, 550] - [100, 650] square from the area and sample 250 points from the square as our chosen dataset. We divide the data into 24 sequences, each lasting 30 days, and truncate the tail. All models use the first 5-day SST as input and predict the SST in the following 15 and 25 days. We use the data in 2016 for training all models and the left for testing.\\n\\nFor StandardOP, MeshOP and SDL, we test both options using linear regression and using RGN for the prediction part and report the best result. The results show that all methods incorporating spatial differences gain improvement on prediction and that our proposed learnable SDL outperforms all other baselines.\\n\\nMean absolute error (10^\\u22122) for SST prediction\\n+--------+-----------+----------+----------+-----------+------------------+-------------+-----------+\\n| Step | VAR | MLP | GRU | RGN | StandardOP | MeshOP | SDL |\\n| 15 | 15.123 | 15.058 | 15.101 | 15.172 | 14.756 | 14.607 | 14.382 |\\n| 25 | 19.533 | 19.473 | 19.522 | 19.705 | 18.983 | 18.977 | 18.434 |\\n+--------+-----------+----------+----------+-----------+------------------+-------------+-----------+\"}",
"{\"title\": \"Response #3\", \"comment\": \"Thank you for your comments and suggestions to improve the paper. Below are our responses to the main points of your comments:\\n\\n>> \\u201c1. Evaluation on More datasets\\u201d\\nThanks for suggesting the evaluation on more datasets. The proposed method assumes that there are continuous phenomena governed by physics rules or equations and the phenomena are only observed at some locations (i.e., sparsely-observed dynamics). Under this assumption, we thought that climate data is ideal to evaluate our idea. \\nCurrently, we are looking for more datasets to support our idea and we are investigating to apply on Sea Surface Temperature dataset.\\n\\n>> \\u201c2. Uncover the relationship of gradients\\u201d\\nThis is a great suggestion and actually, discovering (or uncovering) hidden physics is one of the research topics related to physics with deep learning. In fact, while the main motivation of our work is different to these kinds of work, we believe that it is a great idea to extend our work to discovering latent rules since it makes data-driven models more interpretable.\\n\\n>> \\\"3. How sparse the data is?\\\"\\nFor the synthetic experiment in Section 3.1, we sampled 200 points in 2D space (x,y)\\u2208[-5.0, 5.0] x [-5.0, 5.0].\\nFor the synthetic experiment in Section 3.2, we sampled 250 points in 2D space (x,y)\\u2208[0, 2\\u03c0] x [0, 2\\u03c0].\\nFor the temperature prediction experiment in Section 4, \\n+---------------+----------------+---------------------+\\n| | Western | Southeastern |\\n| # Nodes | 191 | 230 |\\n+---------------+----------------+---------------------+\\n\\n>> \\\"3. Controlling the sparsity and evaluation the performance\\\"\\nWe changed the number of nodes to control the sparsity of data. Our proposed model outperforms others under various settings of sparsity on the synthetic experiment in Section 3.2.\\n\\nMean absolute error (10^\\u22122) for graph signal prediction\\n+---------------+-----------+-----------+------------------+-------------+------------+\\n| Graph | VAR | MLP | StandardOP | MeshOP | Ours |\\n| 250 nodes | 0.1730 | 0.1627 | 0.1200 | 0.1287 | 0.1104 |\\n| 150 nodes | 0.1868 | 0.1729 | 0.1495 | 0.1576 | 0.1482 |\\n| 100 nodes | 0.1723 | 0.1589 | 0.1629 | 0.1696 | 0.1465 |\\n+---------------+-----------+-----------+------------------+-------------+------------+\\n\\nFurthermore, we sampled 400 points and trained SDL as described in Section 3.1, and resampled fewer points (350,300,250,200) to evaluate if SDL generalizes less sparse setting. As the following table shows, MSE increases when fewer sample points are used. However, SDL is able to provide much more accurate gradients even if it is trained under a new graph with different properties. Thus, the results support that SDL is able to generalize the c setting.\\n\\nMean squared error (10^\\u22122) for approximations of directional derivatives.\\n+---------------+-------------------+---------------+-------------------+---------------+\\n| Functions | FinGrad(350) | SDL(350) | FinGrad(300) | SDL(300) |\\n| f2(x,y) | 2.88\\u00b10.11 | 1.03\\u00b10.09 | 3.42\\u00b10.14 | 1.14\\u00b10.12 |\\n+---------------+-------------------+---------------+-------------------+---------------+\\n| | FinGrad(250)| SDL(250) | FinGrad(200) | SDL(200) |\\n| | 3.96\\u00b10.17 | 1.40\\u00b10.10 | 4.99\\u00b10.31 | 1.76\\u00b10.10 |\\n+---------------+-------------------+---------------+-------------------+---------------+\\n\\n>> \\u201c4. 
Can your method handle the noisy data/labels?\\u201d\\nIn this work, we assume that SDL is able to learn more effective approximations of derivatives, which are essential elements in physics dynamics, and some noisy factors can be handled by data-driven learning similar to many deep models. We choose graph neural networks for prediction in temperature and the learnable parameters handle the noisy factors and missing physics.\\n\\n>> \\u201c5. Releasing the code and dataset\\u201d\\nWe will release the code and the dataset upon acceptance.\"}",
"{\"title\": \"Response #2\", \"comment\": \"Thank you for your comments and suggestions to improve the paper. Below are our responses to the main points of your comments:\\n\\n>> \\u201cJustification of why this particular parameterization is selected\\u201d\\nThanks for pointing out the motivation of the form, Eq1. As we mentioned in the draft, the main idea of SDL is to provide \\u201cmodulated gradients and Laplacian\\u201d which are more effective approximations of the derivatives than the approximation of finite derivatives. Since there are two variables (f_i and f_j) involved in the gradient and Laplacian, it is natural to introduce two learnable parameters. In fact, Eq 1 is a form of affine transform (excluding bias term). The reason why we didn\\u2019t follow w1*f_j + w2*f_i, which is more generic, is to distinguish the role of each term. In other words, w^(1) is a scaling term and w^(2) is a differencing term. By doing that, we can enforce some constraints to the learnable parameters (e.g., make 0<w<1 or w is positive, etc.) separately for some purpose and it is easier to see how the constraints affect the derivatives.\", \"several_confusions_or_concerns_about_the_synthetic_experiments\": \">> \\u201c1. Evaluation task and training task\\u201d\\nThe synthetic experiments (3.1 and 3.2) are under the supervised setting and therefore, the evaluation and training tasks are the same.\\n\\n>> \\u201c1. What does the method generalize?\\u201d\\nIn terms of the generalization, the SDL generalizes the b setting, New graphs with a similar number of different sampled points. In other words, the method can learn parameters to compute the derivatives from discrete samples and the parameters are still valid for the different but same number of sample points. This generalization is verified by our synthetic experiments in Section 3.1.\\n\\nFurthermore, if the number of samples is enough, it also generalizes the c setting, New graph with different properties (e.g. more or less sparse). \\nWe sampled 400 points and trained SDL as described in Section 3.1, and resampled fewer points (350,300,250,200) to evaluate if SDL generalizes less sparse setting. As the following table shows, MSE increases when fewer sample points are used. However, SDL is able to provide much more accurate gradients even if it is trained under a new graph with different properties. Thus, the results support that SDL is able to generalize the c setting.\\n\\nMean squared error (10^\\u22122) for approximations of directional derivatives.\\n+---------------+-------------------+---------------+-------------------+---------------+\\n| Functions | FinGrad(350) | SDL(350) | FinGrad(300) | SDL(300) |\\n| f2(x,y) | 2.88\\u00b10.11 | 1.03\\u00b10.09 | 3.42\\u00b10.14 | 1.14\\u00b10.12 |\\n+---------------+-------------------+---------------+-------------------+---------------+\\n| | FinGrad(250)| SDL(250) | FinGrad(200) | SDL(200) |\\n| | 3.96\\u00b10.17 | 1.40\\u00b10.10 | 4.99\\u00b10.31 | 1.76\\u00b10.10 |\\n+---------------+-------------------+---------------+-------------------+---------------+\\n\\nThe parameters consider the function values at each sampled point and spatial displacement between two points, thus if the dynamics/functions are changed, our parameters won\\u2019t be applicable without training on the new dataset.\\n\\n>> \\u201c2. 
Error bars on the synthetic experiment\\u201d\\nWe provide the standard deviation for the synthetic experiments in Section 3.1.\", \"table_1\": \"Mean squared error (10^\\u22122) for approximations of directional derivatives.\\n+---------------+--------------+---------------+--------------+---------------+--------------+\\n| Functions | FinGrad | MLP | GN | One-w | SDL |\\n| f1(x,y) | 6.42\\u00b10.47 | 2.12\\u00b10.32 | 1.05\\u00b10.42 | 1.41\\u00b10.44 | 0.97\\u00b10.39 |\\n| f2(x,y) | 5.90\\u00b10.04 | 2.29\\u00b10.77 | 2.17\\u00b10.34 | 6.73\\u00b11.17 | 1.26\\u00b10.05 |\\n+---------------+--------------+---------------+--------------+---------------+--------------+\\n\\nWe provide the standard deviation for the synthetic experiments in Section 3.2.\\nWe generated 3 random meshes for the synthetic experiment in Section 3.2 and reported the mean absolute errors of all methods on the graph signal prediction task. Results show that our proposed method outperforms baselines significantly.\", \"table_2\": \"Mean absolute error (10^\\u22122) for graph signal prediction\\n+----------------+----------------+------------------+----------------+----------------+\\n| VAR | MLP | StandardOP | MeshOP | SDL |\\n| 16.84\\u00b10.41 | 15.75\\u00b10.53 | 11.90\\u00b10.29 | 12.82\\u00b10.06 | 10.87\\u00b10.98 |\\n+----------------+----------------+------------------+----------------+----------------+\", \"minor_comments\": \">> \\u201cA related idea\\u201d\\nThis is a great suggestion. Actually, we are going to work for the applications of SDL and this point will be a possible future direction.\\n\\n>> \\u201cthe type definition of f and F\\u201d\\nIn Section 2.1, we define f as node feature and F as edge feature. While we believe that the definition is correctly described, if you could point out the contradiction, we will make it clear.\"}",
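For readers following the exchange about Eq. 1 above, the modulated operators can be written out explicitly. The following display is our reconstruction from the responses (the paper's exact sub/superscripting may differ); w^{(1)} is the scaling term and w^{(2)} the differencing term:

\[
(\nabla f)_{ij} \;=\; w^{(1)}_{ij}\, f_j \;-\; w^{(2)}_{ij}\, f_i,
\qquad
(\Delta f)_i \;=\; \sum_{j \in \mathcal{N}(i)} \Big( w^{(1)}_{ij}\, f_j - w^{(2)}_{ij}\, f_i \Big).
\]

Setting w^{(1)}_{ij} = w^{(2)}_{ij} = 1 recovers the plain finite differences f_j - f_i, so the learnable parameters act as data-driven corrections to the numerical operators, and constraints such as 0 < w < 1 can be imposed on each role separately, as the response notes.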
"{\"title\": \"Response #1\", \"comment\": \"Thank you for your comments and suggestions to improve the paper. Below are our responses to the main points of your comments:\", \"weaknesses\": \">> \\u201cA comparison to a graph learning model that is specifically designed for weather forecast\\u201d\\nIt would be a great comparison if there is a graph-based model specifically designed for a weather forecast. However, to the best of our knowledge, we haven\\u2019t found the existing graph-based model particularly designed for climate modeling under the sparsely-observed setting. If you are aware of any references about a graph learning model for weather forecasting, please let us know them.\\n\\n>> \\u201cAdding StandardOP or MeshOP might not help the prediction.\\u201d\\nIt is a good point that adding not-learnable operators (StandardOP or MeshOP) doesn\\u2019t help to reduce the prediction error. The inconsistent behaviors of RGN(StandardOP) and RGN(MeshOP) in Table 3 support the idea that incorporating some incorrect or irrelevant features is little helpful and it can even be harmful. As Section 3.1 shows, the operators having no learnable parameters suffer substantial numerical error under the sparse setting and it causes the diminished prediction power when the operators are used. On the other hand, the consistent prediction power from PA-DGN is the evidence that the physics-inspired features from SDL are significantly helpful and provide a more effective inductive bias.\\n\\n>> \\u201cWhat value for h was used?\\u201d\\nThanks for pointing out the confusion. Overall, there are two GNNs involved in PA-DGN; (1) GNNs in SDL and (2) GNNs in RGN. For both GNNs, we used h=2. \\n\\n>> \\u201cNecessity of SDL since RGN by itself propagates the signals to neighbors\\u201d\\nThis is a very good point. Yes, RGN is actually able to do message-passing and it means that it is able to incorporate neighboring features. However, as the expressive power and learning efficiency of data-driven models are highly dependent on its architecture and features extracted from itself, it is still very critical to design proper architecture for efficient learning.\\nThe purpose of using SDL is to provide physics-aware representations, which are data-driven spatial derivatives (gradients and Laplacian), instead of using observations directly. In other words, SDL provides a physics-aware inductive bias, which improves the prediction quality.\", \"additional_comments\": \">> \\u201cThe motivation of the second (\\u2206f)i\\u201d\\nThe second (\\u2206f)i in Section 2.2 is introduced to provide a different form of Laplacian in a triangulated mesh. It is well-known that the second Laplacian (geometric discretizations\\nof the Laplacian) has more effective approximation qualities on the mesh. We will update the motivation of the second one in the draft.\\n\\n>> \\u201cLack of explanation of what is train / test set\\u201d\\nFor the experiment in Section 3.1, we first defined a function on 2D space. Then, we sampled 200 points from the 2D coordinates and built a graph based on k-NN algorithm. As gradients are defined on each edge, we split the available edges as train/validation/test sets. We will update the draft to provide this information clearly.\\n\\n>> \\u201cWhy the setup in Section 3.2 is only similar to PDE-Net?\\u201d\\nThere are 2 differences between the settings of Equation (8) in Long et al. 2018 and ours. (1) While the coefficients in Long et al. 
2018 before the second-order spatial differentiation terms in the partial differential equation are constant, they are from a function of the coordinates of nodes in our setting. This setting increases the dynamics of the generated datasets and makes the prediction task more challenging; (2) In Long et al. 2018, the second-order spatial differentiation terms along x-axis and y-axis have different coefficients, while in ours setting they share the same coefficients. We make this modification to fit the equation on graph-structured data, because the Laplacian term on graphs of sampled nodes is defined as a scalar on nodes instead of having a specific direction.\\n\\n>> \\u201cData split and the problem of learning different seasons\\u201d\\nThanks for pointing out the data splitting. Yes, we used the first 8 months for training and left months for validation and test. In fact, learning different seasons doesn't matter much since we focus on \\\"differences\\\" instead of absolute values. In other words, our model is focusing on how the \\\"differences\\\" of physical quantities interact and propagate spatially and temporally, and thus, if the governing physics rules are not significantly changed over the different seasons, it won\\u2019t be affected. \\nStill, some unique characteristics in the specific months in test/validation sets can\\u2019t be seen during the training and they may not be properly handled. While we only have one-year observations, this problem can be handled by yearly splitting, i.e., train a model using a certain year and evaluate it on another year.\"}",
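Since the response above contrasts the graph Laplacian with a geometric discretization on a triangulated mesh, it may help to display the two standard forms side by side. This is textbook material rather than a quote from the paper; a_{ij} are edge weights, A_i is a local area element, and \alpha_{ij}, \beta_{ij} are the two angles opposite edge (i,j) in its adjacent triangles:

\[
(\Delta f)_i \;=\; \sum_{j \in \mathcal{N}(i)} a_{ij}\,(f_j - f_i) \quad \text{(graph Laplacian)},
\qquad
(\Delta f)_i \;=\; \frac{1}{2A_i} \sum_{j \in \mathcal{N}(i)} \big(\cot\alpha_{ij} + \cot\beta_{ij}\big)\,(f_j - f_i) \quad \text{(cotangent mesh Laplacian)}.
\]

The cotangent weights are the usual reason geometric discretizations approximate the continuous Laplacian more accurately on irregular meshes, which is the approximation-quality claim made in the response.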
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThe paper considers the problem of predicting node and/or edge attributes in a physics-governed system, where we have access to measurement data on discrete sensor locations. In contrast to other methods, they explore the usefulness of a Spatial Difference Layer (SDL) that learns a representation of the gradients on the edges and the Laplacian on the nodes, where the parameters to create those operators are learnable weights. The SDL layer is concatenated with the original graph and fed into a recurrent graph network (RGN) to predict node and/or edge properties.\", \"strengths\": [\"This research is very relevant for physics inspired machine learning, since many physical systems are governed by underlying differential equations.\", \"The authors show on synthetic data experiments that SDL is capable of representing the derivatives of a physical system.\", \"Real world use case for temperature prediction is presented with encouraging results.\"], \"weaknesses\": [\"While comparison to RGN represents a rather strong benchmark, it would be interesting to see a comparison to a graph learning model that is specifically designed for weather forecast.\", \"Just adding the Spatial Difference Layer using numerical methods (method RGN(StardardOP) and RGN(MeshOP)) can diminish prediction power for a long time horizon. This result suggests that those gradients might not help the prediction.\", \"The inclusion of an h-hop neighborhood is not quite clear. What value for h was used in the experiments? Is this really necessary, when RGN by itself propagates the signal to neighbors that are further away?\"], \"additional_comments\": [\"2.1 and 2.2: On first reading, it's a bit confusing why there are two different equations for (\\u2206f)i. The motivation of the second equation should be made more explicit.\", \"3.1 lacks explanation of what is train / test set, which is given only in the appendix. This is critical information to understand the use cases of the model and should definitely be in the main body.\", \"In 3.2 formatting of a(i), b(i) and c(i) is confusing. Why is the setup only similar to Long et al? It would be nice to point out the differences and explain why it wasn't exactly the same.\", \"4.1: Was the train/validation/test split done in contiguous segments? I.e. are the 8 months of training data January to August? How is the problem of learning different seasons handled?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose new architectures that simulate difference operators. According to my experience, this research is important since PDEs are the most commonly used form to represent known physical relationships in dynamical systems. The proposed method has novelty in that it uses advanced machine learning architectures to estimate physically meaningful variables. The authors further investigate how to use the proposed spatial difference layer in two tasks.\", \"i_would_suggest_improving_this_research_on_these_aspects\": \"1.\\tThe proposed method should be evaluated on more datasets. The difference information is used in almost any real-world dynamical systems and thus it would be more convincing to show the effectiveness on diverse applications, e.g., object tracking, the variation of energy and mass across space and time.\\n2.\\tIt would be interesting to design a test scenario where governing PDEs are known. Is it possible for your method to uncover the relationship of gradients that govern the system?\\n3.\\tHow sparse the data is? It would be better to have an experiment where data is intentionally hidden to control the sparsity and then evaluate the performance.\\n4.\\tA side question: In real-world systems, the observations are not only governed by PDE, but also unknown noisy factors, missing physics, etc. Can your method handle the noisy data/labels?\\n5.\\tThe proposed method has some complex components. I would encourage releasing the code and the simulated dataset upon acceptance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a method to reduce numerical error when predicting sequences governed by physical dynamics. The idea is that most physics simulators use finite difference differential operators, and the paper adds trainable parameters to them (the derivative and Laplacian operators). The added parameters are computed by a GNN. The paper concats the original graph feature and the output of the differential operators, and inputs them to a recurrent GNN to obtain the final prediction.\\n\\nI think the idea is interesting. Incorporating modulated derivative and Laplacian operators into physical simulators is novel and well justified. It could strengthen the argument is there is more justification of why this particular parameterization is selected. \\n\\nI think the experimental evaluation is somewhat adequate. There are a good selection of baselines including both manually designed iterators and GNNs. In particular, the weather prediction experiment show improved performance over several baselines. I am not familiar with this task or its state-of-the-art performance, but I am convinced that the proposed approach is superior compared to the claimed baselines (RGN, GRU). \\n\\nI have several confusions or concerns about the synthetic experiments\\n\\n1. In the synthetic experiments, is the evaluation task different from the training task? It is unclear from the description how well the learned parameters generalize. Does the method generalize to a. New functions/dynamics b. New graphs with similar properties (e.g. another graph draw from the same distribution) c. New graph with different properties (e.g. more or less sparse)? \\n\\n2. One short-coming of the synthetic experiment is the lack of error bars, or analysis of statistical significance. I think some of the improvements are not large enough to be statistically convincing without additional analysis. It seems necessary to experiment on multiple random problems (e.g. with random meshing, dynamics parameters).\", \"minor_comments\": \"A related idea is \\u201cLearning Neural PDE Solvers with Guarantees\\u201d which modulates the finite difference iterative solver with deep networks, but the objective is solving PDEs with known dynamics instead of prediction with unknown dynamics. Conversely, the method the authors proposed seem also useful for speeding up PDE solvers. \\n\\nI think there is an error in the type definition of f and F in section 2.1. The two claimed types contradict each other.\"}"
]
} |
HygegyrYwH | Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks | [
"Ziwei Ji",
"Matus Telgarsky"
] | Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error.
The required width, however, is always polynomial in at least one of the sample size $n$, the (inverse) target error $1/\epsilon$, and the (inverse) failure probability $1/\delta$.
This work shows that $\widetilde{\Theta}(1/\epsilon)$ iterations of gradient descent with $\widetilde{\Omega}(1/\epsilon^2)$ training examples on two-layer ReLU networks of any width exceeding $\textrm{polylog}(n,1/\epsilon,1/\delta)$ suffice to achieve a test misclassification error of $\epsilon$.
We also prove that stochastic gradient descent can achieve $\epsilon$ test error with polylogarithmic width and $\widetilde{\Theta}(1/\epsilon)$ samples.
The analysis relies upon the separation margin of the limiting kernel, which is guaranteed positive, can distinguish between true labels and random labels, and can give a tight sample-complexity analysis in the infinite-width setting. | [
"neural tangent kernel",
"polylogarithmic width",
"test error",
"gradient descent",
"classification"
] | Accept (Poster) | https://openreview.net/pdf?id=HygegyrYwH | https://openreview.net/forum?id=HygegyrYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9fIYwx6fH-",
"r1l6L345sS",
"BkxvVm4ciS",
"r1lil-NcjS",
"Hkl7GrRFoH",
"Byer9Q0tiB",
"rkldAVatsr",
"rJeP142Kir",
"ByxJi5sYiB",
"r1eSFGzDiS",
"r1gHwzfwsS",
"H1xIGMGwjS",
"rJeEcZzPoB",
"HkeBVZMPjH",
"Bkgha_aM9r",
"HyeBXPOpFr",
"rkgnCl8MKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724643,
1573698645442,
1573696303153,
1573695730849,
1573672203232,
1573671821407,
1573668047938,
1573663711486,
1573661334828,
1573491324837,
1573491293241,
1573491214382,
1573491083680,
1573490988883,
1572161732235,
1571813149443,
1571082452036
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1492/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1492/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies how much overparameterization is required to achieve zero training error via gradient descent in one hidden layer neural nets. In particular the paper studies the effect of margin in data on the required amount of overparameterization. While the paper does not improve in the worse case in the presence of margin the paper shows that sometimes even logarithmic width is sufficient. The reviewers all seem to agree that this is a nice paper but had a few mostly technical concerns. These concerns were sufficiently addressed in the response. Based on my own reading I also find the paper to be interesting, well written with clever proofs. So I recommend acceptance. I would like to make a suggestion that the authors do clarify in the abstract intro that this improvement can not be achieved in the worst case as a shallow reading of the manuscript may cause some confusion (that logarithmic width suffices in general).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Official Blind Review #3\", \"comment\": \"I think the revision have fixed the problem I mentioned and I will increase the score.\"}",
"{\"title\": \"The typo has been corrected in the response\", \"comment\": \"We thank AnonReviewer2 for the correction; we apologize for the oversight. We have corrected our response, and confirm that the reference is correct in our revised submission.\"}",
"{\"title\": \"Thank you for the support!\", \"comment\": \"If there are any additional comments on the revision, we would very much like to hear them and improve our submission correspondingly.\"}",
"{\"title\": \"Is that possible to reconsider your decision for the paper?\", \"comment\": \"I think authors already addressed the comments very carefully.\"}",
"{\"title\": \"Corrected Typos\", \"comment\": \"[Zeyuan & Li, 2019] should be [Allen-Zhu & Li, 2019]\"}",
"{\"title\": \"Agreed with Reviewer 2\", \"comment\": \"I would like to also urge Reviewer 3 to reconsider their opinion. I believe the authors have adequately addressed all of the concerns we have raised, and this result is quite tight compared to existing results.\"}",
"{\"title\": \"Good revisions\", \"comment\": \"I like to increase my score to 10, but it seems the maximum allowed is only 8. I still think this paper should be at least an oral presentation of ICLR. I hope Reviewer 3 can reconsider his opinion.\"}",
"{\"title\": \"Appendix E is Very Helpful\", \"comment\": \"I would like to first reiterate that I would recommend accept for this paper regardless of the discussion on margin.\\n\\nFurthermore, I believe the addition of Appendix E giving more examples is quite helpful to understanding the \\\\gamma parameter. In particular, I really like that section E.1 showed a connection with the classical notion of margin. And E.2 showed that the width is necessarily dependent on \\\\gamma, so it's a natural quantity to study. Since \\\\gamma can be seen as an important quantity on its own, I believe the \\\"polylog dependence\\\" in the title is now better justified.\"}",
"{\"title\": \"Response to AnonReviewer #1\", \"comment\": \"We thank the reviewer for their review and support.\\n\\nFirst we want to note that as AnonReviewer #2 points out, there was a term not handled in the original analysis. We have fixed this bug in the revision. The missing term is small for reasons common to NTK analyses, namely that weights stay close to initialization; other parts of the proof remain unchanged. The required width still has only a polylogarithmic dependency on n, 1/delta, and 1/epsilon, but now it depends on 1/gamma^8.\\n\\nNext we discuss the separability assumption. Let us mention that the Hilbert space \\\\mathcal{H} defined in section 2 is just the RKHS induced by the NTK, and the training feature x_i is mapped to \\\\phi_i which lies in the RKHS. The margin gamma given by Assumption 2.1 and 3.1 is just the separation margin in the RKHS. More discussion has been added to the beginning of Section 2 and 5 of the revision.\\n\\nIn Section 5 we give various estimates of gamma which depends on poly(n). However, these bounds are for arbitrary labels or random labels. In general it might be impossible to prove a polylogarithmic width for arbitrary or random labels, and this might also explain the poly(n) dependency in prior work, since their bounds usually hold for arbitrary labels. In addition, to prove a generalization result, we must assume some relation between features and labels. The separation margin gamma defined in our submission is one natural way to capture the feature-label relation, as discussed below. The details are given in Appendix E. \\n1. One interesting example is the noisy 2-XOR distribution introduced in the following paper: [Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs their induced kernel. arXiv preprint arXiv:1810.05369, 2018.] \\nIn the noisy 2-XOR distribution, the label is the XOR of the first two bits. Although there could be 2^d data examples, in Proposition E.1 we show that gamma is Omega(1/d). Interestingly, we further prove the following results for the noisy 2-XOR distribution.\\n(a) In Proposition E.2, we prove that for any four points with the same last d-2 bits, if the width is less than sqrt{d-2}/4, then with probability 1/2, the finite-width NTK classifier fails on at least one point. This suggests that in the NTK regime, the width has to depend on at least 1/sqrt{gamma}. On the other hand, there is still a large gap between this lower bound and the 1/gamma^8 upper bound, and it is an interesting open question to close this gap.\\n(b) In this paper, for a constant test accuracy, we prove a \\\\tilde{O}(1/gamma^2) sample complexity upper bound for SGD and overparameterized networks. Such a sample complexity upper bound can also be shown for the infinite-width NTK, as discussed in Appendix E.2.2. As mentioned above, the margin gamma is Omega(1/d), which gives a sample complexity upper bound of \\\\tilde{O}(d^2). On the other hand, [Wei, Lee, Liu, Ma] prove a sample complexity lower bound of d^2 for the infinite-width NTK. In other words, these bounds are tight up to logarithmic factors. We think it suggests that the notion of separation margin could be very useful, since the almost tight upper bound analysis highly relies on it.\\n2. A simpler example is the linearly separable case. In Appendix E.1, we show that if the data can be linearly separated with margin gamma, then Assumption 2.1 and 3.1 hold with margin at least gamma/2. 
In other words, when the data is linearly separable, the notion of margin in Assumption 2.1 and 3.1 could make a good use of this structure. \\n\\nWe thank the reviewer for their comments. Most of them are handled in the revision, and we will keep working on them.\"}",
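Since all three reviews ask what gamma means, it may help to display the separability assumption being discussed. The following is our paraphrase of Assumptions 2.1/3.1 from the description above (the paper's exact normalization may differ): with \phi_i the NTK feature embedding of x_i in the RKHS \mathcal{H},

\[
\exists\, \bar{v} \in \mathcal{H} \ \text{with} \ \|\bar{v}\|_{\mathcal{H}} \le 1
\quad \text{such that} \quad
y_i \,\langle \bar{v}, \phi_i \rangle \;\ge\; \gamma \quad \text{for all } i \in [n].
\]

In the linearly separable special case of Appendix E.1, \bar{v} can be built from a linear separator, which is why the RKHS margin is at least half the linear margin.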
"{\"title\": \"Response to AnonReviewer #3\", \"comment\": \"We thank the reviewer for their review.\\n\\nWe apologize that the original proof of Theorem 2.2 was not clear enough. In fact, as AnonReviewer #2 points out, there was a term not handled in the original analysis. We have fixed this bug in the revision; however, the proof idea is still similar. For example, due to the exponential tail of the logistic loss, to show that \\\\hat{R}^{(0)}(\\\\bar{W}) is small, we only need to show that < \\\\nabla f_i(W_0), \\\\bar{W} > is large. Note that \\\\bar{W} is defined as W_0+lambda\\\\bar{U}, and < \\\\nabla f_i(W_0), W_0 > is controlled by Lemma 2.5, and < \\\\nabla f_i(W_0), \\\\bar{U} > is concentrated around gamma by Lemma 2.3. Therefore with the chosen value of lambda, it holds that < \\\\nabla f_i(W_0), \\\\bar{W} > is large, and thus \\\\hat{R}^{(0)}(\\\\bar{W}) is small. To further handle \\\\hat{R}^{(t)}(\\\\bar{W}), we control < \\\\nabla f_i(W_t) - \\\\nabla f_i(W_0) , \\\\bar{W} > using a standard NTK argument. More details of the proof of Theorem 2.2 are given at the end of Section 2 and in Appendix A. The required width still has only a polylogarithmic dependency on n, 1/delta, and 1/epsilon, but now it depends on 1/gamma^8.\\n\\nRegarding gamma, in Section 5 we give various estimates of gamma which depends on poly(n). However, these bounds are for arbitrary labels or random labels. In general it might be impossible to prove a polylogarithmic width for arbitrary or random labels, and this might also explain the poly(n) dependency in prior work, since their bounds usually hold for arbitrary labels. In addition, to prove a generalization result, we must assume some relation between features and labels. The separation margin gamma defined in our submission is one natural way to capture the feature-label relation, as discussed below. The details are given in Appendix E. \\n1. One interesting example is the noisy 2-XOR distribution introduced in the following paper: [Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs their induced kernel. arXiv preprint arXiv:1810.05369, 2018.] \\nIn the noisy 2-XOR distribution, the label is the XOR of the first two bits. Although there could be 2^d data examples, in Proposition E.1 we show that gamma is Omega(1/d). Interestingly, we further prove the following results for the noisy 2-XOR distribution.\\n(a) In Proposition E.2, we prove that for any four points with the same last d-2 bits, if the width is less than sqrt{d-2}/4, then with probability 1/2, the finite-width NTK classifier fails on at least one point. This suggests that in the NTK regime, the width has to depend on at least 1/sqrt{gamma}. On the other hand, there is still a large gap between this lower bound and the 1/gamma^8 upper bound, and it is an interesting open question to close this gap.\\n(b) In this paper, for a constant test accuracy, we prove a \\\\tilde{O}(1/gamma^2) sample complexity upper bound for SGD and overparameterized networks. Such a sample complexity upper bound can also be shown for the infinite-width NTK, as discussed in Appendix E.2.2. As mentioned above, the margin gamma is Omega(1/d), which gives a sample complexity upper bound of \\\\tilde{O}(d^2). On the other hand, [Wei, Lee, Liu, Ma] prove a sample complexity lower bound of d^2 for the infinite-width NTK. In other words, these bounds are tight up to logarithmic factors. 
We think it suggests that the notion of separation margin could be very useful, since the almost tight upper bound analysis highly relies on it.\\n2. A simpler example is the linearly separable case. In Appendix E.1, we show that if the data can be linearly separated with margin gamma, then Assumption 2.1 and 3.1 hold with margin at least gamma/2. In other words, when the data is linearly separable, the notion of margin in Assumption 2.1 and 3.1 could make a good use of this structure. \\n\\nWe thank the reviewer for pointing out the relation to prior work. Here are the responses.\\n1. Assumption 2.1 and 3.1 are indeed similar to the assumption made in [Cao & Gu, 2019a]. The difference has been discussed in the related work Section of our original submission, and is now mentioned again below Assumption 3.1: [Cao & Gu, 2019a] assume separability in the RKHS induced by the NTK of the second layer, while we assume separability in the RKHS induced by the NTK of the first layer.\\n2. The quantity \\\\hat{Q} is indeed analyzed in [Cao & Gu, 2019a], and also [Nitanda & Suzuki]. We discuss it at the beginning of Section 2.2 in the revision. \\n3. Lemma 2.6 is indeed similar to Fact D.4 and (seemingly) Claim D.5 of [Allen-Zhu & Li, 2019], where the squared loss is considered. We discuss it below Lemma 2.6 in the revision. On the other hand, we still want to highlight Lemma 2.6 since it plays an important role in proving a polylog(1/epsilon) width (see the discussion at the end of Section 2, bullet 2).\\n\\nWe would very much like to discuss any further questions!\"}",
"{\"title\": \"Response to AnonReviewer #2 Cont.\", \"comment\": \"Here are responses to the reviewer's comments:\\n1. Discussed above.\\n2. We thank the reviewer for pointing out the references and have included them.\\n3. We think it is possible to show a decreasing risk using the smoothness of the logistic loss and the NTK analysis, but have not finished it. We will keep working on it.\\n4. We do not know whether this lower bound is tight or not.\\n5. It seems unlikely to prove an o(log n) width with the current proof techniques. In fact, a polylog(n) width is already required to ensure the finite-width NTK has a positive margin (cf. Lemma 2.3).\\n6. One key step in our analysis is to show the representation result that \\\\hat{R}^{(t)}(\\\\bar{W}) is small. To show such a result with a polylog(n, 1/delta, 1/epsilon) width, we need some assumption on the relation between features and labels. For classification, the margin naturally captures the feature-label relation, and it works well with the logistic loss. If we want to prove a similar result for the squared loss, then we should need a similar assumption. In addition, the logistic loss and its derivative (the sigmoid function) allows a clean generalization analysis.\"}",
"{\"title\": \"Response to AnonReviewer #2\", \"comment\": \"We thank the reviewer for their review and support.\\n\\nWe are particularly grateful to the reviewer for catching the missing y_i <\\\\nabla f_i (W_t) - \\\\nabla f_i (W_0) , W_0> term in our original analysis. We have fixed this bug in the revision, using the quantity ||w_r(t)-w_r(0)||_2 suggested by the reviewer. In the suggested fix, there is a 1/m factor which should actually be 1/sqrt{m}; in our current proof, the additional 1/sqrt{m} factor we need comes from ||w_r(t)-w_r(0)||_2. Other parts of the proof are the same as before. The required width still has only a polylogarithmic dependency on n, 1/delta, and 1/epsilon, but now it depends on 1/gamma^8.\\n\\nWe have also included in Appendix E some concrete examples where the gamma in Assumption 2.1 and 3.1 is large. \\n1. One interesting example is the noisy 2-XOR distribution introduced in the following paper: [Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs their induced kernel. arXiv preprint arXiv:1810.05369, 2018.] \\nIn the noisy 2-XOR distribution, the label is the XOR of the first two bits. Although there could be 2^d data examples, in Proposition E.1 we show that gamma is Omega(1/d). Interestingly, we further prove the following results for the noisy 2-XOR distribution.\\n(a) In Proposition E.2, we prove that for any four points with the same last d-2 bits, if the width is less than sqrt{d-2}/4, then with probability 1/2, the finite-width NTK classifier fails on at least one point. This suggests that in the NTK regime, the width has to depend on at least 1/sqrt{gamma}. On the other hand, there is still a large gap between this lower bound and the 1/gamma^8 upper bound, and it is an interesting open question to close this gap.\\n(b) In this paper, for a constant test accuracy, we prove a \\\\tilde{O}(1/gamma^2) sample complexity upper bound for SGD and overparameterized networks. Such a sample complexity upper bound can also be shown for the infinite-width NTK, as discussed in Appendix E.2.2. As mentioned above, the margin gamma is Omega(1/d), which gives a sample complexity upper bound of \\\\tilde{O}(d^2). On the other hand, [Wei, Lee, Liu, Ma] prove a sample complexity lower bound of d^2 for the infinite-width NTK. In other words, these bounds are tight up to logarithmic factors. We think it suggests that the notion of separation margin could be very useful, since the almost tight upper bound analysis highly relies on it.\\n2. A simpler example is the linearly separable case. In Appendix E.1, we show that if the data can be linearly separated with margin gamma, then Assumption 2.1 and 3.1 hold with margin at least gamma/2. In other words, when the data is linearly separable, the notion of margin in Assumption 2.1 and 3.1 could make a good use of this structure.\", \"here_are_responses_to_the_disadvantages_pointed_out_by_the_reviewer\": \"1. Discussed above.\\n2. We can show that ||W_t-W_0||_F=O(lambda), where lambda is defined in Theorem 2.2 and depends on ln(1/epsilon). On the other hand, to get an error of epsilon, we need roughly 1/epsilon steps. Therefore we said that ||W_t-W_0||_F=O(ln t), which is actually an incomplete argument since it only considers epsilon. What we want to highlight is that in our setting, to make the width depend only on ln(1/epsilon), it is important to have an upper bound on the GD movement which also only depends on ln(1/epsilon). 
More discussion is given at the end of Section 2 (bullet 2).\\n3. The typos have been fixed.\\n4. The current proof uses both ||W_t-W_0||_F and ||w_r(t)-w_r(0)||_2. The quantity ||W_t-W_0||_F is important in Lemma 2.6, and we are not sure if we can avoid using it completely.\\n5. As discussed at the beginning of Section 2 of the revision, Assumption 2.1 and 3.1 are basically separability assumptions in the RKHS induced by the NTK. The training feature is mapped to \\\\phi_i which lies in the RKHS, while \\\\bar{v} given by Assumption 2.1 and 3.1 is a separator in the RKHS. Our analysis then basically deals with finite-width samples of these points in the RKHS, e.g., \\\\nabla f_i(W_0) consists of samples of \\\\phi_i, while \\\\bar{W} is constructed from samples of \\\\bar{v}.\"}",
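To make the fix under discussion concrete, the key inner product can be decomposed exactly using \bar{W} = W_0 + \lambda \bar{U}; this identity is our write-up of the pieces named in the responses above:

\[
\langle \nabla f_i(W_t), \bar{W} \rangle
= \langle \nabla f_i(W_0), W_0 \rangle
+ \lambda\,\langle \nabla f_i(W_0), \bar{U} \rangle
+ \langle \nabla f_i(W_t) - \nabla f_i(W_0), W_0 \rangle
+ \lambda\,\langle \nabla f_i(W_t) - \nabla f_i(W_0), \bar{U} \rangle.
\]

The first term is controlled by Lemma 2.5, the second concentrates around \lambda\gamma by Lemma 2.3, the third is the previously missing term, and the fourth is handled by a standard NTK argument; the last two are small because the weights stay close to initialization, which is where the bound on \|w_r(t) - w_r(0)\|_2, with its extra 1/\sqrt{m} factor, enters.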
"{\"title\": \"List of changes made in the revision\", \"comment\": \"1. As pointed out by AnonReviewer #2, there was a missing term in the original proof of Theorem 2.2; this oversight has now been fixed. In more detail, the missing term is small for reasons common to NTK analyses, namely that weights stay close to initialization; the new proof is sketched at the end of Section 2, and appears in full in Appendix A. This proof adjustment caused the dependency on 1/gamma in the network width to worsen to at most 1/gamma^8 in Theorems 2.2, 3.2, 4.1 and Corollary 3.3; the other polylogarithmic terms are unchanged.\\n2. All reviewers requested further discussion of the margin parameter gamma. With the aim of demonstrating that gamma is both a natural quantity, and that it is large in interesting cases (notably, not breaking the \\\"polylogarithmic\\\" promise), the revision contains a new Appendix E providing a variety of examples. A first example is the linear case, where gamma is the usual linear separation margin. A more interesting example is the \\\"noisy 2-XOR\\\" problem, which has the xor on 2 bits in d dimensions, and for which we can show gamma is indeed a natural parameter: (a) a sample complexity lower bound of d^2 (proved in prior work), (b) a margin lower bound 1/d, (c) a sample complexity upper bound in the SGD case of d^2, thus showing the margin-based analysis is tight in this case, (d) a width lower bound 1/sqrt{gamma} below which the problem becomes nonseparable by the NTK with constant probability.\\n3. In the abstract and Section 1, some typos have been fixed, and references have been added.\\n4. At the beginning of Section 2, explanations of the separability assumption have been added. At the beginning of Section 2.2, more discussion on the quantity \\\\hat{Q} has been added. The proof of Lemma 2.6 has been moved to Appendix A, while some discussion has been added below Lemma 2.6.\\n5. In Section 3, some discussion on separability assumptions made in prior work has been added.\\n6. In Section 5, more explanations have been added. The upper bound on the margin for random labels is now formally stated in Proposition 5.2. Some typos have been fixed.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary :\\n1. Classification task with shallow Relu and logistic loss.\\n2. Showing fast global convergence rate with polylog width for both training and generalization error under appropriate assumptions\\n\\nOverall, this paper is very exciting and surprising. At some point, I was trying to prove such results, but couldn\\u2019t get it. This paper should be accepted as an oral presentation in ICLR. If the authors can address some of the questions in the comments, I will be happy to increase the score.\", \"advantages\": \"1. Better results for classification task with shallow Relu in terms of global convergence rate and network width\\n2. Showing the essence of the power of over-parameterization that the weights don\\u2019t change much\\n3. Clear logic and proof\\n4. Also discuss the stochastic GD/generalization error? (I didn\\u2019t read that part)\", \"disadvantages\": \"1.Bug in proving Theorem 2.2, a larger lambda is needed (but won\\u2019t influence the polylog result). See the below for a fix.\\n2.In the proving sketch, why ||W_t-W_0||_F=O(ln t)? In the proof, it looks like ||W_t-W_0||^2_F=O(t), as in the third equation on page 12. More explanation about proof sketch is needed.\\n3.Typo\\na.The \\\\odot operation in Equation 5.1 is not defined. According to later computation, this operation seems to be the hadamard product between two vectors. But this notation is not widely used and some brief introduction will be benefinitial.\\nb.On page 1, \\u201c\\u2026 and standard Rademacher tools but exploiting how little the\\u2026.\\u201d, \\u201cbut\\u201d seems to be a typo.\\nc.On page 2, \\u201calso suffices via a smoothness-based generalization bound\\u201d, \\u201csuffices\\u201d should be \\u201csuffice\\u201d.\\nd.Last formula in page 11 missing a \\u201c>0\\u201d in the indicator function.\\n4.(optional) why using ||W_t-W_0||_F instead of ||w_r(t)-w_r(0)||_2, which used in previous work for square loss, for the analysis? Is there any benefit or restriction here? \\n5.(optional) Give more insights about intermediate quantities such as \\\\hat R^(t), \\\\bar{W}, etc.\", \"comments\": \"1. More arguments for polylog width in last section needed. E.g., give a specific case where the gamma in Assumption 2.1 is constant, or comparable to the smallest eigenvalue of NTK; otherwise in the worst case, gamma can be as bad as the smallest eigenvalue of NTK over n, which ruins the polylog results. To be more specific, we can always set q to be the uniform distribution over [n], then \\\\|q\\\\odot y\\\\|_2 is indeed 1/\\\\sqrt{n}, hence \\\\gamma_1\\\\leq \\\\sqrt{\\\\lambda_{max}(K_1)/n}. If K_1 has constant spectral norm(which is the case if all the data points are orthogonal to each other), then \\\\gamma_1 will depend on 1/n.\\n2. For the over-parameterization theory, more references are needed. https://arxiv.org/abs/1902.01028 [Allen-Zhu, Li] is about generalization bound for the over-parametrized networks, https://arxiv.org/abs/1810.12065 [Allen-Zhu, Li, Song] and https://arxiv.org/abs/1905.10337 [Allen-Zhu, Li] are about the over-parameterization bound for more than two-layer networks. 
https://arxiv.org/abs/1906.03593 [Song, Yang] obtains a better width bound for two-layer neural networks under the framework of https://arxiv.org/abs/1810.02054 [Du, Zhai, Poczos, Singh].\\n3. Theorem 2.2 shows that the average loss converges. Does this imply after training for T steps, we obtain good weights with small logistic loss? Can you get results showing the loss is decaying, like Theorem 4.1 in https://arxiv.org/abs/1810.02054 [Du, Zhai, Poczos, Singh]?\\n4. On page 8, the lower bound of \\\\lambda_0 is given as \\\\delta/n^2. Is this bound tight? Is this lower bound achievable? \\n5. Under what assumptions can we prove o(log n), say poly(log log n) width?\\n6. What is the role of logistic loss in the proof? In general, if we replace logistic loss with square loss, will this make it harder to train neural networks?\\n\\n\\nThe original analysis might has some flaw/bug:\\n\\nIn the proof of Theorem 2.2, top of page 12, to show \\\\hat R^{(t)}(\\\\bar W)<= \\\\epsilon/4, the term y_i <\\\\nabla f_i (W_t) - \\\\nabla f_i (W_0) , W_0> seems to be forgotten to consider.\\n\\nThis could be a fix.\\n\\nNote that above term equals y_i/m \\\\sum_{r=1}^m ( 1_{[< w_{r, t}, x_i> >= 0]} - 1_{[< w_{r, 0}, x_i> >= 0]} ) < w_{r, 0}, x_i >. We can use concentration to bound <w_{r, 0}, x_i>, such that with high probability, it will be no larger than polylog(n). Correspondingly, we know this term won\\u2019t be too small. Adding this extra polylog factor into lambda, we can fix the proof.\"}",
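The suggested fix above is written in plain ASCII; for readability, here is the same overlooked term transcribed into LaTeX. This is a transcription of the reviewer's expression, not a verification of the proof (note that the authors' response argues the 1/m normalization should actually be 1/sqrt{m}):

```latex
% The overlooked term and the identity used in the suggested fix:
y_i \left\langle \nabla f_i(W_t) - \nabla f_i(W_0),\; W_0 \right\rangle
 \;=\; \frac{y_i}{m} \sum_{r=1}^{m}
 \Big( \mathbf{1}\big[\langle w_{r,t}, x_i\rangle \ge 0\big]
     - \mathbf{1}\big[\langle w_{r,0}, x_i\rangle \ge 0\big] \Big)
 \,\langle w_{r,0},\, x_i \rangle ,
% where, by concentration, |\langle w_{r,0}, x_i\rangle| = O(\mathrm{polylog}(n))
% with high probability, so the extra polylog factor can be absorbed into \lambda.
```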
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the author shows that for a two-layer ReLU network, it only requires a network width that is poly-logarithmic in the sample size n to get good optimization and generalization error bounds, which is better than prior results.\\n\\nOverall, the paper is well written and easy to follow. However I still have some questions about this paper.\\n\\nOne of my major concerns is that there might be an important error in the proof of the main theorem. Specifically, in the proof of Theorem 2.2 (page 12), it says that due to lemma 2.5, $\\\\hat R^{(t)}(\\\\bar W)\\\\leq \\\\varepsilon$. However, Lemma 2.5 only shows that $|f(x_i,W_0,a)|$ is small, and the reason $\\\\hat R^{(t)}(\\\\bar W)$ can also be small is not explained in this paper at all. Based on Lemma 2.5, I can roughly get that $\\\\hat R^{(0)}(\\\\bar W) $ can be small, but the reason why $\\\\hat R^{(0)}(\\\\bar W) $ is small is unclear to me, especially when the network width m is only polylogarithmic in n and \\\\varepsilon. Without a clear explanation on this issue, the theoretical results in this paper might be flawed, and the polylog claim might not be correct.\\n\\nMoreover, this paper does not provide sufficient comparison with existing work. For example, Assumption 2.1 looks very similar to the assumption made in Cao & Gu (2019a). The definition of $\\\\hat Q(W)$ has also been introduced in Cao & Gu (2019a). However these similarities are not mentioned in the paper at all. Moreover, the result of Lemma 2.6, which is also one of the selling points of this paper, is actually very similar to Fact D.4 and Claim D.6 in the following paper:\\nAllen-Zhu, Zeyuan, and Yuanzhi Li. \\\"What Can ResNet Learn Efficiently, Going Beyond Kernels?.\\\" arXiv preprint arXiv:1905.10337 (2019).\\n\\nFinally, the authors\\u2019 claim in the title that the width of the network is poly-logarithmic with the sample size n might be misleading. In fact, in Section 5, it has been discussed that in certain settings about the data distribution, $\\\\frac 1\\\\gamma$ is polynomial of n. However, the width is polynomial with $\\\\frac 1\\\\gamma$, which means the width is poly of n in these settings.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary and Decision\\n\\nThe authors of this paper studied the optimization and generalization properties of shallow ReLU networks. In particular, the authors were able to show a width dependence that is polylogarithmic on the number of samples, probability or failure, error tolerance, and a margin parameter. This work is unique in that the authors showed how to bound many key quantities in terms of the margin parameter, and drew a connection with the neural tangent kernel's maximum margin. Furthermore, the overall reading experience was very smooth, although I do have some minor comments later. \\n\\nThe main concern from me is on the implicit dependence of the margin parameter \\\\gamma, most notably this can lead to the width dependence to be polynomial in terms of the number of samples and the minimum separation distance. While this concern warrants a careful discussion (below), I believe the paper still offers a nice analysis of shallow networks. \\n\\nOverall, I would recommend accept for this paper. \\n\\n\\nBackground \\n\\nThere has been a large number of works studying very wide networks, showing both optimization and generalization results. While there has been great progress, most of these existing results require the width of both deep (and shallow) networks to be very large. For example, even by being polynomial in the number of samples, the networks are already unrealizable in practice. Therefore, guarantees with much better dependence is highly desirable. \\n\\n\\nDiscussion of Contributions\\n\\nAs the title may suggest, it is a bit surprising that we can show that polylog width is sufficient. Intuitively, we can imagine that the classification margin can grow exponentially more complex as the number of samples increase. I believe the nice result can be attributed to a careful analysis of key quantities such as \\\\| W_t - W_0 \\\\|_F in terms of the margin parameter from Assumption 2.1. \\n\\nSome nice examples of the analysis in this paper include the introduction of the weight matrix \\\\overline{U} and \\\\overline{W} as an intermediate between W_0 and W_t, and observing that the activations of the ReLU \\\\xi_{i,t} do not change very much during training. The tricks together led to a very tight bound on the change in weights \\\\| W_t - W_0 \\\\|_F in terms of the margin parameter \\\\gamma. As the authors mentioned on page 6, this tight control was used to bound the Rademacher complexity later. \\n\\nThe connection drawn between the margin assumption and neural tangent kernel (Proposition 5.1) is also interesting on its own. The authors intended this result to serve as a justification of the margin assumption (2.1). \\n\\n\\nDiscussion of the Margin Parameter\\n\\nLet me start by saying I'm not completely certain on how to interpret this margin parameter \\\\gamma in Assumption 2.1. Perhaps I'm missing some obvious ideas here, but I would still like the authors respond with some more details. At the same time, I don't believe this is a sufficient criticism to reject this paper, as I believe the analysis in terms of \\\\gamma is still valuable. \\n\\nOn one hand, if we were to assume the margin condition holds for all possible data points (i.e. 
Assumption 3.1), then there is no concern about polynomial dependence on the number of samples, and this is certainly a reasonable assumption in some applications. \\n\\nOn the other hand, many of the previous analysis on wide networks were in terms of a minimum separation distance, i.e. assume there exists a \\\\delta > 0 such that for all i \\\\neq j, we have \\n\\t\\\\| x_i - x_j \\\\| \\\\geq \\\\delta . \\nThe authors have provided a discussion in section 5, including both a worst case bound of \\n\\t\\\\gamma \\\\geq \\\\delta / (100 n^2),\\nby Oymak and Soltanokotabi (2019) and an example where the margin is O( n^{-1/2} ) with high probability. \\n\\nUsing either bounds on \\\\gamma, we will have a width with polynomial dependence in terms of the number of samples and minimum separation distance. Therefore if we were to compare against previous works in the same benchmark, i.e. using a minimum separation assumption instead, then arguably this work did not achieve a width that is only polylog in terms of the number of samples. \\n\\nThat being said, I don't believe the authors were intentionally trying to hide sample dependence inside an assumption. The paper is presented in a very transparent way, and the authors were being honest in chapter 5 about the worst case dependence on the number of samples. \\n\\nTo summarize, it is unclear to me whether the paper truly achieved a width dependence that is polylog in terms of the number of samples, but the analysis in terms of \\\\gamma remains a valuable contribution. I welcome the authors and the other reviewers to provide additional comments on whether the title and claims of this paper is appropriate. \\n\\n\\nMinor Comments \\n\\nFor the sake of improving readability, I also have some minor comments that do not contribute towards the decision. \\n\\n - On page 5, the first observation bullet point in section 2.2, it is written here that by triangle inequality \\n \\\\| \\\\nabla \\\\widehat{R}(W) \\\\|_F \\\\leq \\\\widehat{Q}(W) , \\n I thought you needed an absolute value on the right hand side, perhaps you should mention that \\\\ell is strictly decreasing. \\n\\n - On page 6, in the statement of Corollary 3.3, you are missing \\\\eta \\\\leq 1, and perhaps the \\\\tilde on \\\\Omega and O should be defined. \\n - On the same note, while it is obvious that Theorem 3.2 implies this corollary, it is still worth writing up a proof to compute the constants. \\n\\n - On page 7, the notation \\\\Delta_n and \\\\odot are not defined, I had to infer definition from the proof. \\n\\n - On page 12, just below the second equation, it wasn't immediately clear how \\\\widehat{R}( \\\\overline{W} ) \\\\leq \\\\epsilon / 4. I believe it's worth expanding the definitions a bit, and explicitly plug in Lemma 2.5. \\n\\n - On page 12, in the second last equation, I'm actually not sure where this inequality comes from \\n \\\\| W_t - \\\\overline{W} \\\\|_F \\\\geq \\\\langle W_t - \\\\overline {W} , \\\\overline{U} \\\\rangle\\n Perhaps it's obvious, but I currently don't see it.\"}"
]
} |
Hke1gySFvB | Enhancing Language Emergence through Empathy | [
"Marie Ossenkopf"
] | The emergence of language in multi-agent settings is a promising research direction for grounding natural language in simulated agents. If an AI could understand the meaning of language through using it, it could also transfer that understanding flexibly to other situations. This is seen as an important step towards achieving general AI. The scope of emergent communication is, however, still limited so far. It is necessary to enhance the learning possibilities for skills associated with communication to increase the complexity that can emerge. We take an example from human language acquisition and the importance of the empathic connection in this process. We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning. We extend existing approaches to referential games with an auxiliary task for the speaker to predict the listener's mind change, improving the learning time. Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup. | [
"multi-agent deep reinforcement learning",
"emergent communication",
"auxiliary tasks"
] | Reject | https://openreview.net/pdf?id=Hke1gySFvB | https://openreview.net/forum?id=Hke1gySFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AYB0xheCUL",
"ryeJqt0UiB",
"Hkx7J160Yr",
"S1e1TaJRYS",
"ryeHqlTZFB"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724613,
1573476743435,
1571897051273,
1571843511387,
1571045516611
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1490/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1490/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1490/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1490/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper introduces the idea of \\\"empathy\\\" to improve learning in communication emergence. The reviewers all agree that the idea is interesting and well described. However, this paper clearly falls short on delivering the detailed and sufficient experiments and results to demonstrate whether and how the idea works.\\n\\nI thank the authors for submitting this research to ICLR and encourage following up on the reviewers' comments and suggestions for future submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"I am very grateful for the feedback. It helps me better understand the requirements for a full ICLR paper.\\nI'm happy, that the reviewers liked the overall idea, I will work on improving the experimental base.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper aims to take insight from human language acquisition and the importance of empathic connection to learn better models for emergent language. The authors propose an approach to introduce the notion of empathy to multi-agent deep RL by extending existing approaches on referential games with an auxiliary task for the speaker to predict the listener\\u2019s empathy/mind. Experiments show that this gives some improvement with faster convergence.\", \"strengths\": [\"The concept is interesting and grounded in human communication.\"], \"weaknesses\": [\"I like the motivation of predicting empathy, but the paper vastly oversells this part: I don't see how predicting the listener's hidden state is the same as modeling empathy. Empathy is a complex human state/emotion that should not be reduced to this.\", \"This paper is very preliminary. There are multiple typos in the paper. Figures are not professionally created. There are multiple training details not included such as how the agents are modeled and trained. The authors seem to have run out of time given the short length of the paper.\", \"The experimental results are not convincing at all. The improvement is too small and it would help to run the experiment multiple times to see the improvement with respect to variance in the model. There should also be experiments testing the effect of \\\\alpha on performance. There is no analysis of how well the model is able to predict empathy, as well as ablation studies testing for various design decisions. The authors should also add an analysis of the learned communication protocols and whether they are different in a meaningful way.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper starts with a conceptual claim that incorporating a notion of \\u201cempathy\\u201d in language emergency would help agents learn faster. The paper then proposes a learning mechanism for implementing this, and looks at its empirical effect for the case of a Speaker-Listener game.\\n\\nThe concept at the core of the paper is thought-provoking, somewhat grounded in human communication, and it\\u2019s interesting to see how this can be translated into a learning mechanism for the multi-agent setting. The specific implementation proposed seems reasonable at a high level, however there are many technical details missing which really hamper the paper\\u2019s message & potential scientific impact. The results are limited to a single game, with just a pairwise comparison (with and without \\u201cempathy\\u201d), and provide a narrow view into the effectiveness of the proposed technique.\\n\\nMy main problem with the paper is the clarity & organization problems. Usually I tend to be lenient on this, thinking poor writing is much easier to fix than poor science. But in this case the problems are large enough that the paper is just too far from the standard for ICLR publication. It also fits in 5 pages, so the authors had lots of space to write a much better paper. I encourage them to do this for a future submission, in addition to more extensive results, because I think the ideas are worthwhile.\", \"specific_comments\": \"Design of the empathy mechanism. Can you motivate why it\\u2019s reasonable to \\u201cachieve a high relation between the hidden states of both agents\\u201d? Is this necessary / sufficient for empathy? What are alternate framings of this? What are properties and pros/cons of this framing?\\n\\nSec.4 needs a lot more detail! Sections Agent setup, Learning and Empathy Extension in Sec.5 should be moved to Sec.4, since they describe the method, rather than the experiments.\\n\\nSec.5 needs better clarity.\\no\\tWhat are m^l_t and M^{<l}_t in Eqn 5? Define how h_S and h_L are parameterized, and how each is trained.\\no\\tFig.2 gives a high-level view of the approach, but lacks important details. Do you apply a loss at both the Speaker\\u2019s Decoder output, and the Listener\\u2019s Decoder output? Or just the latter? What is the loss specifically? I assume combination of Eqn 5 and Eqn 6, but not sure.\\no\\tIf you train just on the loss of the Listener\\u2019s Decoder, does this mean this is backpropagated all the way to train the Speaker? How would this be done in a real system? It\\u2019s a very strong assumption to say that the Listener will share gradients with the Speaker. It seems more realistic to assume they will each observe a loss and train independently.\\n\\nResults are very brief.\\no\\tHow robust are the results to the specification of the \\\\alpha (the loss weight from Eqn 6)? How much data goes to finding a good \\\\alpha?\\no\\tWhat is the difference between the left and the right plot? 
Is one for the Speaker and the other for the Listener?\\no\\tHow do you measure \\u201clearning speed\\u201d, which is the main metric discussed in the text of Sec.6?\\no\\tHow do the results change by number of concepts in the game?\\no\\tDo you do any pre-training of the encoder/decoder networks?\\no\\tCan you show confidence intervals on each curve?\\no\\tCan you show test performance?\\no\\tAre there other related games to consider?\\n\\nMany references missing throughout to support statements, e.g.\\no\\t\\u201cNatural language is not as rule-based as\\u2026\\u201d \\no\\t\\u201cThese considerations led to the research field of emergent communication\\u2026\\u201d\\no\\tSec.2: Earlier refs to RL in general (e.g. work Sutton in the 1980\\u2019s). Earlier refs to RL with neural networks (e.g. work of G. Tesauro; work of M. Riedmiller).\\no\\tReferential game in Fig.1 caption.\\n\\nSome minor language issues, e.g.\\no\\t\\u201cThe field was then alleviated\\u201d -> Do you mean elevated?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper takes the reference-game setup of Lazaridou et al. (2018), as a means of enabling emergent communication, and adds an auxiliary task to demonstrate that this helps with language emergence. The auxiliary task is to enable the speaker to predict the hidden state of the listener, after the message has been received. This is (not unreasonably) likened to providing the speaker with some empathy, in that it enables the speaker to try and predict what the effect of the message will be on the listener.\\n\\nThe main result is that the learning exhibits a speed-up, arriving at roughly the same level of overall reward but in fewer training steps.\\n\\nThe idea of adding an \\\"empathy\\\" auxiliary task to the reference-game setup is an interesting one, and the approach is well-motivated and described, including a background section. Unfortunately, however, the contributions of the paper are some way off what would be required for a full ICLR paper. Note that the main experimental results section takes up only 1/3 of a page, and the overall paper has only 5 pages of content. (As far as I know there is no requirement for an ICLR paper to take up the whole 8 pages, but a submission with only 5 pages is quite unusual.) So the overall contribution could be summarised as taking an existing emergent-language setup with the same speaker and listener neural architectures; adding a single MLP to the speaker; and showing two graphs of training reward, varying the number of candidates (2 and 5). I hope that the authors can perhaps see that this submission would be better suited to a dedicated workshop on emergent communication (and even then it would need more experiments and analysis).\"}"
]
} |
B1eCk1StPH | The Generalization-Stability Tradeoff in Neural Network Pruning | [
"Brian R. Bartoldson",
"Ari S. Morcos",
"Adrian Barbu",
"Gordon Erlebacher"
] | Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting. This motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts. To better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's effect on generalization relies more on the instability it generates (defined as the drops in test accuracy immediately following pruning) than on the final size of the pruned model. We demonstrate that even the pruning of unimportant parameters can lead to such instability, and show similarities between pruning and regularizing by injecting noise, suggesting a mechanism for pruning-based generalization improvements that is compatible with the strong generalization recently observed in over-parameterized networks. | [
"pruning",
"generalization",
"stability",
"dynamics",
"regularization"
] | Reject | https://openreview.net/pdf?id=B1eCk1StPH | https://openreview.net/forum?id=B1eCk1StPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"MIj-EvXKF2",
"rJgZEBSjsr",
"rJgATHpwsH",
"HklyaBTDsB",
"BkeUES6vjB",
"rJxtXr6wjS",
"H1euKNpPsH",
"Hyg44Npwor",
"BJxKLBXTYH",
"HkeqIHShKr",
"H1gBba7itH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724584,
1573766441208,
1573537222121,
1573537206843,
1573537069842,
1573537056613,
1573536895668,
1573536811520,
1571792208995,
1571734865988,
1571663101094
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1489/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1489/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1489/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1489/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors introduce a notion of stability to pruning and argue through empirical evaluation that pruning leads to improved generalization when it introduces instability. The reviewers were largely unconvinced, though for very different reasons. The idea that \\\"Bayesian ideas\\\" explain what's going on seems obviously wrong to me. The third reviewer seems to think there's a tautology lurking here and that doesn't seem to be true to me either. It is disappointing that the reviewers did not re-engage with the authors after the authors produced extensive rebuttals. Unfortunately, this is a widespread pattern this year.\\n\\nEven though I'm inclined to ignore aspects of these reviews, I feel that there needs to be a broader empirical study to confirm these findings. In the next iteration of the paper, I believe it may also be important to relate these ideas to [1]. It would be interesting to compare also on the networks studied in [1], which are more diverse.\\n\\n\\n[1] The Lottery Ticket Hypothesis at Scale (Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, and Michael Carbin) https://arxiv.org/abs/1903.01611\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 2, Addendum\", \"comment\": \"Again, we thank the reviewer for their helpful comments.\\n\\nWhile Section 5 of our original submission included mention of the Bayesian mechanism suggested by the reviewer (see our reference to [1]), we have made the potential role of this mechanism more prominent by connecting it to our results in Section 4.3. Illuminating the particular mechanism giving rise to the generalization-stability tradeoff is (as our discussion stated) important, but it's perhaps secondary in importance to simply identifying the tradeoff because, when used in practice for model compression, magnitude pruning almost always targets small weights [2, 3, 4, 5]. While we agree with the reviewer on the potential applicability of the Bayesian mechanism to our results, then, we maintain our original disagreement with the suggestion that usage of Bayesian priors in the literature makes unsurprising our finding that large weight pruning is helpful.\\n\\n[1] Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pages 5\\u201313. ACM, 1993.\\n[2] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135\\u20131143, 2015.\\n[3] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.\\n[4] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017.\\n[5] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018.\"}",
"{\"title\": \"References\", \"comment\": \"[1] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine\\nlearning and the bias-variance trade-off. arXiv preprint arXiv:1812.11118, 2018.\\n[2] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable\\nneural networks. arXiv preprint arXiv:1803.03635, 2018.\\n[3] Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks.\\nCoRR, abs/1902.09574, 2019. URL http://arxiv.org/abs/1902.09574.\\n[4] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections\\nfor efficient neural network. In Advances in neural information processing systems, pages 1135\\u20131143, 2015.\\n[5] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for\\nefficient convnets. arXiv preprint arXiv:1608.08710, 2016.\\n[6] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for\\nmodel compression. arXiv preprint arXiv:1710.01878, 2017.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\n**The Use of E[BN] vs. Naive Approaches**\\n\\nWhile Figure 1 used the proposed E[BN] score, Figures 2, 3, and 4 contain results from experiments with L2-norm pruning and simple-magnitude pruning. For example, the two plots in Figure 4 that are entitled \\u201cL2 Norm Pruning\\u201d contain results from L2-norm pruning experiments. To illustrate the generalization-stability tradeoff across pruning methods more thoroughly, we added to the updated paper a version of Figure 1 that uses L2-norm pruning (see Figure A3). Figure A3 has qualitatively similar results to Figure 1, but the correlation between instability and generalization is somewhat weaker in Figure A3. This may be explained by the fact that L2-norm pruning generated a narrower spectrum of instabilities (perhaps due to L2-norm scoring\\u2019s inability to accurately assess parameter importance, which was shown in Figure 4).\\n\\n**The Scope of Our Experiments, and Adding Higher Sparsity Results**\\n\\nWe would like to clarify that our conclusions are *not* based only on comparisons of large/small weight pruning, but rather that we used dozens of experimental configurations to (across three model classes) compare: 1) pruning of small score, random score, and large score weights (Figure 4; also, Figure 2 finely differentiated scores by pruning specific score deciles); 2) various iterative pruning rates, and therefore pruning schedules (Figure 4); 3) structured pruning vs. unstructured pruning in two different contexts\\u2014-unstructured vs. structured ResNet filter pruning (Figure 3), and unstructured vs. structured Conv4 linear-layer pruning (Figure 2); 4) L2-norm vs. E[BN] weight scoring (Figure 4); and 5) iterative pruning vs. one-shot pruning (Figure 3). While all of our experiments used some form of magnitude pruning, this form of pruning is perhaps the most common [2, 4\\u20136], and has been shown to perform similarly to more sophisticated pruning approaches [3] (as mentioned in our related work section). Thus, our experiments observed the generalization-stability relationship across a large range of interesting contexts and instability values (see Figures A2 and 4, for examples of such values). Importantly though, as we stated in our discussion section, adding instability can have downsides, so the best generalizing pruning approach may involve some mixture of unstable and stable pruning. \\n\\nTo evaluate our results under higher sparsity, we used high- and low-stability pruning algorithms to prune 85% of Conv4, and we found that the generalization-stability tradeoff was present. We've added this result in Appendix A4.\\n\\n**The Meaning of Instability**\\n\\nWe wish to note that instability does not take into account final test accuracy. To make clear that instability is calculated from test accuracies computed immediately before and after pruning, we added to Section 3 a timeline that depicts when instability is calculated relative to pruning iterations. \\n\\nTo add additional clarity, we removed the absolute value component of the instability calculation. Thus, it's no longer possible for instability to be positive when pruning immediately helps the model. Importantly, the updated version of Figure A2 shows that instability is rarely negative (i.e., it\\u2019s rare that pruning immediately helps the model), and never negative and large. 
Perhaps unsurprisingly, then, removing the absolute value did not notably affect our conclusions; in fact, it improved the statistical evidence for the tradeoff (see Figure 1).\\n\\n**The Role of the Generalization Gap and Bounds**\\n\\nWhile generalization gap bounds are loose and imprecise (they are just bounds, as the reviewer points out), they have nonetheless been used to guide model selection (see the discussion of this in our related work section). We updated the introduction to make clearer our point that some recent empirical and theoretical results (e.g., generalization gap bounds) now similarly guide us to select models that are overparameterized. This new guidance seems at odds with more traditional ideas (such as the idea that pruning improves generalization via removing parameters). For a discussion of the conflict between traditional and recent recommendations on choosing a model size (or parameter count), see [1]: \\u201ca best practice in deep learning for choosing neural network architectures [is] that the network should be large enough to permit effortless zero loss training\\u2026 in direct challenge to the bias-variance trade-off philosophy...\\u201d\\n\\nWe also wish to note that, except in Figure 2, we exclusively look at generalization to a test set (defined as the test accuracy), not the generalization gap.\"}",
"{\"title\": \"References\", \"comment\": \"[1] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine\\nlearning and the bias-variance trade-off. arXiv preprint arXiv:1812.11118, 2018.\\n[2] Aidan N Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, and Geoffrey E Hinton. Targeted\\ndropout. 2018.\\n[3] Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing\\nthe description length of the weights. In Proceedings of the sixth annual conference on\\nComputational learning theory, pages 5\\u201313. ACM, 1993.\\n[4] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping\\nTak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima.\", \"arxiv_preprint_arxiv\": \"1609.04836, 2016.\\n[5] Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pages 3290\\u20133300, 2017.\\n[6] Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep\\nneural networks. arXiv preprint arXiv:1701.05369, 2017.\\n[7] Vaishnavh Nagarajan and J Zico Kolter. Generalization in deep networks: The role of distance\\nfrom initialization. arXiv preprint arXiv:1901.01672, 2019.\\n[8] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias:\\nOn the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014.\\n[9] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. Towards understanding the role of over-parametrization in generalization of neural networks. arXiv preprint arXiv:1805.12076, 2018.\\n[10] Ben Poole, Jascha Sohl-Dickstein, and Surya Ganguli. Analyzing noise in autoencoders and\\ndeep networks. arXiv preprint arXiv:1406.1831, 2014.\\n[11] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929\\u20131958, 2014.\\n[12] Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, and Michael W Mahoney. Hessian-based\\nanalysis of large batch training and robustness to adversaries. In Advances in Neural Information Processing Systems, pages 4949\\u20134959, 2018.\\n[13] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.\\n[14] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\n**Bayesian Pruning Studies\\u2019 Relationship to Our Study and the Generalization-Stability Tradeoff**\\n\\nThe reviewer notes that the effect of noise on generalization has been studied. We wish to make clear that our related work and discussion sections identified the novel role of noise in our analysis, which required making comparisons to studies of dropout, noise injection, and pruning [2, 3, 5, 6, 10, 11]. In particular, in the last paragraph of our related work section, we stated that the noise in Bayesian/dropout pruning \\u201czeroes\\u201d and restores (or otherwise noises) the weights\\u2019 values very frequently (e.g., a new weight vector is sampled for every single data point in a batch [2, 5, 6]), *but* (among other differences) the noise in *iterative* pruning permanently zeroes subsets of the weights and does so comparatively rarely (e.g., a new subset of the weight vector is replaced with zeroes at roughly ten distinct points during training). To make our study\\u2019s focus more clear, we changed the phrase \\u201cpruning noise\\u201d in this section to \\u201citerative DNN pruning noise\\u201d. Please let us know if you are aware of any papers that frame *iterative* DNN pruning as noise injection, as we believe ours is the first. Further, we believe this contribution is important because iterative pruning is, to the best of our knowledge, the most popular DNN pruning approach in the literature and deployment (e.g., TensorFlow\\u2019s pruning library uses iterative magnitude pruning [14]).\\n\\nThe reviewer notes that the KL-divergence mechanism from the Bayesian literature makes our main finding (the presence of a generalization-stability tradeoff) unsurprising. Our understanding, however, is that larger magnitude weights need not cause larger KL divergences. For example, Bayesian pruning with a log-uniform prior has the effect of \\u201cessentially either pruning the parameter or keeping it close to the maximum likelihood estimate\\u201d [5]. Furthermore, even with a prior that discourages large weights, shrinking the cost of communicating the model (the KL divergence between the weights and their prior) is balanced with shrinking the cost of communicating prediction errors [5]. Thus, large weights can survive Bayesian pruning and weight shrinkage mechanisms if these weights are important to accuracy, suggesting Bayesian pruning is quite different than the unstable pruning we analyze, which is (at least initially) quite damaging to the accuracy. Notably, in the Bayesian literature, pruning that leaves large weights intact is also empirically supported: [5] found that the horseshoe prior, which encourages unpruned weights to be near 0 and \\u201ccan, potentially, offer better regularization and generalization\\u201d, actually led to worse generalization than the log-uniform prior, which doesn't aim to shrink large weights. Similarly, [6] discussed sparsity, not magnitude shrinkage, as the source of the generalization benefits in their Bayesian pruning approach.\\n\\nImportantly, while all of the pruning studies we\\u2019re aware of avoid pruning large weights in order to minimize harm to the loss, our study has shown that harm to the loss may be related to how pruning engenders better generalization. 
As such, our conclusion suggested that pruning algorithms may be improved by balancing the apparent generalization benefits of short-term accuracy degradation with the risks such unstable pruning entails (e.g., damaging capacity too much). If the reviewer is aware of a study (Bayesian or not) showing unstable pruning improving generalization, or a study recommending the pruning of the most important weights, then please let us know of the relevant paper(s), as we would like to add such studies to our related work.\\n\\n**Overparameterization and Pruning**\\n\\nWe wish to clarify that, since we are primarily considering iterative pruning, the model becomes smaller as training progresses, making it unclear whether the benefit to training of overparameterization (that was identified by the reviewer) is diminished by the pruning process. This motivates the puzzling question we posed, and we made this clearer by updating the introduction to reflect that we are talking about pruning throughout training. Also, while the reviewer\\u2019s understanding of how overparameterization helps generalization is valuable (and consistent with some of the work we cited; for example [7]), our understanding is that this is an active area of research: \\u201cDespite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with overparameterization\\u201d [9]; also, see [1, 4, 8, 12, 13].\"}",
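The description-length balance invoked in this response (going back to Hinton & Van Camp, cited in the accompanying reference list) has the familiar variational form. The following is a schematic restatement of that balance, not any cited paper's exact notation:

```latex
% Schematic variational / MDL pruning objective: the cost of communicating
% prediction errors is balanced against the cost of communicating the model,
% i.e., the KL divergence between the weight posterior q and the prior p.
\mathcal{L}(q) \;=\;
\underbrace{\mathbb{E}_{w \sim q}\big[-\log p(\mathcal{D} \mid w)\big]}_{\text{error cost}}
\;+\;
\underbrace{\mathrm{KL}\big(q(w)\,\|\,p(w)\big)}_{\text{model cost}} .
```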
"{\"title\": \"References\", \"comment\": \"[1] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable\\nneural networks. arXiv preprint arXiv:1803.03635, 2018.\\n[2] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections\\nfor efficient neural network. In Advances in neural information processing systems, pages 1135\\u20131143, 2015.\\n[3] Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in neural information processing systems, pages 164\\u2013171, 1993.\\n[4] Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural\\ninformation processing systems, pages 598\\u2013605, 1990.\\n[5] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for\\nefficient convnets. arXiv preprint arXiv:1608.08710, 2016.\\n[6] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for\\nmodel compression. arXiv preprint arXiv:1710.01878, 2017.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\n**The Absence of a Tautology**\\n\\nTo better convey our findings, we would like to clarify how we measure test accuracy and instability. The generalization-stability tradeoff is essentially a positive correlation between test accuracy at convergence (i.e., generalization), and the average drop in test accuracy from immediately before to immediately after pruning (i.e., average instability, where the average is computed across all pruning events). Put another way, we are *not* investigating the relationship between an increase in test accuracy following a pruning event and the final increase in test accuracy. Rather, we found that when pruning immediately causes large *drops* in test accuracy (high instability), converged (or final) test accuracy increases. To make this more apparent in our main text, we removed the absolute value component of the instability calculation. Additionally, we added to Section 3 a timeline that shows when instability is calculated with respect to pruning events.\\n\\nRemoving the absolute value component of the instability calculation had no qualitative effect on the demonstrated relationship between instability and final generalization (in Figure 1, the statistical significance of the correlation actually increased). Note that our unchanged results are unsurprising, as negative instabilities were rare to begin with; i.e., a negative instability implies that pruning somehow had an immediately positive effect on test accuracy, which should be (intuitively) and is (see the updated Figure A2) rare. In any case, we believe that refraining from taking the absolute value has made the suggested correlation between accuracy drops and generalization (highlighted with arrows in Figure 1\\u2019s accuracy plots) more clearly supported by the statistical correlations (shown in Figure 1\\u2019s correlation plots). \\n\\nWe hope to have made clear that our main finding lacks a tautological nature. Indeed, that higher final test accuracy (generalization) is facilitated by pruning that causes larger immediate damage to the test accuracy (instability) is not at all tautological (or obvious). Correspondingly, when used in practice for model compression, magnitude pruning almost always targets small weights [1, 2, 5, 6]. Similarly, we are unaware of any suggestion that DNN pruning instability might be helpful\\u2014-the canonical approaches OBD and OBS actually use second-order information about the loss to ensure pruning is as *stable* as possible [3, 4]. Please let us know if you are aware of a DNN pruning paper that advocates any disruptiveness/instability in pruning, as we would like to add it to our related work.\\n\\n**Moving Derivation from Appendix**\\n\\nWhile we discussed all of our contributions in the main text, we chose to put the mathematical derivation of our novel pruning target in the appendix to save space. However, we agree with the reviewer that this derivation is critical to the contribution, and have therefore moved the derivation into the main text. Please note that the paper is a little longer than 8 pages as a result of this change. \\n\\n**The Capacity Effect of Temporarily Noising/Removing Weights**\\n\\nWe wish to make clear that the pruning noise we apply affects a particular weight for no more than one segment of training time, where a segment is a series of N consecutive batches (we edited our main text to make this more clear). 
After a weight has been zeroed/noised for N batches, it resumes normal training and can contribute to the model again. That said, the reviewer is correct in that weights that were temporarily zeroed do not necessarily learn upon reentering the model, so our approach may effectively remove some weights. However, we observed (see Figure 5) that pruning the reentered weights at convergence resulted in a marked drop in performance (for all noise schemes except \\u201cZeroing 1500\\u201d), showing that the reentered weights had typically learned after reentry, and that temporary zeroing is therefore less harmful to capacity than permanent pruning. We added this explanation to our main text.\"}",
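To make the metric described in these responses concrete, here is a minimal sketch of how the (signed, post-revision) instability could be computed. Variable names are illustrative; this is an interpretation of the text above, not the authors' code:

```python
def mean_instability(accuracy_log):
    """Average signed drop in test accuracy across pruning events.

    accuracy_log: list of (acc_before, acc_after) pairs, where test accuracy
    is measured immediately before and immediately after each pruning event.
    Positive values mean pruning immediately hurt accuracy (instability).
    With the absolute value removed, the rare cases where pruning immediately
    helps show up as negative drops instead of inflating the average.
    """
    drops = [before - after for before, after in accuracy_log]
    return sum(drops) / len(drops)
```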
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies a puzzling question: if larger parameter counts (over-parameterization) leads to better generalization (less overfitting), how does pruning parameters improve generalization? To answer this question, the authors analyzed the behaviour of pruning over training and finally attribute the pruning's effect on generalization to the instability it introduces.\\n\\nI tend to vote for a rejection because \\n(1) The explanation of instability and noise injection is not new. Pruning algorithms have long been interpreted from Bayesian perspective. Some parameters with large magnitude or large importance contribute to large KL divergence with the prior (or equivalently large description length), therefore it's not surprising that removing those weights would improve generalization. \\n(2) To my knowledge, the reason why over-parameterization improves generalization (or reduces overfitting) is because over-parameterized networks can find good solution which is close to the initialization (the distance to the initialization here can be thought of as a complexity measure). In this sense, the effect of over-parameterization is on neural network training. However, pruning is typically conducted after training, so I don't think the fact that pruning parameters improves generalization contradicts the recent generalization theory of over-parameterized networks. Particularly, these two phenomena can both be explained from Bayesian perspective.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper mainly studies the relationship between the generalization error and mean/variance of the test accuracy. The authors first propose a new score for pruning called E[BN]. Then, the authors observe the generalization error and the test accuracy mean/variance for pruning large score weights and small score weights for VGG11, ResNet18, and Conv4 models. From these experiments, the authors observe that pruning large score weights generates instable but high test accuracy and smaller generalization gap compared to pruning small score weights. The authors additionally study some other aspects of pruning (e.g., pruning as a noise injection) and conclude the paper.\\n\\nOverall, I am not sure whether the observation holds in general due to the below reasons. \\n\\n- The authors proposed a new score E[BN] and all experiments are performed on this. However, how it differs from the usual magnitude-based pruning in practice is unclear. I would like to know whether similar behavior is observed for the na\\u00efve magnitude-based pruning.\\n\\n- I think that the author\\u2019s observation is quite restricted and cannot extend to a general statement since the experiments are only done for pruning small score/large score weights. To verify the generalization and instability trade-off, I believe that it is necessary to examine several (artificial) pruning methods controlling the instability of test accuracies and check whether the proposed trade-off holds. For example, one can design pruning methods that disconnect (or almost disconnect) the network connection from the bottom to the top (i.e., pruned network always outputs constant) with some probability to extremely increase the instability.\\n\\n- The authors did not report the results for high sparsity.\\n\\nBesides, I am not sure the meaning of the instability since when the test accuracy of the pruned model is higher than that of the unpruned model, the instability could be large.\", \"other_comments\": \"- The first paragraph mentions that the generalization gap might be a function of the number of parameters. However, I think that it is quite trivial that the generalization gap is not a function of the number of parameters while it only provides the upper bound.\\n----------------------------------------------------------\\nI have read the authors' response. Thanks for clarifying the definition of instability and additional experiments with high sparsity. However, I will maintain my score due to the following concern. \\n\\nThe remaining concern is that the current evidence for verifying generalization-stability tradeoff is not convincing as the authors presented only some examples having small and large instability (e.g., pruning smallest/largest weights) under the same pruning algorithm. I think that the results would be more convincing if the authors add a test accuracy plots given a fixed prune ratio, whose x-axis is controlled instabilities (e.g., from 10% to 90%) among various pruning algorithms (other than magnitude-based ones, e.g., Hessian based methods). It would be much more interesting if the same instability results same test accuracy even for different pruning algorithms.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is an empirical study that looks into the effect of neural network pruning on both the model accuracy as well as the generalization risk (defined as the difference between the training error and the test error). It concludes that while some pruning methods work, others fail. The authors argue that such discrepancy can be explained if we look into the impact of pruning on \\\"stability\\\".\\n\\nThe first major issue I have with the paper is in their definition of stability. I don't believe that this definition adds any value. Basically, the authors define stability by the difference between the test accuracy pre-pruning and post-pruning. This makes the results nearly tautological (not very much different from claiming that the test accuracy changes if the test accuracy is going to change!). One part where this issue is particularly important is when the authors conclude that \\\"instability\\\" leads to an improved performance. However, if we combine both the definition of test accuracy and the definition of \\\"instability\\\" used the paper, what the authors basically say is that pruning improves performance. To see this, note that a large instability is equivalent to the statement that the test accuracy changes in any direction (there is an absolute sign). So, the authors are saying that the test accuracy after pruning improves if it changes, which is another way of saying that pruning helps. \\n\\nThe second major issue is that some of the stated contributions in the paper are not discussed in the main body of the paper, but rather in the appendix. For example, the authors mention that one of their contributions is a new pruning method but that method is not described in the paper at all, only in the appendix. If it is a contribution, the authors should include it in the main body of the paper. \\n\\nThird, there are major statements in the paper that are not well-founded. Take, for example, the experiment in Section 4.4, where they apply zeroing noise multiple times. The authors claim that since the weights are only forced to zero every few epochs, the network should have the same capacity as the full network (i.e. VC capacity). I disagree with this. The capacity should reduce since those weights are not allowed to be optimized and they keep getting reset to zero every few epochs. They are effectively as if they were removed permanently.\"}"
]
} |
HklCk1BtwS | Word embedding re-examined: is the symmetrical factorization optimal? | [
"Zhichao Han",
"Jia Li",
"Xu Li",
"Hong Cheng"
] | As observed in previous works, many word embedding methods exhibit two interesting properties: (1) words having similar semantic meanings are embedded closely; (2) analogy structure exists in the embedding space, such that ``\emph{Paris} is to \emph{France} as \emph{Berlin} is to \emph{Germany}''. We theoretically analyze the inner mechanism leading to these nice properties. Specifically, the embedding can be viewed as a linear transformation from the word-context co-occurrence space to the embedding space. We reveal how the relative distances between words change during this transformation process, and how this linear transformation gives rise to these good properties. Based on the analysis, we also answer the question of whether the symmetrical factorization (e.g., \texttt{word2vec}) is better than the traditional SVD method. We propose a method to improve the embedding further. The experiments on real datasets verify our analysis. | [
"word embedding",
"matrix factorization",
"linear transformation",
"neighborhood structure"
] | Reject | https://openreview.net/pdf?id=HklCk1BtwS | https://openreview.net/forum?id=HklCk1BtwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jCnKEzykEE",
"HJla4ULhiH",
"S1ly8M8hjB",
"BJxUTnx3ir",
"B1gw6EE7qr",
"SJx5aT16tB",
"SJgNUbOiFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724555,
1573836340625,
1573835334531,
1573813438186,
1572189375289,
1571777985940,
1571680588188
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1488/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1488/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1488/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1488/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1488/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1488/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper studies word embeddings using the matrix factorization framework introduced by Levy et al 2015. The authors provide a theoretical explanation for how the hyperparameter alpha controls the distance between words in the embedding and a method to estimate the optimal alpha. The authors also provide experiments showing the alpha found using their method is close to the alpha that gives the highest performance on the word-similarity task on several datasets.\\n\\nThe paper received 2 weak rejects and 1 weak accept. The reviews were unchanged after the rebuttal, with even the review for weak accept (R2) indicating that they felt the submission to be of low quality. Initially, reviewers commented that while the work seemed solid and provided insights into the problem of learning word embeddings, the paper needed to improve their positioning with respect to prior work on word embeddings and add missing citations. In the revision, the authors improved the related work, but removed the conclusion.\\n\\nThe current version of the paper is still low quality and has the following issues\\n1. The paper exposition still needs improvement and it would benefit from another review pass\\nFollowing R3's suggestions, the authors have made various improvements to the paper, including modifying the terminology and contextualizing the work. However, as R3 suggests, the paper still needs more rewriting to clearly articulate the contribution and how it relates to prior work throughout the paper. In addition, the conclusion was removed and the paper still needs an editing pass as there are still many language/grammar issues.\", \"page_5\": \"\\\"top knn\\\" -> \\\"top k\\\"\\n\\n2. More experimental evaluation is needed.\\nFor instance, R1 suggested that the authors perform additional experiments on other tasks (e.g. NER, POS Tagging). The authors indicated that this was not a focus of their work as other works have already looked at the impact of alpha on other task. While prior works has looked at the correlation of alpha vs performance on the task, they have not looked at whether alpha estimated the method proposed by the author will give good performance on these tasks as well. Including such analysis will make this a stronger paper.\\n\\nOverall, there are some promising elements in the paper but the quality of the paper needs to be improved. The authors are encouraged to improve the paper by adding more experimental evaluation on other tasks, improving the writing, as well as incorporating other reviewer comments and resubmit to an appropriate venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks sincerely for your review!\", \"comment\": \"Thank you very much for the valuable comments. We agree on almost every point in your comment. And we revise the narrative of some key points according to your suggestion.\\n\\n1. Firstly, we revise the term word2vec to SGNS in the paper:) \\n\\n2. About the discussion on the parameter $\\\\alpha$.\\nThanks for this comment. We totally agree and revised the narrative of the discussion about $\\\\alpha$. Particularly, we are sorry we once missed this related work by Artetxe et al. (CoNLL'18) and we add it into the revised version.\\n\\n3. About the claim that SGNS is performing a symmetric factorization.\\nOur claim that SGNS is performing a symmetric factorization is based on the existing works. The evidence includes: (1) \\\"the factorization achieved by SGNS\\u2019s training procedure is much more \\u201csymmetric\\u201d...\\\" stated in section 3.3 of Levy et al. (TACL'15, https://www.aclweb.org/anthology/Q15-1016.pdf); (2) \\\"In Levy and Goldberg [2014] where the authors explained the connection between skip-gram Word2Vec and matrix factorization, $\\\\alpha$ is set to 0.5 to enforce symmetry\\\" stated in section 2.2 of Yin and Shen (NeurIPS'18); \\\"Levy and Goldberg [2014] showed that skip-gram Word2Vec\\u2019s objective is an implicit symmetric factorization of the Pointwise Mutual Information (PMI) matrix\\\" stated in section 2.3 in Yin and Shen (NeurIPS'18); \\\"For the popular skip-gram [Mikolov et al., 2013b] and GloVe [Pennington et al., 2014], $\\\\alpha$ equals 0.5 as they are implicitly doing a symmetric factorization\\\" stated in section 5.1 in Yin and Shen (NeurIPS'18, https://papers.nips.cc/paper/7368-on-the-dimensionality-of-word-embedding.pdf).\\n\\nHowever, we realized this point is arguable as you mentioned. Anyway, it is not the key point in our paper. The purpose of us to use this claim is to highlight the influence of the parameter $\\\\alpha$. Besides, if the asymmetric decomposition is better than the symmetric decomposition, we can explicitly add this asymmetry into the SGNS architecture. We change the narrative in the updated version.\\n\\n\\n4. About the experiments.\\nOur intuition of tuning $\\\\alpha$ according to the specific training dataset is that we think every training dataset is focusing on some particular words or words pairs. For example, the dataset \\\"bruni_men.txt\\\" are focus on the similarity of word pairs appears in this dataset, and \\\"luong_rare.txt\\\" may focus on the similarity of other word pairs. So, if the goal is to make the word embedding perform well for a specific dataset, we can particularly consider the relative distance of words (word similarity task) and the relative distance of pairs (e.g. (\\\"France\\\", \\\"Germany\\\") and (\\\"Paris\\\", \\\"Berlin\\\")). Or if the goal is to learn a general word embedding that is not designed for a specific task, we can consider all words instead of some particular words. Besides, if we care about the efficiency, we can sample some words instead of using all words.\"}",
"{\"title\": \"Thanks a lot for your review!\", \"comment\": \"1. About the contributions and the connection with previous works\\nWhile we adopted the matrix factorization framework that was once proposed (Levy, et al. (NeurIPS'14)) to discuss word embedding, our focus is different from the previous studies. As mentioned in your review, we provide the theoretical explanation why the word embedding exhibits nice properties. Besides, while previous studies (e.g. Bullinaria et al. (2012), Levy et al. (2015), Artetxe et al. (CoNLL'18)) empirically observed the parameter $\\\\alpha$ influences the quality of word embedding, they do not provide theoretical explanation or the method about how to set this parameter. On the contrary, we provide a theoretical explanation for this behavior, and derive a method to automatically find its optimal value. In detail, the relation between this paper and previous works are summarized as follows:\\n(1) The relation between this paper and Levy et al (NeurIPS'14) is that Levy et al (NeurIPS'14) proved that SGNS is implicitly factorizing the (shifted) PMI matrix. In this paper, we adopt this matrix factorization framework to discuss the word embedding. With this assumption, we provide theoretical explanation about the word similarity and analogy structures existing in the word embeddings (e.g. SGNS). \\n(2) The relation between this paper and Bullinaria et al. (2012), Levy et al. (2015), Artetxe et al. (CoNLL'18) and other existing words that discussed the parameter $\\\\alpha$ is that, these existing methods empirically found that the parameter $\\\\alpha$ can influence the quality of the learnt word embeddings. But these methods did not give the theoretical explanation to explain why $\\\\alpha$ has such influence and they did not give out a clear method to guide us to find the optimal $\\\\alpha$. In this paper, we theoretically explain how $\\\\alpha$ influences the word embedding, and provide the method as a guide to find the optimal $\\\\alpha$.\\n(3) The relationship between this paper and Yin & Shen (2018) is that, Yin & Shen and this paper both discuss the word embedding under the matrix factorization framework. Yin & Shen mainly focuses on the dimensionality of the word embedding such that how to choose the dimension of the learnt word embedding, while this paper focuses on the theoretical explanation about the inner mechanism of the word similarity and analogy structure existed in the word embedding. Besides, section 5.1 in Yin & Shen's paper mentioned that $\\\\alpha$ and they suggest that $\\\\alpha$ should be larger because they states that \\\"as $\\\\alpha$ becomes larger, the embedding algorithm becomes less sensitive to over-fitting caused by the selection of an excessively large dimensionality k\\\". However, in the real case, the very large $\\\\alpha$ does not produce the embedding with the best quality. This point is verified in our experiments and the experiments in Bullinaria et al. (2012), Levy et al. (2015), Artetxe et al. (CoNLL'18), etc,.\\n\\n2. About the mentioned related work.\\nWe are sorry we once missed it. We add it into the revised version.\"}",
"{\"title\": \"Thanks a lot for your review.\", \"comment\": \"Firstly, thanks a lot for your comments! Here are the key points:\\n\\n1. About the grammar issues.\\nWe correct these grammar typos according to your comment.\\n\\n2. About other downstream tasks.\\nFirstly, we agree with you that it might be better to conduct more experiments on the downstream task to further show the impact of the alpha parameter. However, we have to say that the influence of alpha has been empirically observed by previous studies (e.g. Bullinaria and Levy (2012), Levy et al. (TACL'15), Artetxe et al. (CoNLL'18)), and our purpose is not to empirically re-examine this phenomenon. Differently, we focus on the theoretical understanding of the inner reason why word embedding exhibits these two well-known properties, namely the word similarity and the analogy structure. Particularly, we theoretically explain how ALPHA influences the word embedding. The purpose of the conducted experiments is to verify these theoretical analysis.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors study the word embedding, with a particular emphasize on the word2vec or similar strategies. To this end, the authors consider the matrix factorization framework, previously introduced in the literature, and also study the influence of an hyperparameter denoted by alpha. Roughly speaking, there are two major parts in the paper. On one hand, it explains the reasons why the word embedding schemas provide nice properties, by defining the embedding as a low rank transformation mechanism. On the other hand, they propose to choose optimally the hyperparameter alpha in order to ameliorate word embedding by better preserving the distance structure. Conducted experiments are convincing.\\n\\nThe paper is well written, and derivations seem correct.\\n\\nWe think that the major issue in this work is that it does not provide significant contributions with respect to the state of the art. It seems that the contributions are rather incremental compared to related works, such as Levy et al. 2015 and Yin & Shen 2018. In the latter, it is proven that most existing word embedding schemas can be formulated as low rank matrix approximations, either explicitly or implicitly. The submitted paper does not provide new significant results.\\n\\nMoreover, the authors have failed to provide connections to other related works, or even cite them, including papers that consider word embedding as asymmetric low-rank projections. See for example:\\nFei Tian,\\u00a0Bin Gao,\\u00a0Enhong Chen,\\u00a0Tie-Yan Liu\\nLearning Better Word Embedding by Asymmetric Low-Rank Projection of Knowledge Graph\\nJ. Comput. Sci. Technol. (2016) 31: 624.\", \"https\": \"//doi.org/10.1007/s11390-016-1651-5\", \"also_available_on_arxiv\": \"https://arxiv.org/abs/1505.04891\\n\\n\\n------\\nReply to Rebuttal\\n\\nWe thank the authors for modifying the paper and the reply to out comments and suggestions. However, we still think that the paper is of low quality, due to straightforward extension to the paper of Levy et al from 2015.\\n\\nThe authors have added a small section on related works, as recommended. However, they have removed the \\\"Conclusion\\\" section. The paper no longer has a conclusion and potential work that ends the paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"\", \"summary\": \"=======\\nThis paper provides a closer look at the well-studied problem of learning word embeddings. In particular, it looks at the set of embedding methods that explicitly or implicitly perform a matrix factorization and tries to understand why the word embeddings exhibit analogy structure and why words that are semantically similar get embedded close together. The mechanism it comes up with has to do with the alpha parameter that represents the powers of singular values of the matrix that was factorized to estimate the embeddings. It turns out that alpha controls the distance between the words in the embedding transformation process. Next the paper discusses how to choose/estimate alpha to get better quality embeddings. Results are shown on several word similarity tasks.\", \"comments\": \"=======\\nThe paper offers fresh insights into the well studied problem of learning word embeddings. The impact of the alpha parameter is definitely interesting w.r.t the quality of embeddings learned. That said, the paper does have a few problems. First, though the paper is well motivated and puts itself nicely in context of previous work, it needs a copy-editor as there are many language/grammar issues some of which I highlight below. \\n\\n\\nSecond, and the main problem with the paper, is that the properties of the alpha parameter are intriguing but the experimental evaluation is underwhelming. The paper also needs to show the impact of the alpha parameter on the quality of embeddings learned for some downstream task e.g. NER, POS Tagging. Just showing results on word similarity tasks and computing correlations is not very insightful or useful. \\n\\n\\nGrammar issues (subset):\", \"page_1\": \"\\\"..has an important influence to the...\\\"\", \"page_6\": \"\\\"The first is to verify....\\\"\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper explores the role of the implicit alpha parameter when learning word embeddings. More concretely, word embeddings work by either implicitly or explicitly factorizing a co-occurrence matrix, and the underlying parameter alpha controls how the singular values are weighted between the word and the context vectors. The authors provide theoretical insights on the role of alpha in relation with the original co-occurrence matrix, and propose a new method to find its optimal value.\\n\\nI think that this is overall a solid work. The paper provides a new perspective in the workings of word embeddings that I find interesting, it is theoretically well-founded (although I did not check all derivations in detail), and the presentation is clear.\\n\\nHowever, I think that the paper does a poor job in putting its contributions into context in relation to previous work. In particular, the role of alpha in word embeddings was already studied empirically by Artetxe et al. (CoNLL'18, https://www.aclweb.org/anthology/K18-1028.pdf) for both word analogy and word similarity tasks, to the extent that Figure 2 in both papers is showing the exact same curves. However, the authors do not even cite it. As acknowledged in the paper, other authors like Levy et al. (TACL'15) also observed that the value of alpha was important in their experiments.\\n\\nI think that the right narrative for the paper should more in the line of \\\"previous work showed that alpha behaves this and that way; we provide a theoretical explanation for this behavior, and derive a method to automatically find its optimal value\\\". However, starting from the title (\\\"word embedding re-examined: is the symmetrical factorization optimal?\\\", when it was already known that it wasn't) and the abstract (where only the statement that \\\"we propose a method to find the optimal alpha\\\" corresponds to a novel contribution), the paper does a poor job in identifying and properly contextualizing its real contributions. More importantly, the paper does not try to establish any connection between the authors own theory and the empirical findings from previous work.\\n\\nIn terms of the actual content, the authors constantly claim that word2vec is performing a symmetric factorization (e.g. \\\"the original word2vec is implicitly performing a symmetric factorization, thus implying the alpha equal to 0.5) as if it was something obvious or well-known. I might be missing something here, but I do not see why this is the case. Following your notation, let's say that word2vec is implicitly factorizing M = E*C^T, where E are the word embeddings and C are the context embeddings. One could multiply E with any arbitrary invertible matrix W, and C by the transpose of its inverse W^-T, which could be chosen to completely break any symmetry, yet the objective value of word2vec would not change at all, as (E*W)*(C*W^-T)^T = E*C^T. In other words, there is nothing in the training objective of word2vec that forces a symmetric factorization, and there is always an optimal solution with respect to this training objective that is arbitrarily asymmetric.\\n\\nAnother point that raises concerns to me is that the optimal value of alpha is determined by the vocabulary of the evaluation task. 
It would make sense if the optimal alpha depended on the nature of the task (e.g. syntactic vs semantic), but I do not have any intuition (nor do the authors provide) as of why the vocabulary would be anyhow relevant. More importantly, this does not seem generalizable beyond a few intrinsic tasks as, in the general case, one wants good embeddings for the full vocabulary. In either case, I think that this point deserves more attention in the paper.\\n\\nI also find the experimental evaluation to be somewhat weak. In particular, the proposed theory focuses in two phenomena (word similarity and word analogy) as stated in the abstract itself, but the empirical evaluation is limited to the word similarity task.\\n\\nAlso, this is a minor detail and it did not influence my score, but I dislike that the authors use \\\"word2vec\\\" to refer to skip-gram with negative sampling throughout the paper. I would suggest to either use SGNS (which is quite standard) or simply skip-gram.\"}"
]
} |
BJeRykBKDH | Empowering Graph Representation Learning with Paired Training and Graph Co-Attention | [
"Andreea Deac",
"Yu-Hsiang Huang",
"Petar Velickovic",
"Pietro Lio",
"Jian Tang"
] | Through many recent advances in graph representation learning, performance on tasks involving graph-structured data has substantially increased---mostly on tasks involving node-level predictions.
Most prior work attempts to predict these graph-level properties while considering only one graph at a time---not allowing the learner to directly leverage structural similarities and motifs across graphs. Here we propose a setup in which a graph neural network receives pairs of graphs at once, and extend it with a co-attentional layer that allows node representations to easily exchange structural information across them. We first show that such a setup provides natural benefits on a pairwise graph classification task (drug-drug interaction prediction), and then expand to a more generic graph regression setup: enhancing predictions over QM9, a standard molecular prediction benchmark. Our setup is flexible, powerful and makes no assumptions about the underlying dataset properties, beyond anticipating the existence of multiple training graphs. | [
"graph neural networks",
"graph co-attention",
"paired graphs",
"molecular properties",
"drug-drug interaction"
] | Reject | https://openreview.net/pdf?id=BJeRykBKDH | https://openreview.net/forum?id=BJeRykBKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PpdNEoBky",
"H1lNy0q3sr",
"BkeBspq3sH",
"rylL76qnsS",
"rkgsaIwdqH",
"B1xagNeTFH",
"Hkg6hDIotB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724528,
1573854684226,
1573854621508,
1573854493924,
1572529858930,
1571779572797,
1571674036567
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1487/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1487/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1487/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes combining paired attention with co-attention. The reviewers have remarked that the paper is will written and that the experiments provide some new insights into this combination. Initially, some additional experiments were proposed, which were addressed by the authors in the rebuttal and the new version of the paper. However, ICLR is becoming a very competitive conference where novelty is an important criteria for acceptance, and unfortunately the paper was considered to lack the novelty to be presented at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Firstly, we would like to thank the reviewer for their kind thoughts and comments on our paper!\", \"regarding_your_comment_on_the_technical_novelty\": \"our proposal concerns the combination of paired training with co-attention, which seeks to generically exploit and match similarities between two given graph structures (using a co-attention mechanism) in a hierarchical way (through stacking of co-attentive layers).\\n\\nThis is a generic idea that extends beyond just the drug-drug side effect prediction task, and we had demonstrated its utility with further graph regression experiments on QM9. In the revision of the paper we submitted just now, we also include results on two standard graph classification datasets (D&D/PROTEINS; Section 5). In both of these kinds of datasets, it can be observed that no known way of \\u201cpairing\\u201d the graph structures is given, yet our methodology manages to extract additional benefits compared to its respective baseline GNN.\\n\\nThank you for your comments regarding existing baselines on the drug-drug interaction task. We would like to note that the proposed comparison with concatenated GAT embeddings is already present in the paper (and given under the name of \\u201cMPNN-Concat\\u201d). This architecture has turned off co-attention, and comparatively evaluating it against the full co-attentive model (along with additional ablation studies against models such as CADDI and Late-Outer) represent our key evaluation, directly demonstrating the benefits of the different components of our model. \\n\\nWith respect to this, our comparison with Decagon primarily serves to put our results into context with the existing state-of-the-art (i.e. to show that our results are competitive). It should also be highlighted that our method does not require additional information that Decagon uses (such as protein-protein interaction graphs).\\n\\nLastly, thank you for pointing VGAEs as a possible additional option -- we have now cited VGAE appropriately at the end of Section 3. \\n\\nWe thank you once again for your thoughtful review!\"}",
"{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"We would like to start by thanking the reviewer for their highly constructive and useful comments.\\n\\nInitially, we would like to address your comment on the technical novelty: our proposal concerns the combination of paired training with co-attention, which seeks to generically exploit and match similarities between two given graph structures (using a co-attention mechanism) in a hierarchical way (through stacking of co-attentive layers). \\n\\nThis is a generic idea that extends beyond just the drug-drug side effect prediction task, and we had demonstrated its utility with further graph regression experiments on QM9. In the revision of the paper we submitted just now, we also include results on two standard graph classification datasets (D&D/PROTEINS; Section 5). In both of these kinds of datasets, it can be observed that no known way of \\u201cpairing\\u201d the graph structures is given, yet our methodology manages to extract additional benefits compared to its respective baseline GNN.\\n\\nTo answer your questions in turn (we are very open to further discussions, of course):\\n\\n1. We would like to confirm that, indeed, we do leverage one-hot encodings of the atom type (as is common practice in the MPNN paper, for example). Effectively, a separate vector representation is learnt for each atom in this way. We found this to be a critical component to the performances we obtained on the computational chemistry datasets. Thank you for pointing this out to us -- we are now making it more clear in the (just submitted) revision of the paper.\\n\\n2. Currently, we seek to encode all bond types as discrete (one-hot) representations. The approach is typically to assume \\u201cspecial\\u201d edge types for bonds that are situated in e.g. a benzene ring, (to specify that they are not quite e.g. single or double bonds).\\n\\n3. \\na) We provided some loose guidelines on choosing K in the paper -- generally, choosing any K > 1 will yield benefits compared to self-pairing and other K = 1 variants. From there, the differences are more fine-grained and depend on the chemical property being predicted---but the overall performance does not change drastically.\\n\\nb) We would like to note that we have now updated the QM9 prediction MAEs to match unscaled values. Unfortunately, a direct comparison of our work with previously published numbers is extremely difficult, given that different manuscripts utilise different scales of the output labels (e.g. Gilmer et al. incorporate a ratio to the DFT chemical accuracy). In addition, our architecture aims to predict all molecular properties simultaneously, which is not comparable to training individual models for every property (as done by Gilmer et al., for example). With this in mind, we note that the (K = 1, self) model is, in terms of data flow between the atom representations, equivalent to the state-of-the-art MPNN model of Gilmer et al.\\n\\nThank you very much for your review, which has certainly helped make our contributions stronger!\"}",
"{\"title\": \"Reply to AnonReviewer4\", \"comment\": \"We would like to thank the reviewer for their careful and detailed review.\\n\\nInitially, we would like to address your comment on the innovativeness of our contribution: our proposal concerns the combination of paired training with co-attention, which seeks to generically exploit and match similarities between two given graph structures (using a co-attention mechanism) in a hierarchical way (through stacking of co-attentive layers). \\n\\nThis is a generic idea that extends beyond just the drug-drug side effect prediction task, and we had demonstrated its utility with further graph regression experiments on QM9. In the revision of the paper we submitted just now, as per your advice, we also include results on two standard graph classification datasets (D&D/PROTEINS; Section 5). In both of these kinds of datasets, it can be observed that no known way of \\u201cpairing\\u201d the graph structures is given, yet our methodology manages to extract additional benefits compared to its respective baseline GNN.\\n\\nRegarding our DDI results, we note that our primary evaluation is the comparative ablation of various aspects of the MHCADDI model (directly evaluating the benefits of individual decisions such as paired training, co-attention, and multi-head attention, against a strong GNN-based baseline). We include results from Decagon (and others) to situate our results against the existing state-of-the-art, demonstrating the method is in essence competitive. On the issue of the \\u201cmodest\\u201d gains compared to Decagon, we would like to highlight that our method does not require additional information that Decagon uses (such as protein-protein interaction graphs).\\n\\nWe agree with the comment about the related work on DDI, and have now expanded each of the proposed strong baselines (RESCAL, DEDICOM, DeepWalk and Decagon) in turn within Section 3.3.\\n\\nFinally, we would like to note that we have attempted a variant of our model that takes into account both the sender and receiver node (i and j) when computing edge messages. We have found no tangible benefits to this implementation for the datasets considered.\\n\\nYour review has been very valuable to us in terms of improving the paper -- thank you once again!\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper presents a model to classify pairs of graphs which is used to predict sided effects caused by drug-drug-interactions (DDI).\\n\\nThe contribution of this work is to add attention connections between two graphs such that each node operation from one graph can attend on the nodes of the other graph. The paper shows good results in DDI prediction, although the performance gap with previous works (Zitnik et al., 2018) is modest.\\n\\nIn the related work they mention some works from Graph Neural Networks literature. But works from the benchmark experiments are not explained. I think they could also explain which are the similarities and differences of the proposed method vs these works they are comparing to.\\n\\nAnother way of improving the paper could be running more experiments beyond the QM9 dataset to corroborate the good performance of the algorithm.\\n\\nIn equation (2), a message that goes from node \\u201cj\\u201d to node \\u201ci\\u201d does not include node \\u201ci\\u201d as input into the edge operation. I think the GNN would be more powerful if both nodes \\\"i\\\" and \\\"j\\\" are input into the edge operation.\\n\\nIn summary, the main contribution of the paper is to add attention connections between two graphs. I do not feel it is innovative enough.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors proposed a method to extend graph-based learning with a co-attentional layer. Feeding graphs pairwisely into the model allows nodes to easily exchange information with nodes in other graphs as well as within the graph. This method outperforms other previous ones on a pairwise graph classification task (drug-drug interaction prediction).\\nThis model is generalized from Neural Message Passing for Quantum Chemistry (Justin Gilmer et al.) and Graph Attention Networks (Petar Velickovic et al.), but most ideas are directly from the two previous papers. Combining the two methods do provide insights into understanding the interactions between graphs and get really good results on DDI prediction, but the novelty is limited.\", \"questions\": \"1 Are atoms encoded as only atom numbers, charges and connected hydrogen atoms? Because some atoms might have much larger atom numbers than others, e.g. carbon (6) and sulfur (16), will there be some scale problems? Will one-hot encoding of atom type help (like in Neural Message Passing for Quantum Chemistry)?\\n2 According to the paper, bond types will be encoded as e_{ij}. But in molecules, bond type is way more complex than only single/double/triple bonds, especially for drug molecules which are enriched for aromatic systems. For example, bonds in benzene or pyridine rings are between single and double (also not necessarily 3/2). Are there other possible methods to encode graph edges?\\n3 In result table 2 of Section 4 (quantum chemistry), I didn\\u2019t see a principle of choosing K value and choosing neighbors because different properties reaches the lowest MAE at different K values. This might cause some confusion in real application. Moreover, the authors should compare the performance with previous methods.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work injects a multi-head co-attention mechanism in GCN that allows one drug to attends to another drug during drug side effect prediction. The motivation is good with limited technical novelty. The paper is well-written and well organized.\\n\\n\\nFor MHCADDI, it is performing binary classification for all side effect labels. It is different from Decagon\\u2019s setting, hence not comparable. Maybe also include Decagon-Binary?\", \"missing_baseline\": \"as its main innovation is using co-attention, it should compare with concatenated embedding generated from Graph Attention Network so that we know co-attention is better than independent attention on each drug (seems the authors have already attempted to do so but did not report it). Current baselines such as Decagon only use GCN with no attention mechanism. It could be also benefited by including VGAE.\"}"
]
} |
Bke61krFvS | Learning representations for binary-classification without backpropagation | [
"Mathias Lechner"
] | The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP), by substituting the computations that are unrealistic for physical brains to implement.
While FA algorithms have been shown to work well in practice, there is a lack of rigorous theory proving their learning capabilities.
Here we introduce the first feedback alignment algorithm with provable learning guarantees. In contrast to existing work, we do not require any assumption about the size or depth of the network except that it has a single output neuron, as in binary classification tasks.
We show that our FA algorithm can deliver its theoretical promises in practice, surpassing the learning performance of existing FA methods and matching backpropagation in binary classification tasks.
Finally, we demonstrate the limits of our FA variant when the number of output neurons grows beyond a certain quantity. | [
"feedback alignment",
"alternatives to backpropagation",
"biologically motivated learning algorithms"
] | Accept (Poster) | https://openreview.net/pdf?id=Bke61krFvS | https://openreview.net/forum?id=Bke61krFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9WcmfWTJKn",
"H1gC5r7mjS",
"SkxYmEmQoS",
"BylAPQmQjr",
"rJxE1TVAFH",
"BylesNeRKr",
"HJlPbgIsKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724500,
1573234069857,
1573233697385,
1573233510492,
1571863772009,
1571845272179,
1571672063004
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1486/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1486/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1486/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper provides a rigorous analysis of feedback alignment under two restrictions 1) that all, except the first, layers are constrained to realize monotone functions and 2) the task is binary classification. Overall, all reviewers agree that this is an interesting submission providing important results on the topic and as such all agree that it should feature at the ICLR program. Thus, I recommend acceptance. However, I ask the authors to take into account the reviewers' concerns and include a discussion about limitations (and general applicability) of this work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your comments, we hope we can address your concerns\", \"comment\": \"We thank the reviewer for their careful analysis of our manuscript.\\nNotably, we appreciate the reviewer's thoughts on the methodological approach and the legibility of the paper.\\n\\nFirst of all, we want to clarify the statement made by the reviewer \\\"The modified Direct Feedback Alignment (DFA) is applied where only signs are used to update the weights.\\\", which is not 100% correct.\\nThe mDFA update does not use the signs. Instead, the signs of the mDFA update equal the signs of the true gradient (at least for networks with a single output neuron), i.e., the angle between the update vector and the gradient stay within 90 degrees of each other.\\n\\nSecondly, we want to recapitulate that the premise of this paper is to seek alternatives to backpropagation for training multi-layer neural networks. Existing research on that topic consists primarily of empirical studies or requires strong assumptions to give any mathematical learning guarantee.\\n\\nFinally, we want to respond to the detailed concerns that the reviewer made:\\n(A) The monotonic layers are necessary for the DFA update (not to be confused with the \\\"gradient\\\" update). We want the DFA update vector to point in roughly the same direction as the gradient update. \\n(B) We want to develop a model that, I: is a universal approximator, and II: can be provably trained without backpropagation.\\nWe were able to achieve both, I and II, by constraining all layers on top of the first layer to non-negative weight values.\\n(C) Our target was particularly that the mDFA update rule and the RPROP update should be the same. The difference is that mDFA does not rely on any weight sharing and only employs a weakened form of a reciprocal error transport. Both are biological implausibilities of the backpropagation-of-error algorithm.\\n(D) Indeed, it has been shown that unconstrained networks perform specifically well with ReLU activation [1]. However, we assessed in the paper, that having only non-negative weights (our constraint) and non-negative activations (ReLU), hurts the expressiveness and learning of the network. We agree that resolving this limitation is an interesting problem; nevertheless, it is not the main focus of this paper.\\n(E) Generally, the training performance of a network with backpropagation is not affected by the size of the output layer. However, as we have shown empirically and theoretically, it does matter when training the network with mDFA. For details, see supplementary materials A.3.\\n\\n\\nWe hope that we could address most of the reviewer's concerns, and we hope the reviewer reconsiders this paper regarding their recommendation on acceptance.\\n\\n\\n[1] Glorot, Xavier and Bordes, Antoine and Bengio, Yoshua. \\\"Deep sparse rectifier neural networks\\\". AISTAT 2011.\"}",
"{\"title\": \"Thanks for your review, we included your suggestions into the paper\", \"comment\": \"We thank the reviewer for their thoughtful comments on our paper, especially for providing valuable related works, which we incorporated into our manuscript.\", \"we_want_to_respond_to_two_points_that_the_reviewer_made\": \"1. We have added a sub-section (2.3 \\\"Sign-symmetry algorithms\\\"), in which we briefly discuss the papers that the reviewer mentioned [1-3]\\n2. We have added a discussion, \\\"How does mDFA relate to the non-negative matrix factorization?\\\" (top of page 6), which provides a comparison to research on non-negative matrix factorization algorithms.\\nWe have updated the manuscript accordingly, and hope the reviewer supports the contributions of this work on biologically motivated learning algorithms.\\n\\n\\n[1] Liao, Qianli and Leibo, Joel Z and Poggio, Tomaso. \\\"How important is weight symmetry in backpropagation?\\\". AAAI 2016\\n[2] Moskovitz, Theodore H and Litwin-Kumar, Ashok and Abbott, LF. \\\"Feedback alignment in deep convolutional networks\\\". arXiv preprint 2018.\\n[3] Xiao, Will and Chen, Honglin and Liao, Qianli and Poggio, Tomaso. \\\"Biologically-plausible learning algorithms can scale to large datasets\\\". ICLR 2019\"}",
"{\"title\": \"Thanks for your review, we incorporated your feedback.\", \"comment\": \"We thank the reviewer for their thorough review of our paper, and their strong support on this research topic.\", \"we_want_to_respond_to_the_three_suggestions_the_reviewer_made\": \"1) We agree that the cited papers provide background and context to our work. Consequently, we have added the sub-section 2.3 \\\"Sign-symmetry algorithms\\\" to our revised submission, where we briefly discuss the papers [1-3]\\n\\n2) Agree. We added a statement about it at the end of the discussion **What about networks with more than one output neuron?** (bottom of page 5).\\n\\n3) The general trend observed in our experiments approximately aligns with the results of Bartunov et al. (2018), e.g., the accuracies achieved by the fully-connected network trained with BP on CIFAR-10.\\nHowever, several subtle differences explain the discrepancies between our results and the ones reported by Bartunov et al.:\\n- We evaluated 10-class subsampled CIFAR-100, whereas Bartunov et al. used standard CIFAR-10 (explains high standard deviation in our results)\\n- We performed a proper training-validation-test split, whereas Bartunov et al. reported the best-achieved test accuracy.\\n- The Fully-connected network of Bartunov et al. has three hidden layers (ours has only two)\\n- The CNN of Bartunov et al. is larger, i.e., 256 filters in the last layer (ours has only 96) and has an additional fully-connected layer between the last convolutional layer and the output layer (ours is an \\\"all-convolutional-net\\\")\\n- As the main contribution of Bartunov et al. was to benchmark various biologically inspired learning algorithms, they performed a more extensive hyperparameter search.\\nThis difference is further amplified for FA and DFA, because of the observation reported in Bartunov et al. (also confirmed in our experiments) that all FA variants are relatively sensitive to hyperparameter choice.\\nIn case there remain any concerns, the code to reproduce our results is publicly available.\\n\\nThe paper has been updated according to points 1 and 2.\\n\\nWe want to thank the reviewer for their time, and we hope that we could address the concerns of the reviewer.\", \"remark\": \"Reference to Lillicrap et al. has been updated.\\n\\n[1] Liao, Qianli and Leibo, Joel Z and Poggio, Tomaso. \\\"How important is weight symmetry in backpropagation?\\\". AAAI 2016\\n[2] Moskovitz, Theodore H and Litwin-Kumar, Ashok and Abbott, LF. \\\"Feedback alignment in deep convolutional networks\\\". arXiv preprint 2018.\\n[3] Xiao, Will and Chen, Honglin and Liao, Qianli and Poggio, Tomaso. \\\"Biologically-plausible learning algorithms can scale to large datasets\\\". ICLR 2019\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper examines the question of learning in neural networks with random, fixed feedback weights, a technique known as \\u201cfeedback alignment\\u201d. Feedback alignment was originally discovered by Lillicrap et al. (2016; Nature Communications, 7, 13276) when they were exploring potential means of solving the \\u201cweight transport problem\\u201d for neural networks. Essentially, the weight transport problem refers to the fact that the backpropagation-of-error algorithm requires feedback pathways for communicating errors that have synaptic weights that are symmetric to the feedforward pathway, which is biologically questionable. Feedback alignment is one approach to solving the weight transport problem, which as stated above, relies on the use of random, fixed weights for communicating the error backwards. It has been shown that in some cases, feedback alignment converges to weight updates that are reasonably well-aligned to the true gradient. Though initially considered a good potential solution for biologically realistic learning, feedback alignment both has not scaled up to difficult datasets and has no theoretical guarantees that it converges to the true gradient. This paper addresses both these issues.\\n\\nTo address these issues, the authors introduce two restrictions on the networks: (1) They enforce \\u201cmonotone\\u201d networks, meaning that following the first layer, all synaptic weights are positive. This also holds for the feedback weights. (2) They require that the task in question be a binary classification task. The authors demonstrate analytically that with these restrictions, direct feedback alignment (where the errors are communicated directly to each hidden layer by separate feedback weights) is guaranteed to follow the sign of the gradient. (Importantly, they also show that monotone nets are universal approximators.) Empirically, they back up their analysis by demonstrating that in fully connected networks that obey these two restrictions they can get nearly as good performance as back propagation on training sets, and even better performance on tests sets sometimes. However, they also demonstrate (empirically and analytically) that violating the second requirement (by introducing more classes) leads to divergence from the gradient and major impairments in performance relative to backpropagation.\\n\\nUltimately, I think this is a great paper, and I think it should be accepted at ICLR. It provides some of the first rigorous analysis of feedback alignment since the original paper came out, and unlike those original analyses, it is not restricted to linear networks (which are certainly not universal function approximators). I have looked over the proofs, and they seem to all be correct. As well, I found the paper easy to read, which was nice. 
However, there are a few things that could be done to clarify the contributions and situate the work within the field of biological learning algorithms better:\\n\\n1) Though they do not include rigorous analyses, two previous papers have demonstrated empirically that feedback alignment works extremely well as long as the feedback weights share the same sign as the feedforward weights (see: Moskovitz, Theodore H., Ashok Litwin-Kumar, and L. F. Abbott. \\\"Feedback alignment in deep convolutional networks.\\\" arXiv preprint arXiv:1812.06488 (2018) and Liao, Qianli, Joel Z. Leibo, and Tomaso Poggio. \\\"How important is weight symmetry in backpropagation?.\\\" In Thirtieth AAAI Conference on Artificial Intelligence. 2016). Due to the requirement for monotone networks, this work is also providing a guarantee that the sign of feedforward and feedback weights are the same. That does not subtract substantially from the contributions of this paper, as the provision of the analytical guarantees is important. But, it is important for the authors to consider how their work relates to this past work. For example, could their analytical approach work equally well with nothing more than a sign symmetry guarantee? This should at least be discussed.\\n\\n2) It should be admitted somewhere in the paper that the second requirement on the networks for binary tasks is deeply unbiological. As such, it should be recognized in discussion that this paper provides some important contributions to our understanding of feedback alignment, but does not ultimately move the question of biologically realistic learning forward all that much. Indeed, the discussion at the end about applications notably ignores biology. But, rather than just ignoring it, the biological mismatch should be openly admitted.\\n\\n3) The results with the test sets are a little strange, at least for the tests with larger numbers of categories. In Bartunov et al. (2016), they reported not only better training set results with backprop, but also better test set results generally, than feedback alignment. Are the authors sure that their results, in say, Table 4, are not indicative of insufficient hyperparameter optimization?\", \"small_notes\": [\"Lillicrap et al.\\u2019s paper was eventually published in Nature Communications (see citation above), and the reference should be changed to reflect this.\", \"The discussion on the impact of convolutions could be beefed up a little bit. In particular, it could be discussed relative to the results of Moskovitz et al. (above) who show that convnets work fine with nothing but guaranteed sign symmetry.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents an approach towards extending the capabilities of feedback alignment algorithms, that in essence replace the error backpropagation weights with random matrices. The authors propose a particular type of network where all weights are constraint to positive values except the first layers, a monotonically increasing activation function, and where a single output neuron exists (i.e., for binary classification - empirical evidence for more output neurons is presented but not theoretically supported). This is to enforce that the backpropagation of the (scalar) error signal to affect the magnitude of the error rather than the sign, while preserving universal approximation. The authors also provide provable learning capabilities, and several experiments that show good performance, while also pointing out limitations in case of using multiple output neurons.\\n\\nThe strong point of the paper and main contribution is in terms of proposing the specific network architecture to facilitate scalar error propagation, as well as the proofs and insights on the topic. The proposed network affects only magnitude rather than sign, and the authors demonstrate that it can do better than current FA and match BP performance. This seems inspired from earlier work [1,2] - where e.g., in [2] improvements are observed when feedback weights share the sign but not the magnitude of feedforward nets.\\n\\nSummarizing, I believe that this research is interesting, and can lead to improvements in FA algorithms that could potentially be more biologically plausible, and offer advantages such as full weight update parallelization (although this is more related to the fixed weights rather than the method per-se given my understanding). However, this also seems - at the moment - to be of limited applicability.\\n\\n===\\nFurthermore, the introduction of the network with positive weights in the 2nd layer and on is remiscent of non-negative matrix factorization algorithms. Can the authors establish a link to these methods, where variants with backprop have also been proposed?\\n\\n\\n\\n[1] Xiao W. et al. Biologically-Plausible Learning Algorithms Can Scale to Large Datasets, 2018\\n[2] Qianli Liao, Joel Z Leibo, and Tomaso Poggio. How important is weight symmetry in backprop-agation, AAAI 2016\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"------update------\\n\\nAfter talking to other reviews and reading the rebuttal, I am convinced that the paper contributes sufficiently to the theoretical understanding of the FA algorithm and should be accepted as a conference paper. \\n\\nI hope that, in the next revision, the authors could include more about the limitation of their work and potential alternatives to improve the generosity of the proposed method.\\n\\n-------end of the update------\\n\\nThe paper presented a mono-net which has only positive weights and monotonically increasing functions in between layers except for the first layer, and it can be shown that the proposed mono-net is capable of modelling any continuous functions. The modified Direct Feedback Alignment (DFA) is applied where only signs are used to update the weights. There are several issues and concerns that leads me to the following comments and concerns:\\n\\n(A) Why is it necessary to construct a monotonic layer which is constrained to only be able to approximate monotonic functions? Please elaborate. If the concern is about the gradient update, then there is better way of constructing such a network with limiting it being only capable of modelling monotonic functions.\\n\\n(B) If, given the proof in appendix, that the proposed mono-net is indeed a universal function approximator, then why do all the layers on top of the first layer have to contain only positive weights?\\n\\nFollowing the proof, given a neural network with only one hidden layer and hyperbolic tanh activation functions, as long as the weights in the layer that is after the tanh functions are all non-negative or non-positive, the neural network is also a universal function approximator.\\n\\nSince the chosen activation function monotonically increases in the input domain, the sign of the update calculated by DFA is the same as the gradient calculated by the chain rule. \\n\\nThe aforementioned way of constructing a neural network allows it to be able to model to change the monotonicity of the function in between layers when the two sets of weights in consecutive two layers have opposite signs. Also, the update gives the sign of the gradient calculated by backpropagation. \\n\\n(C) What is the different between the proposed network along with the update rule and a network with only non-negative weights in the layers above the first layer trained with RPROP? They look exactly the same to me. \\n\\nIf so, another perspective of the story is that the paper emposes a non-negative constraint on the weights in a neural network, and this could be used as a baseline.\\n\\n(D) The proof that gives the universal approximation theorem explicitly defines the squashing function/the activation function to be bounded and this paper follows the setting, which is reflected in the experiments where the proposed network with ReLU activation functions don't work well. \\n\\nHowever, the expressiveness of ReLU networks has been shown to be strong and they are also universal approximators. I'd believe that with (B), it might lift the constraints brought by ReLU in the settings proposed by the paper.\\n\\n(E) I am not sure why for now the number of output units matters that much. 
The training algorithm can easily ignore other output units when samples of a specific class is presented, then after learning, the algorithm can rank the output units to make predictions.\"}"
]
} |
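Editor's note on the record above: review #3's point (C) compares the mono-net update rule to RPROP-style sign-based training of a network with non-negative weights above the first layer. The sketch below only illustrates that baseline idea; it is not code from the paper, the class and function names are hypothetical, and true RPROP additionally adapts per-parameter step sizes, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonoNet(nn.Module):
    """Two-layer sketch: free signs in the first layer, non-negative
    second-layer weights (via softplus) and a bounded monotone activation."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.first = nn.Linear(d_in, d_hidden)        # signs unconstrained here
        self.v = nn.Parameter(torch.randn(d_hidden))  # raw params for layer 2

    def forward(self, x):
        h = torch.tanh(self.first(x))   # bounded, monotonically increasing
        w = F.softplus(self.v)          # w >= 0 keeps layer 2 monotone in h
        return h @ w                    # single scalar output per example

def sign_step(params, lr=1e-3):
    """Sign-only update: uses grad.sign() like RPROP, but with a fixed
    step size (RPROP's per-parameter step-size adaptation is omitted)."""
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p -= lr * p.grad.sign()

# toy usage
net = MonoNet(10, 32)
x, y = torch.randn(8, 10), torch.randn(8)
F.mse_loss(net(x), y).backward()
sign_step(net.parameters())
```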
BylTy1HFDS | Deep unsupervised feature selection | [
"Ian Covert",
"Uygar Sumbul",
"Su-In Lee"
] | Unsupervised feature selection involves finding a small number of highly informative features, in the absence of a specific supervised learning task. Selecting a small number of features is an important problem in many scientific domains with high-dimensional observations. Here, we propose the restricted autoencoder (RAE) framework for selecting features that can accurately reconstruct the rest of the features. We justify our approach through a novel proof that the reconstruction ability of a set of features bounds its performance in downstream supervised learning tasks. Based on this theory, we present a learning algorithm for RAEs that iteratively eliminates features using learned per-feature corruption rates. We apply the RAE framework to two high-dimensional biological datasets—single cell RNA sequencing and microarray gene expression data, which pose important problems in cell biology and precision medicine—and demonstrate that RAEs outperform nine baseline methods, often by a large margin. | [
"Single cell rna",
"microarray",
"feature selection",
"feature ranking"
] | Reject | https://openreview.net/pdf?id=BylTy1HFDS | https://openreview.net/forum?id=BylTy1HFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"10I3AhtDV",
"SyejVTsd9S",
"SkeKMsaJqH",
"ByglajI6FS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724471,
1572547890700,
1571965713026,
1571806136109
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1485/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1485/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1485/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes Restricted AutoEncoders (REAs) for unsupervised feature selection, and applies and evaluates it in applications in biology. The paper was reviewed by three experts. R1 recommends Weak Reject, identifying some specific technical concerns as well as questions about missing and unclear experimental details. R2 recommends Reject, with concerns about limited novelty and unconvincing experimental results. R3 recommends Weak Accept saying that the overall idea is good, but also feels the contribution is \\\"severely undermined\\\" by a recently-published paper that proposes a very similar approach. Given that that paper (at ECMLPKDD 2019) was presented just one week before the deadline for ICLR, we would not have expected the authors to cite the paper. Nevertheless, given the concerns expressed by the other reviewers and the lack of an author response to help clarify the novelty, technical concerns, and missing details, we are not able to recommend acceptance. We believe the paper does have significant merit and hope that the reviewer comments will help authors in preparing a revision for another venue.\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper is concerned with unsupervised feature selection, in a lossy compression perspective. Formally, the idea is to select a subset of features that supports the reconstruction of the whole dataset.\\n\\nThe idea is good, but the the same idea (same reconstruction goal, also relying on auto-encoders) was published this year, see here: https://ecmlpkdd2019.org/programme/awards/\\n\\nAlgorithmically speaking, the approach is very close to the above paper; as far as I can tell, the main difference lies in the recursive feature elimination heuristics.\\n\\nI feel that the originality of the paper is thus severely undermined. The authors might want to address this, through:\\n* thorough empirical comparisons (experimental setting, considering other datasets; comparing with other regularization schemes)\\n* examining the stability of the selected features (among runs).\\n\\nThe analysis definitely is a good point of the paper; however, parts of it are straightforward (Thm1).\", \"details\": \"continuity, p.3\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an unsupervised feature selection method by minimizing reconstruction error with restricted autoencoders. The proposed method employs iterative elimination to select features with learned per-feature corruption rates.\\n\\n1) The novelty of the paper is very limited: using reconstruction for unsupervised feature selection has been explore in many papers, such as [1][2][3]. So the proposed method is a bit incremental.\\n\\n2) Time complexity of the proposed method should be discussed.\\n\\n3) Experimental results are not convincing at all due to the following reasons:\\n a) Only one baseline (out of 9) was proposed after 2011. Outperforming these decade old approaches is not very difficult. There has been much progress on unsupervised feature selection and several hundred papers in this area have been published in the past 5 years. The author should include more recent state-of-the-art baselines.\\n b) When tuning the hyper-parameters (e.g., \\\\lambda) for the proposed method and baseline methods (UDFS) on validation dataset, the author should list the value range used in the parameter search. Also, as unsupervised feature selection methods, using validation dataset (which has supervision labels) to choose parameters is not a fair way, as it does use supervision information.\\n c) Unsupervised feature selection papers in the past 5 years typically use 6~8 datasets to demonstrate the effectiveness while this paper only shows results on 2 datasets.\\n\\nGiven the reasons above, this paper needs improvement in many aspects and not ready to publish in its current form.\\n\\n\\n[1] Zhu et al. Unsupervised feature selection by regularized self-representation\\n\\n[2] Yang et al. Unsupervised Feature Selection Based on Reconstruction Error Minimization\\n\\n[3] Li et al. Reconstruction-based Unsupervised Feature Selection: An Embedded Approach\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors of this paper propose the restricted autoencoder (RAE) framework for selecting features that can accurately reconstruct the rest of features. Authors justify the proposed method via the proof that the reconstruction ability of a set of features bounds its performance in downstream supervised learning tasks. The algorithm that iteratively eliminates features using learned per-feature corruption rates is proposed.\\n\\nThe fundamental of this paper is built on the argument that the optimal approach is to select a set of features that can accurately reconstruct all the remaining features for the settings where they will be used in downstream prediction tasks. Authors studied the performance losses of linear and nonlinear models by using the defined imputation losses. Some concerns are listed as follows:\\n1. the theorem based on strong assumptions that all learned models are optimal. The applicability of the theoretical results to general prediction model is still questionable. \\n2. only the prediction problems of least square (linear or nonlinear) are studied. It is not equivalent to the downstream supervised learning tasks. It is just a special case study.\\n3. It is unclear how to get the conclusion from Theorem 1 that the linear imputation loss is equal to the sum of eigenvalues. Please clarify it in details.\\n\\nThe RFE-like algorithm is used to solve (7). However, the sensitivity measures used in Algorithm 1 seems to take the different optimization problems since additional regularization terms are added. This is different from RFE where a single SVM optimization problem is used and the ranking score is solely based on the learned SVM classifier. The discussion on the inconsistency of learning h_{\\\\theta} and the sensitivity measures could be interesting. \\n\\nIn the experiments, authors did not mention the parameter settings of all compared methods. It is known that the unsupervised feature selection methods incorporate priors with usually various parameters. For fair comparisons, it is better to report the properly tuned results since these parameters are often data-dependent.\"}"
]
} |
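As a rough illustration of the reconstruction objective debated in the RAE reviews above (select features that can reconstruct the rest), here is a minimal greedy backward-elimination sketch using plain least squares. This is my own simplification, not the paper's RAE algorithm, which instead uses an autoencoder with learned per-feature corruption rates.

```python
import numpy as np

def select_features(X, k):
    """Greedy backward elimination: keep the k columns of X that best
    reconstruct the full matrix with ordinary least squares.
    Note: O(d^2) least-squares solves, so this is for illustration only."""
    n, d = X.shape
    keep = list(range(d))
    while len(keep) > k:
        errs = []
        for j in keep:
            cand = [c for c in keep if c != j]
            S = X[:, cand]
            # reconstruct all features from the candidate subset
            W, *_ = np.linalg.lstsq(S, X, rcond=None)
            errs.append(((S @ W - X) ** 2).mean())
        # drop the feature whose removal hurts reconstruction the least
        keep.remove(keep[int(np.argmin(errs))])
    return keep

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
print(select_features(X, k=3))
```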
Skeh1krtvH | WaveFlow: A Compact Flow-based Model for Raw Audio | [
"Wei Ping",
"Kainan Peng",
"Kexin Zhao",
"Zhao Song"
] | In this work, we present WaveFlow, a small-footprint generative flow for raw audio, which is trained with maximum likelihood without complicated density distillation and auxiliary losses as used in Parallel WaveNet. It provides a unified view of flow-based models for raw audio, including autoregressive flow (e.g., WaveNet) and bipartite flow (e.g., WaveGlow) as special cases. We systematically study these likelihood-based generative models for raw waveforms in terms of test likelihood and speech fidelity. We demonstrate that WaveFlow can synthesize high-fidelity speech and obtain comparable likelihood as WaveNet, while only requiring a few sequential steps to generate very long waveforms. In particular, our small-footprint WaveFlow has only 5.91M parameters and can generate 22.05kHz speech 15.39 times faster than real-time on a GPU without customized inference kernels. | [
"flow-based models",
"raw audio",
"waveforms",
"speech synthesis",
"generative models"
] | Reject | https://openreview.net/pdf?id=Skeh1krtvH | https://openreview.net/forum?id=Skeh1krtvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oIAHH8t1m8A",
"TzL9ldw0Ni",
"BklOXBLior",
"HyesGurosr",
"HJxCxI95sr",
"SJloae9gcS",
"SkeuG_BAYB",
"rylJ0A96Yr",
"BygtvEDMYH",
"S1eUdXwzFB",
"rkxW1GDftH",
"BJlQPXYZ_B",
"HkgjptwsvS"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1587680905783,
1576798724443,
1573770528042,
1573767187115,
1573721589896,
1572016323383,
1571866639996,
1571823302601,
1571087457487,
1571087213986,
1571086809359,
1569981274652,
1569581506958
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1484/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1484/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1484/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1484/Authors"
],
[
"~Yi_Ren2"
],
[
"~Joan_Serrà1"
]
],
"structured_content_str": [
"{\"title\": \"[DEPRECATED] This version is outdated\", \"comment\": \"We have released a new version of this paper on arXiv ( https://arxiv.org/abs/1912.01219 ), which is accepted to ICML 2020.\"}",
"{\"decision\": \"Reject\", \"comment\": \"The paper presented a unified framework for constructing likelihood-based generative models for raw audio. It demonstrated tradeoffs between memory footprint, generation speech and audio fidelity. The experimental justification with objective likelihood scores and subjective mean opinion scores are matching standard baselines. The main concern of this paper is the novelty and depth of the analysis. It could be much stronger if there're thorough analysis on the benefits and limitations of the unified approach and more insights on how to make the model much better.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Many thanks for your review; the feedback is helpful to improve our paper.\\n\\n** After the submission, we have made two improvements: 1) We find that the split & reverse operations for stacking multiple flows are more effective than reverse-only operations (see details in Section 3.4 in our revision). Our small-footprint WaveFlow now obtains larger test likelihood (see Table 3 in the revision) and improved speech fidelity (see Table 5 in the revision). 2) We implement convolution queue (Paine et al., 2016), which brings additional 3x to 5x speedup for WaveFlow models. As a result, our small-footprint WaveFlow (5.91M parameters) can now generate 22.05kHz high-fidelity speech (MOS: 4.32) more than 40x faster than real-time (faster than WaveGlow), which provides a promising neural vocoder. We also recommend setting h = 16 for neural vocoding task, because it provides better speech fidelity and its synthesis speed is only marginally slower than h = 8 with the help of convolution queue. **\\n\\nWe will address your comments in the following.\\n\\n- \\u201cOn one hand, we may catch the contributions. But, on the other hand, the contributions were not clearly explained. The results were averaged but were not clearly explained.\\u201d\\n* We can summarize our contributions in two points: \\n(1) We propose a novel and unified framework for constructing likelihood-based generative models for raw audio, which includes previous approaches (WaveNet and WaveGlow) as special cases. We demonstrate the trade-off between memory footprint, generation speed and audio fidelity within the framework.\\n(2) The resulting small WaveFlow is a compelling neural vocoder. In comparison with WaveGlow, it requires much fewer parameters (5.91M vs. 87.88M) to generate high fidelity speech (MOS: 4.32 vs. 4.34). Its synthesis speed is also slightly faster (42.60x vs. 34.69x). In comparison with WaveNet, WaveFlow models are significantly faster at synthesis.\\n\\n- \\u201cThe property of getting better performance using deeper wavenet was \\\"not\\\" clearly explained and investigated.\\u201d \\n* We only test 30-layer WaveNet in the paper. We think this question was perhaps raised for flow-based models. In Table 4, we investigate WaveFlow with 6x8 = 48 and 8x8 = 64 layers (e.g., row-(l) vs. row-(m)), and WaveGlow with 6x8 = 48 and 12x8 = 96 layers (e.g., row-(e) vs. row-(f)), respectively. The models stacked with larger number of flows (i.e., deeper layers) consistently provide better likelihood. This property is also well known in normalizing flow literature (e.g., [1]). We have added details in Section 5.1.\\n\\n[1] Rezende and Mohamed. Variational inference with normalizing flows. ICML, 2015.\\n\\n- \\u201cThis paper mentioned that using convolution queue could improve the synthesis speed. But, the synthesis speed has been fast enough because it is almost 15 times faster than real time. In practical applications, 100x faster is almost the same as 15x faster for humans. But, the task isn\\u2019t interacted with human. It is suggested to focus on reducing the number of parameters or enhancing the log likelihood.\\u201d\\n* From human perceptual perspective, 15x faster and 40x faster (our new result) than real-time has minor difference. However, the convolution queue removes redundant calculation at synthesis, which will also improve system throughput in practical applications. 
We do agree on that reducing parameters or enhancing the log likelihood is very important for flow-based models. The previously mentioned split & reverse operation is a new endeavor after the submission. Note that, there is still significant likelihood gap that has so far existed between autoregressive models and flow-based models [2]. Our proposed model can close the gap with larger squeezing factor h (e.g., h = 64 in Table 4), or increased model size. \\n\\n[2] Ho et al. Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design. ICML 2019.\"}",
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Many thanks for your detailed review; they are really helpful to improve the quality of our paper.\\n\\n** After the submission, we have made two improvements: 1) We find that the split & reverse operations for stacking multiple flows are more effective than reverse-only operations (see details in Section 3.4 in our revision). Our small-footprint WaveFlow now obtains larger test likelihood (see Table 3 in the revision) and improved speech fidelity (see Table 5 in the revision). 2) We implement convolution queue (Paine et al., 2016), which brings additional 3x to 5x speedup for WaveFlow models. As a result, our small-footprint WaveFlow (5.91M parameters) can now generate 22.05kHz high-fidelity speech (MOS: 4.32) more than 40x faster than real-time (faster than WaveGlow), which provides a very promising neural vocoder. We also recommend setting h = 16 for neural vocoding task, because it provides better speech fidelity and its synthesis speed is only marginally slower than h = 8 with the help of convolution queue. **\\n\\nWe will address your detailed comments in the following.\\n\\n- \\u201cIn the subjective evaluation section (5.2), Table 5 is hard to decipher, especially given that there are three measurements to take into account, so it's not easy to see the benefit of the approach.\\u201d\\n* This is a good point. In comparison with WaveGlow, WaveFlow requires much fewer parameters (5.91M vs. 87.88M) to generate comparable fidelity speech (MOS: 4.32 vs. 4.34). Its synthesis speed is also slightly faster (42.60x vs. 34.69x). In comparison with WaveNet, WaveFlow models synthesize speech significantly faster (e.g., 42.60x vs. 0.002x). We will emphasize the benefit of our approach in the final draft. We will also try to organize the results in a clearer way. Many thanks for your nice suggestions.\\n\\n- \\u201cIn the same section, is the WaveNet model the original one, or the Parallel WaveNet ? if it's the original, why not include Parallel WaveNet in the table?\\u201d\\n* It is the autoregressive WaveNet. Note that, reproducing Parallel WaveNet, which produces high fidelity speech on public dataset, is so far beyond the capability of open source community. For example, here is some related discussion ( https://github.com/r9y9/wavenet_vocoder/issues/7 ).\\n\\nAlso, many thanks for pointing out the typo. We have fixed it.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": [\"Thank you so much for the detailed comments and suggestions; they are really helpful to improve the quality of our paper.\", \"** After the submission, we have made two improvements: 1) We find that the split & reverse operations for stacking multiple flows are more effective than reverse-only operations (see details in Section 3.4 in the revision). Our small-footprint WaveFlow now obtains larger test likelihood (see Table 3 in our revision) and improved speech fidelity (see Table 5 in the revision). 2) We implement convolution queue (Paine et al., 2016), which brings additional 3x to 5x speedup for WaveFlow models. As a result, our small-footprint WaveFlow (5.91M parameters) can generate 22.05kHz high-fidelity speech (MOS: 4.32) more than 40x faster than real-time (faster than WaveGlow). Now, it is a very promising neural vocoder. We also recommend setting h = 16 for neural vocoding task, because it provides better fidelity of audio and its synthesis speed is only marginally slower than h = 8 with the help of convolution queue. **\", \"We will address your detailed comments in the following.\", \"\\u201cThe submission would have benefited from discussion about model complexity/expressivity and it's impact on MOS for WaveFlow, WaveNet and other approaches. \\u201d\", \"Many thanks for this great suggestion. We measure the complexity/expressivity of generative models in terms of likelihood. We find a positive correlation between test likelihood and MOS score for these likelihood-based models (see Figure 3 in the updated version of the submission). In general, larger likelihood implies higher fidelity of speech. Indeed, we use the likelihood score as a performance indicator for designing WaveFlow. We will include more discussion in our final draft.\", \"\\u201c1) lack of proper technical description of your model in sections 1 and 2 making reading sections 1,2,3,etc in order awkward. It seems the order should be 3,4,(5),1,2,(5). \\u201d\", \"Thank you for this nice suggestion. We have reorganized the related work section after the technical description of our model in the revision.\", \"-\\u201c2) complete omission of conditioning on text to be synthesised; anyone not familiar deeply with speech synthesis will wonder where does the text come in\\u201d\", \"This is a good point. We have added Section 3.3 to provide the details of conditioning on text.\", \"-\\u201c3) explicit statement of complexity for the operations involved using proper big-O notation; helps to avoid confusion about what do you mean by \\\"parallel\\\" (autoregressive WaveNet followed by parallel computation != parallel computation) \\u201d\", \"Thank you for your suggestion. We have explicitly stated the complexity for the operations using big-O notation in our revision.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper re-organized the high dimensional 1-D raw waveform as 2-D matrix. This method simulated the autoregressive flow. Log-likelihood could be calculated in parallel. Autoregressive flow was only run on row dimension. The number of required parameters was desirable to synthesize high-fidelity speech with the speed faster than real time. Although this method could not achieve top one in ranking in every measurements, the resulting performance was still obtained with the best average results.\\n\\nIn general, this paper is clearly written, well organized and easy to follow. The authors carried out sufficient experiments and analyses, and proposed some rules of thumb to build a good model. On one hand, we may catch the contributions. But, on the other hand, the contributions were not clearly explained. The results were averaged but were not clearly explained.\\n\\nThe authors suggested to specify a bigger receptive field than the squeezed height. The property of getting better performance using deeper wavenet was \\\"not\\\" clearly explained and investigated. In the experiments, a small number of generative steps was considered. This is because short sequence based on autoregressive model was used. \\n\\nThis paper mentioned that using convolution queue could improve the synthesis speed. But, the synthesis speed has been fast enough because it is almost 15 times faster than real time. In practical applications, 100x faster is almost the same as 15x faster for humans. But, the task isn\\u2019t interacted with human. It is suggested to focuse on reducing the number of parameters or enhancing the log likelihood.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This submission belongs to the field of text-to-speech synthesis. In particular it looks at a novel way of formulating a normalising flow using 2D rather than conventional 1D representation. Such reformulation enables to provide interpretations to several existing approaches as well as formulate a new one with quite interesting properties. This submission would benefit from a discussion of limitations of your approach.\\n\\nI believe there is a great deal of interest in the use of normalising flows in the text-to-speech area. I believe this submission could be a good contribution to the area. The test log-likelihoods look comparable to existing approaches with significantly worse inference times. The mean opinion scores (MOS) seem to approach one of the standard baselines with significantly worse inference times though at the expense of increasing the number of model parameters from 6M to 86M parameters whilst gaining only 0.2 in MOS. The submission would have benefited from discussion about model complexity/expressivity and it's impact on MOS for WaveFlow, WaveNet and other approaches.\", \"the_largest_issues_with_this_submission_are\": \"1) lack of proper technical description of your model in sections 1 and 2 making reading sections 1,2,3,etc in order awkward. It seems the order should be 3,4,(5),1,2,(5). \\n2) complete omission of conditioning on text to be synthesised; anyone not familiar deeply with speech synthesis will wonder where does the text come in\\n3) explicit statement of complexity for the operations involved using proper big-O notation; helps to avoid confusion about what do you mean by \\\"parallel\\\" (autoregressive WaveNet followed by parallel computation != parallel computation)\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"## Updated review\\n\\nI have read the rebuttal. The new version of the paper is definitely clearer, especially the contribution section and the experimental results. The new version addresses all my concerns, hence I am upgrading my rating to Accept.\\n\\n## Original review\\n\\nThis paper presents the WaveGlow model, a generative model for raw audio. The model is based on a 2D-matrix approach, which allows to generate the audio with a fixed amount of step. The model is shown to be a generalization of the two main approaches for raw audio generation, autoregressive flow and bipartite flow. The model is evaluated and compared with related work on an objective evaluation (Log-likelihood) and a subjective evaluation (MOS), and is shown to be a trade-off between memory footprint, generation speed and quality.\\n\\nI think this paper should be accepted, for the following reasons:\\n- The theoretical framework presented is novel and significant, as it provides a unified view of the two main approaches for neural waveform generation.\\n- The experiments are reasonably convincing, although they could be improved.\", \"detailed_comments\": [\"In the subjective evaluation section (5.2), Table 5 is hard to decipher, especially given that there are three measurements to take into account, so it's not easy to see the benefit of the approach. Maybe the results should be organised differently, for instance grouping them according to one measurement could help, typically showing what speed and MOS each of the three models can achieve for a given model size. Maybe plotting speed vs MOS for the same model size could also be interesting.\", \"In the same section, is the WaveNet model the original one, or the Parallel WaveNet ? if it's the original, why not include Parallel WaveNet in the table ?\", \"Typo at the end of Section 1: \\\"We orgnize\\\" -> \\\"organize\\\"\"]}",
"{\"comment\": \"Hi Yi, thanks for your interest in our work. Your work is also interesting and we will reference it in the final version of this paper.\", \"title\": \"Thanks for your comment\"}",
"{\"comment\": \"Hi Joan, thank you for your interest in our work. You paper is also interesting and we will reference it in a future version of our paper.\", \"title\": \"Thank you for your comment\"}",
"{\"comment\": \"After this submission, we have implemented convolution queues (Paine et al., 2016) to cache the intermediate hiddens within WaveFlow for autoregressive inference over the height dimension. It can bring significant speed-up over vanilla implementation depending on the squeeze size h on height. In particular, our small-footprint WaveFlow can generate 22.05kHz speech 47.61 times faster than real-time. The updated synthesis speed results are as follows:\\n\\nModel residual channels # param synthesis speed\\nWaveFlow (h=8) 64 5.91M 47.61x\\nWaveFlow (h=16) 64 5.91M 42.60x\\nWaveFlow (h=8) 96 12.78M 29.09x\\nWaveFlow (h=8) 128 22.25M 23.44x\\nWaveFlow (h=8) 256 86.18M 9.09x\", \"title\": \"Further speed-up at synthesis with convolution queues (Paine et al., 2016)\"}",
"{\"comment\": \"Quite interesting work!\\n\\nAnd I would greatly appreciate it if you would cite our FastSpeech (NeurIPS 2019) paper which significantly speeds up the mel-spectrogram generation with non-autoregressive architecture.\", \"fastspeech\": \"Fast, Robust and Controllable Text to Speech: https://arxiv.org/abs/1905.09263\", \"title\": \"Related work\"}",
"{\"comment\": \"Cool work!\", \"perhaps_the_authors_could_also_be_interested_on_our_paper_https\": \"//arxiv.org/abs/1906.00794\", \"title\": \"Audio flows paper\"}"
]
} |
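Background sketch for the WaveFlow thread above: the convolution queue the authors mention (Paine et al., 2016) caches just enough past hidden states per dilated layer that each new sample costs one small computation per layer instead of recomputing the whole receptive field. The code below is a minimal illustration under my own simplifications (kernel size 2, per-layer linear mixing, hypothetical names), not the authors' implementation.

```python
from collections import deque

import torch

class ConvQueue:
    """FIFO cache of past hidden states for one dilated layer: each new
    timestep reads the state from `dilation` steps back in O(1)."""
    def __init__(self, dilation, channels):
        self.buf = deque([torch.zeros(channels)] * dilation, maxlen=dilation)

    def step(self, h_new):
        h_past = self.buf[0]     # state from `dilation` steps ago
        self.buf.append(h_new)   # deque evicts the oldest entry
        return h_past

def ar_step(x_t, layers, queues):
    """One autoregressive step through a stack of kernel-size-2 dilated
    layers, modeled here as linear maps over the [past, current] states."""
    h = x_t
    for layer, q in zip(layers, queues):
        h_past = q.step(h)
        h = torch.tanh(layer(torch.cat([h_past, h])))
    return h

# toy usage: 3 layers with dilations 1, 2, 4 and 64 channels
layers = [torch.nn.Linear(2 * 64, 64) for _ in range(3)]
queues = [ConvQueue(2 ** i, 64) for i in range(3)]
h_out = ar_step(torch.zeros(64), layers, queues)
```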
Ske31kBtPr | Mathematical Reasoning in Latent Space | [
"Dennis Lee",
"Christian Szegedy",
"Markus Rabe",
"Sarah Loos",
"Kshitij Bansal"
] | We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space. The set of rewrites (i.e. transformations) that can be successfully performed on a statement represents essential semantic features of the statement. We can compress this information by embedding the formula in a vector space, such that the vector associated with a statement can be used to predict whether a statement can be rewritten by other theorems. Predicting the embedding of a formula generated by some rewrite rule is naturally viewed as approximate reasoning in the latent space. In order to measure the effectiveness of this reasoning, we perform approximate deduction sequences in the latent space and use the resulting embedding to inform the semantic features of the corresponding formal statement (which is obtained by performing the corresponding rewrite sequence using real formulas). Our experiments show that graph neural networks can make non-trivial predictions about the rewrite-success of statements, even when they propagate predicted latent representations for several steps. Since our corpus of mathematical formulas includes a wide variety of mathematical disciplines, this experiment is a strong indicator for the feasibility of deduction in latent space in general. | [
"machine learning",
"formal reasoning"
] | Accept (Talk) | https://openreview.net/pdf?id=Ske31kBtPr | https://openreview.net/forum?id=Ske31kBtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_xQ9Hvor9M",
"HyxcG0Phor",
"ryx8LTFijr",
"r1eZtFVKir",
"BJeJJl0doB",
"H1g9CATusr",
"BJeoaTpujH",
"BkgqHnp_oS",
"rJxGlK6Ttr",
"Bkx-GQrhtS",
"HJl5cVvoYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724414,
1573842450358,
1573784910013,
1573632376881,
1573605334565,
1573605074229,
1573604803403,
1573604417566,
1571834089775,
1571734280924,
1571677330332
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1483/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1483/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1483/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1483/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1483/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1483/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper was very well received by the reviewers with solid Accept ratings across the board.\\nThe subject matter is quite interesting - mathematical reasoning in latent space, and it was suggested by a reviewer that this could be a good candidate for an oral. The AC agrees and recommends acceptance as an oral. Some of the intuitions of what is being done in this paper could be better visualized and presented and I encourage the authors to think carefully about how to present this work if an oral presentation is granted by the PCs.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviewer #3 Response\", \"comment\": \"Thank you for the response and the paper revision. Now it looks more readable! I have raised my score accordingly.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for the response. The updated version of the paper clarified the questions I had.\"}",
"{\"title\": \"Discussion of Review#1\", \"comment\": \"Thank you for these changes, which are greatly improving the readability of the paper. I especially appreciate Fig. 5, which makes the experiment design much easier to grasp, and the new analysis in Sect. 6.5. The fact that the new, simpler model architecture works slightly better than the original two-network design is a nice new development as well, and I'm grateful that you included this (substantial, but important) change in the revision. With these improvements, I'm happy to raise my score.\"}",
"{\"title\": \"Updates to the paper\", \"comment\": \"Addressing the reviewers' suggestions and concerns, we added further experimental results and made significant clarifications to the paper, while striving to keep the broad ideas and findings intact. In summary, the following changes have been made:\", \"1_introduction\": \"Added overview of latent space prediction (Fig 1)\\n\\n4.1 Training Data:\\nAdded statistics of training data.\\n\\n4.2 Model Architecture and Training Methodology:\\nRewritten with a simpler, more effective new model architecture. Previous architecture moved to (5 Alternative Model Architectures). Updated corresponding plots. \\nAdded short description of GNN encoder.\", \"5_alternative_model_architectures\": \"New section containing the previous architecture and explaining why we used it. \\nDescription of other possible configurations.\\nExperimental comparison with alternatives.\\n\\n6 Experiments\\nAdded figure explaining generation of different curves (Fig 5)\\nUpdated all experiment curves to 9 steps of rewrites (Fig 6, 7, 8)\\n\\n6.1 Neural Network Architecture Details:\\nClarified batching procedure for training data.\\nAdded description of hard negative mining.\\n\\n6.2 Evaluation Dataset:\\nExplain improved sampling method (for evaluation data only) which better preserves complex statements. \\n\\n6.3 Evaluation of Rewrite Prediction Model:\\nMore detailed explanation of experiments.\\nMoved baselines that were not baselines out of itemized section\\n\\n6.4 Comparison of Alternatives:\\nAdded experiments comparing the alternative model architectures described in section 5.\\n\\n6.5 Evaluation of Rewriteability:\\nAdded experiment showing the model\\u2019s performance in key cases and a common source of failures.\\n\\n6.6 Visualization of Latent Spaces:\\nAdded description of plots and analysis\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank the reviewer for the great feedback. We have simplified the architecture described in the paper by combining the networks $\\\\sigma$ and $\\\\omega$, and included the results from this architecture as well, producing a more robust architecture that performs better for multiple rewrite steps (while keeping the original, more complicated solution as one of the baselines).\\n\\nAs suggested, we have added further analysis of failure cases. We also corrected the typos and clarified the definitions of True, Pred (One Step) and Pred (Multi Step) variants.\\n\\nWe are very grateful for the review that helped to improve the paper significantly.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank the reviewer for the constructive feedback. The use of a fixed embedding space $L$ and a separate space $L^\\\\prime$ was useful as it naturally prevents the collapse of embeddings. However this could be counteracted by stopping the gradient at the right place in the simplified architecture which was suggested in the original paper and is now described in the updated paper.\\n\\nAs suggested, we have added further analysis of failure cases, and describe strategies for negative mining from these examples. In addition, we have included a brief description of the graph neural network architecture used in Paliwal et al (2019). We also include further details on the construction of training set. \\n\\nTraining a decoder to predict the results of rewrites from the latent space is an interesting idea, but is technically challenging and we felt it was out of scope for this paper. We managed to counteract the noisiness of predicted embedding by training on noisy embeddings which trains the network to be robust to random changes and improves the prediction of multi-step rewrites significantly. \\n\\nWe are grateful for the suggestions that contributed significantly to improving the quality of the paper.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We are thankful for the valuable feedback. The main concern of this review is the quality of the writing and experimental details. We updated the paper to clarify all the names and ensure that all terms are introduced before they are used.\\n\\nThe two tower design was necessary since the decision whether theorem T can be rewritten using parameters P requires both pieces of information, so we need to feed them to the network. In fact $\\\\omega$ does not need to predict p, but it gives extra supervision signal and therefore regularizes the prediction. The random baseline is necessary because of the unbalanced nature of the rewrite success, this is hard to control, so we added an extra baseline that shows that our results are better than just ignoring any of the input expressions (theorem or parameter).\\n\\nIn addition, we have simplified the architecture described in the paper by combining the networks $\\\\sigma$ and $\\\\omega$, and included the results from this architecture. \\n\\nWe have significantly improved the experimental section by further clarifying the experiments and expanding them with more supporting measurements. We have also moved the two non-baselines out of the baselines section.\\n\\nFinally, thank you for the insightful questions! With our current setup, the goal is to simply perform reasoning steps in latent space without specifically proving any statements. There are several approaches to make the network predict a closed goal, for example by predicting a fixed embedding such as the zero vector. \\n\\nWe expect that most semantic aspects of the formula could be recovered, but not superficial features as the naming of the variables should not affect the rewriteability of formulas. The question how much of the formula can be recovered is probably dependent on the theorem database, since only those aspects that manifest in different rewrite successes are expected to be recovered.\\n\\nWe don't have much intuition on the decomposability of embeddings, but it seems like a fascinating research direction.\\n\\nWe are grateful for the feedback which has helped to make the paper much clearer and more readable.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a technique to perform reasoning on mathematical formulas in a latent space. The model is trained to predict whether a rewrite rule can be applied to a formula given its latent representation. When the rewrite is possible, the model also predicts the embedding of the resulting formula. Experiments show that the network can be applied multiple steps in a row, while operating only in the embedding space.\\n\\n1. As mentioned in the paragraph before Section 4.1, it would be much simpler to consider a single latent embedding space L. In that case, \\\\sigma and \\\\alpha become unnecessary and we only need to train \\\\omega. Did you try to have a single network? This seems a much more natural approach to me, and I'm surprised that you did not start with that. From my experience, aligning embedding spaces is something that usually does not work very well, especially in high dimension. The role of \\\\sigma seems very redundant given \\\\omega.\\n\\n2. If you consider \\\\sigma, why do you also predict the rewrite success with \\\\omega? Couldn't it be simply a function from S x S -> L ?\\n\\n3. The graph neural networks used in the model are not described in the paper, only a reference to Paliwal et al (2019) is given. It would be helpful to have a brief paragraph describing this architecture, for readers not familiar with the referenced paper.\\n\\n4. How large is the training set of (T, P) pairs? I don't think this is mentioned in the paper.\\n\\n5. To train \\\\sigma and \\\\omega, the negative instances are selected randomly. You mention that negative mining should improve over this strategy. What does negative mining correspond to in this context? Are there bad rewrites better than others?\\n\\n6. Did you consider using an inverse function (say G), that maps an embedding in L / L' back to S (i.e. the inverse function of gamma / gamma'). I would imagine that even if an embedding X is a bit noisy, because not exactly equal to gamma(P) where P is the expression it represents, you could consider doing the propagation with gamma(G(X)). This could be a possibility to remove the noise you have when doing multi-step operations (and potentially go way beyond 4 steps). Also, G could be used to check whether you obtain the expected formula after 4 steps, which would be a more informative information than the L2 distance between the resulting embedding and the embedding of the final formula.\\n\\nOverall, the model is a bit complicated (e.g. question 1.), but the results are promising, the paper is well written, and the ability to manipulate formula embeddings is probably going to be useful in the context of theorem proving.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a method to do math reasoning purely using formula embeddings. The proposed method employs a graph neural network to embed math formulas to a latent space. The formula embeddings are then combined with theorem embeddings (also formulas, computed in the same way as formula embeddings) to predict whether one can do one step of math reasoning using the corresponding theorem, and also to predict the embeddings of the resulting formula. Empirically the authors demonstrate that the method can be chained end-to-end to do multiple steps of reasoning purely in the latent space.\\n\\nI tend to accept this paper, (but also OK if it gets rejected), for the following reasons: (1) the idea is novel and interesting; (2) the writing of the paper is below conference standard and very hard to read, especially the method and the experiment sections.\\n\\n===========================================================================\\n\\nNovelty and significance\\n\\nI really like the idea of doing math reasoning in latent space. The idea is definitely novel and interesting. It is related to existing works such as neural logic induction[1] and planning in latent space[2]. It is amazing that one can do multiple steps of math reasoning after only training the model using data from one single step. It would be interesting to see how it can improve existing learning-based theorem provers.\\n\\nMy question is if we want to integrate the proposed method into theorem provers, after multiple steps of math reasoning, how would us know the goal has been proved? Is it possible that we can train a decoder that maps back from the latent space to the formula space? Also can it work with theorems that decompose the current goal into several sub-goals? I know these are not the concerns of this paper, but I would be really grateful if you could provide some intuitive answers!\\n\\n===========================================================================\\n\\nWriting\\n\\nThe paper is not well-organized and not written in a consistent way. For the method and the experiment sections, I need to jump back and forth several times in order to understand what the authors are trying to say.\\n\\n1. Typo: Third paragraph in section 1, \\\"...which is makes use of ...\\\".\\n2. It's very confusing when the authors introduce \\\\sigma and \\\\omega in the beginning of section 4: why would you need two networks predict the same thing?\\n3. Mentioning \\\"merging \\\\sigma and \\\\omega, is left for future work\\\" is confusing before formally introducing \\\\sigma and \\\\omega.\\n4. Even when the authors formally introduce \\\\sigma and \\\\omega in 4.2, it is still not clear that why both of them are used for modelling the success probability.\\n5. In fact, I don't know why \\\\omega needs to output p. It's never mentioned in the experiment section.\\n6. The rationale of the two tower design (why not combine two) is not clearly explained.\\n7. Typo: Page 5 last paragraph, \\\"... negative instances for for each ...\\\".\\n8. The itemized part in 5.3, \\\"...carefully selected baselines: 1.xxx, 2.xxx, 3. xxx, 4. xxx\\\". However, both 3 and 4 are not baselines!\\n9. It is not clear that baseline 1 and 2 correspond to which baselines in later experiments.\\n10. 
Reading the baselines before the experiments is very confusing. For example, for baseline 1, it is very hard to understand why would we want to use such an unusual baseline, and why it is called a \\\"random baseline\\\".\\n11. Baseline 2 is actually referred to as \\\"usage baseline\\\" but this name is not introduced in the itemized part.\\n\\n\\n\\n[1] Rockt\\u00e4schel, Tim, and Sebastian Riedel. \\\"End-to-end differentiable proving.\\\" Advances in Neural Information Processing Systems. 2017.\\n[2] Srinivas, Aravind, et al. \\\"Universal planning networks.\\\" arXiv preprint arXiv:1804.00645 (2018).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": [\"= Summary\", \"Embeddings of mathematical theorems and rewrite rules are presented. An in-depth analysis of the resulting embeddings is presented, showing that a network can learn to \\\"apply\\\" embedded rewrite rules to embedded theorems, yielding results that are similar to the embedding of the rewritten theorem. [i.e., app'(emb(thm), emb(rule)) is near to emb(app(thm, rule))] This is an interesting property for the application of deep learning to automated theorem proving, though not directly a breakthrough result.\", \"= Strong/Weak Points\", \"Simply a cute result, showing that proof search can remain in embedding space for a limited time horizon without having to switch back into the theorem prover environment.\", \"Nicely designed experiments testing this (somewhat surprising) property empirically\", \"Missed opportunity of better analysis of which theorem/rewrite rule properties are more likely to fail\", \"Writing sometimes a bit overcomplicated (e.g., Sect. 4.5 could just be a figure of a commuting diagram and two sentences...)\", \"Architecture choice unclear: Why are $\\\\sigma$ and $\\\\omega$ separate networks. This is discussed on p4, but it's unclear to me how keeping $\\\\sigma$ separate is benefitial for the analysis, and this is not picked up again explicitly again?\", \"= Recommendation\", \"Overall, this is a nice, somewhat surprising result. The writing and experiments could use some improvement, but I believe that the majority of the ICLR audience would enjoy seeing this result (even though it would have no impact on most people's research)\", \"= Detailed Comments\", \"page 4, Sect. 4.4: Architecture of $\\\\alpha$ would be nice (more than a linear layer?)\", \"page 5, paragraph 3: \\\"we from some\\\" -> \\\"we start from some\\\"\", \"p6par1: \\\"much cheaper then computing\\\" -> than\", \"p6par6: \\\"on formulas that with\\\" -> no that\", \"p6par7: \\\"measure how rate\\\" -> \\\"measure the rate\\\"\", \"p8par1: \\\"approximate embedding $\\\\alpha(e(\\\\gamma'(...)))$ - $e$ is undefined and should probably be $e'$ (this is also the case in the caption of Fig. 5), and $c'$ should probably be included as well. However, I don't understand the use of $\\\\alpha$ here. If Fig. 4 is following Fig. 3 in considering $p(c(\\\\gamma(T), \\\\pi(P)))$, then Fig. 4 should plot the performance of, e.g., $p(c(e'(c'(\\\\gamma'(T_{i-1}), \\\\pi'(P_{i-1}))), \\\\pi(P_i)))$ (i.e., $p$ applied to approximate embedding of $T_i$ and (\\\"true\\\") embedding of $P_i$). I believe that's what \\\"Pred (One Step)\\\" expresses, but it would maybe be generally helpful to be more precise about the notation in Sect. 6.\"]}"
]
} |
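To make the latent-space reasoning discussed in the thread above concrete, here is a minimal sketch of the single-network variant the authors describe: from a theorem embedding and a rewrite-parameter embedding, jointly predict rewrite success and the embedding of the rewritten theorem, then feed the predicted embedding back in. Sizes and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LatentRewriter(nn.Module):
    """From (theorem embedding, rewrite-parameter embedding), predict a
    rewrite-success logit and the embedding of the rewritten theorem."""
    def __init__(self, dim=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU())
        self.success = nn.Linear(256, 1)
        self.next_emb = nn.Linear(256, dim)

    def forward(self, t_emb, p_emb):
        h = self.trunk(torch.cat([t_emb, p_emb], dim=-1))
        return self.success(h), self.next_emb(h)

def rollout(model, t_emb, p_embs):
    """Approximate deduction in latent space: feed each predicted
    embedding back in as the next theorem embedding."""
    for p_emb in p_embs:
        _, t_emb = model(t_emb, p_emb)
    return t_emb

# toy usage: four rewrite steps without ever decoding back to formulas
model = LatentRewriter()
t = torch.randn(1, 128)
t_final = rollout(model, t, [torch.randn(1, 128) for _ in range(4)])
```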
rJxok1BYPr | Black Box Recursive Translations for Molecular Optimization | [
"Farhan Damani",
"Vishnu Sresht",
"Stephen Ra"
] | Machine learning algorithms for generating molecular structures offer a promising new approach to drug discovery. We cast molecular optimization as a translation problem, where the goal is to map an input compound to a target compound with improved biochemical properties. Remarkably, we observe that when generated molecules are iteratively fed back into the translator, molecular compound attributes improve with each step. We show that this finding is invariant to the choice of translation model, making this a "black box" algorithm. We call this method Black Box Recursive Translation (BBRT), a new inference method for molecular property optimization. This simple, powerful technique operates strictly on the inputs and outputs of any translation model. We obtain new state-of-the-art results for molecular property optimization tasks using our simple drop-in replacement with well-known sequence and graph-based models. Our method provides a significant boost in performance relative to its non-recursive peers with just a simple "for" loop. Further, BBRT is highly interpretable, allowing users to map the evolution of newly discovered compounds from known starting points. | [
"molecules",
"chemistry",
"drug design",
"generative models",
"application",
"translation"
] | Reject | https://openreview.net/pdf?id=rJxok1BYPr | https://openreview.net/forum?id=rJxok1BYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9-54e4UqWf",
"H1xQJmOsoB",
"Bke66M_oor",
"ByliaWOsjH",
"HJxiRgOiiB",
"r1exzxujor",
"SJgCp1Oiir",
"HyxysRDssS",
"BylkRpDssS",
"HkxY-Twior",
"Skl3e32vcB",
"ry1VC8LqB",
"H1eRIAw2YB",
"r1liUowJtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724384,
1573778138901,
1573778116829,
1573777858653,
1573777619159,
1573777415831,
1573777350229,
1573777046797,
1573776838531,
1573776640655,
1572486132101,
1572396582517,
1571745366247,
1570892627223
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1482/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1482/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1482/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1482/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a simple method for improving molecular optimization with a learned model. The method operates by repeatedly feeding generated molecules back through an encoder decoder pair trained to maximize a desired property. Reviewers liked the simplicity of the method, and found it interesting but ultimately there were concerns about the metrics used to evaluate the method. Reviewers 3 and 4 both noted issues with the log P (and penalized log P) metric, noting that it is possible to artificially increase both metrics in a way that isn't useful in practice. During the discussion phase, Reviewer 4 constructed a specific example where simply adding long carbon chains to a molecule would yield a linear increase the penalized log P metric, and noted that the \\\"best molecules\\\" found by the method in Figure 3 also have extremely long carbon chains (long carbon chains are not generally desirable for drug discovery).\\nI recommend the authors resubmit after finding a better way to evaluate that their method generates molecules with more useful properties for drug discovery.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response Part 3\", \"comment\": \"> \\\"- 9: \\\"synthetic chemists can carry out the individual steps of a molecular trace...\\\"; in general this is not true. The known medicinal chemistry transformations are a relatively small set of operations, and your molecular traces are unlikely to capture them in any systematic way. Please avoid making claims like this unless you can back them up with experimental evidence or comparisons to models explicitly trained for synthetic route planning.\\\"\\n\\nOur claim is not that molecular traces correspond to synthetic routes. Just as there is an association between chemical structure and activity (and minor perturbations to the structure should not change the activity by too much), there is an association between chemical structures and the synthetic routes to make those structures. If pairwise compounds in a molecular trace are similar, their corresponding synthetic routes will be similar, aiding in the efficiency of chemical synthesis (if you can make one step i, it should follow that you can make step i+1).\\n\\nWe have removed this discussion from the paper as we have concluded that it is only tangentially related.\\n\\n> \\\"- 9: These experiments are not multi-property optimization. Measuring the value of a secondary property while optimizing a primary property is not the same as optimizing them both simultaneously. The latter requires some strategy for incorporating both property values into the decision function, such as scalarizing (see \\\"multi-objective optimization\\\" on Wikipedia).\\\"\\n\\nPlease see our comment in \\\"Response Part 1\\\".\"}",
"{\"title\": \"Response Part 2\", \"comment\": \"> \\\"- 5: The results in Table 1 would be more compelling if they were not divorced from their starting points. Please include information about the similarity of these molecules to the starting molecule as well as the property delta. Consider an approach like Jin et. al [2], where results were specifically categorized by similarity constraints.\\\"\\n\\nBBRT extends the translation set up from similarity constrained optimization to unconstrained molecular optimization. The best compounds reported in Table 1 typically have low similarity to the input compound (by design) but are highly similar to the compound from the previous recursive step. We present two molecular traces to highlight the evolution of any compound from its seed to the final compound (Fig. 5A and Fig. 10). \\n\\n> \\\"- 5: \\\"All models were trained on the open-source ZINC dataset.\\\" What subset of ZINC are you using?\\\"\\n\\nWe use the 250K subset from Kusner et al. 2017. \\n\\n> \\\"- 5: \\\"Consistent with the literature we report diversity as...\\\". Please cite some literature that you are consistent with.\\\"\\n\\nWe have added a reference.\\n\\n> \\\"- 6: \\\"we sample 100 times from a top-2...\\\"; does this mean you are doing 100 iterations? Sampling 100 times from the same top-2 sampler doesn't really make sense, but I'm not entirely sure what you are describing here.\\\"\\n\\nWe have clarified this sentence. \\u201cFor both BBRT applications, we sampled 100 complete sequences from a top-2 and from a top-5 sampler and then aggregated these outputs with a beam search using 20 beams and outputting 20 compounds\\u201d. \\n\\nA top-2 sampler samples from the top 2 most likely tokens at each step of decoding until we hit a stop token. We sample 100 complete sequences following this process.\\n\\n> \\\"- 7: Figure 4 says these are \\\"ablation\\\" experiments. What exactly are you ablating?\\\"\\n\\nTable 1 presents the top scoring compounds from aggregating across decoding strategies (deterministic and stochastic decoding) and scoring functions (logP, max pairwise sim, max seed sim, min mol wt.). Fig. 4 disentangles these design decisions and shows how each of these individual components impact performance.\\n\\n> \\\"- 8: You state that better performance on logP and similar performance on QED is not known in the literature. In fact, the MolDQN paper [3] calls this out explicitly (and also contains a discussion of bounded vs. unbounded logP).\\\"\\n\\nWe were simply stating that using a translation model (whether its graph or sequence based) coupled with stochastic decoding provides better performance on logP (relative to just using deterministic decoding) and provides state-of-the-art results on QED. To the best of our knowledge, this was not known before.\\n\\n> \\\"- 8: \\\"Recent RL methods focus on molecular construction and are therefore not well-suited for the generation of molecular traces\\\"; I disagree with this. RL methods that can start from a predefined graph have the ability to move between compounds, possibly in a way that is orthogonal to traditional similarity-based exploration (see the discussion of \\\"MDP edit distance\\\" in [4]). 
Also note that one of the key features of graph-based generators like [2] and [5] is that all of the intermediate states are valid, so you could do similar molecular traces for interpretability (although your differences are more like MMPs with functional group-level deltas).\\\"\\n\\nThank you for these references. We have modified the text to mention that there are alternative methods for generating molecular traces using graph-based methods.\"}",
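As a concrete illustration of the top-k stochastic decoding discussed in this exchange, here is a minimal sketch. The model interface (`model.bos_id`, `model.step`) and the token ids are hypothetical placeholders for illustration, not the authors' actual API.

```python
import torch

def sample_top_k(model, src, k=2, max_len=200, eos_id=2):
    """Top-k stochastic decoding: at each step, keep only the k most
    likely next tokens, renormalize, and sample; stop at end-of-sequence."""
    tokens = [model.bos_id]                      # hypothetical start token
    for _ in range(max_len):
        logits = model.step(src, tokens)         # assumed: 1-D next-token logits
        top_logits, top_ids = torch.topk(logits, k)
        probs = torch.softmax(top_logits, dim=-1)
        next_id = top_ids[torch.multinomial(probs, 1)].item()
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

# "Sampling 100 complete sequences from a top-2 sampler" is then simply
# 100 independent runs of this stochastic procedure:
# samples = [sample_top_k(model, src, k=2) for _ in range(100)]
```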
"{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for their thoughtful response.\\n\\n> \\\"Review: This paper presents a translation-based method for molecular property optimization. It uses a sequence- or graph-based encoder/decoder framework to produce molecules with (hopefully) improved properties, then feeds a subset of these molecules back into the encoder/decoder to generate a new set of molecules; this process is repeated for a fixed number of iterations to arrive at a final set of \\\"optimized\\\" molecules. The method is agnostic to the form of the encoder/decoder; the emphasis is on the iterative approach. Additionally, this approach enables visualization of \\\"molecular\\\" traces that can reveal pathways between molecules that follow relationships similar to matched molecular pairs. The work extends related work in translation-based property optimization [6]. The paper is well-written and generally easy to read.\\n\\nThe method is evaluated on two tasks, logP and QED. Both of these are computed properties that have known issues (see the discussion in [3]), but I understand that these properties are used in many publications and are thus easy to compare. \\\"\\n\\nWe agree there are problems with logP and QED, but we leave the optimization of new properties for future work.\\n\\n>The method presented here performs similar to others on a QED task.\\n\\nThey claim superior results on the logP task, but I have concerns about the fairness of the comparison since logP can be exploited by very simple models if there are no limits on the size of the generated molecules (or, similarly, the number of tokens/generative steps allowed for each molecule).\\\"\\n\\nWe use penalized logP, which takes into account ring size and synthetic accessibility. In principle, this property should not reward \\u201cdaisy chaining\\u201d carbon bonds. The penalization, however, is not perfect and is certainly a limitation of using this property.\\n\\n> \\\"Additionally, the authors claim to perform multi-objective optimization but do not actually do this.\\\"\\n\\nWe describe how BBRT enables scoring and ranking intermediate outputs by a second property, thus propagating molecules that score highly on the second property (relative to other sampled outputs at each step) forward. \\u201cRanking by a secondary property\\u201d is a different computation than just reporting a second objective and can be viewed as a form of optimization (albeit a simple one). In Fig. 6A left, the solid lines denote reporting logP (the second objective) while optimizing and scoring by QED. The dotted lines report logP while optimizing QED and scoring by logP. We observe a significant improvements in logP when scoring by logP as opposed to merely reporting its value. This shows that scoring by the second objective can improve that property\\u2019s values without explicit joint optimization.\\n\\nWe understand multi-objective optimization is an overloaded term that can be confusing here given our actual procedure. 
We have changed the section title to \\u201cImproving secondary properties by ranking\\u201d.\\n\\n> \\\"- 1: \\\"potential druggable candidates\\\" does not make sense; compounds are not \\\"druggable\\\" (their targets are), although they may be \\\"drug-like\\\".\\\"\\n\\nThank you for this correction.\\n\\n> \\\"- 2: Consider citing Kramer et al.'s seminal work on matched molecular pairs [1].\\\"\\n\\nWe have added this citation.\\n\\n> \\\"- 3: Please explain what it means for y to \\\"paraphrase\\\" x?\\\"\\n\\nThis language has been removed and the setup has been clarified.\\n\\n> \\\"- 5: For your logP experiments, you need to be more clear about how you are comparing to other models. You are guaranteed to get to higher logP values if you can generate larger molecules (more tokens) than the baselines, since logP is essentially linear in the number of carbons. Are you doing something to limit the number of tokens you can generate in each iteration? Or why should I believe these comparisons are fair?\\\"\\n\\nAs mentioned above, penalized logP is not linear in the number of carbons. We compute penalized logP in an identical manner to JTNN (Jin et al. 2019), which uses the same setup as GCPN (You et al. 2018). Additionally, in Fig. 4A right we show results where we score by minimum molecular weight to explicitly counteract this logP artifact by choosing compounds that minimize the number of added functional groups while still improving on penalized logP. \\n\\n> - 5: In Table 1, note that some literature uses a \\\"normalized\\\" penalized logP, while others use the formula directly without a dataset-specific normalization (which can appear to give better results). Can you confirm which you are using here and whether the baseline models are the same?\\\"\\n\\nWe use the normalized penalized logP measure as described in Jin et al. 2019 and You et al. 2018.\"}",
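For context, the penalized logP referred to throughout this exchange is commonly computed as a standardized logP minus standardized synthetic-accessibility and largest-ring penalties. The sketch below assumes RDKit plus its Contrib SA scorer (`sascorer`, which is not part of the core RDKit API and must be placed on the path manually); the `stats` normalization constants are placeholders to be computed from the training set, not values taken from the paper, and this is one common variant of the metric rather than the authors' exact code.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
import sascorer  # RDKit Contrib/SA_Score module, added to sys.path manually

def penalized_logp(smiles, stats):
    """One common variant of penalized logP (JT-VAE/GCPN-style):
    z(logP) - z(SA score) - z(cycle penalty), where z(.) standardizes a raw
    value with training-set mean/std supplied in `stats` (placeholders)."""
    mol = Chem.MolFromSmiles(smiles)
    log_p = Descriptors.MolLogP(mol)
    sa = sascorer.calculateScore(mol)
    # cycle penalty: how far the largest ring exceeds 6 atoms
    ring_sizes = [len(r) for r in mol.GetRingInfo().AtomRings()]
    cycle = max(max(ring_sizes) - 6, 0) if ring_sizes else 0
    z = lambda x, key: (x - stats[key + "_mean"]) / stats[key + "_std"]
    return z(log_p, "logp") - z(sa, "sa") - z(cycle, "cycle")
```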
"{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for their thoughtful response.\\n\\n> \\\"The paper builds on existing translation models developed for molecular optimization, making an iterative use of sequence to sequence or graph to graph translation models by wrapping them in a meta-procedure. The primary contribution is really just to apply the translation models iteratively, i.e., feeding translation outputs from the models back in as inputs for retranslation. A few strategies are introduced to score / rank candidates before they are chosen for retranslation. The overall idea is very simple, and is likely to work in some basic cases where the property has a natural \\\"additive\\\" nature, e.g., logP that you can improve by adding functional groups.\\\"\\n\\nWe use penalized logP, which penalizes simple strategies like adding functional groups. We agree, however, this is still an artifact of the property that can\\u2019t be fully fixed. \\n\\n> \\\"This is recognized but not really controlled in the paper except for selecting for input similarity before retranslating.\\\"\\n\\nWe introduced a scoring function that ranks by the minimum molecular weight. We use this scoring function to help control for the additive nature of logP and observe that ranking by the minimum molecular weight can find better compounds than its non-recursive peer when ranking by minimum molecular weight and non-recursive Seq2Seq scoring by logP.\\n\\n> \\\"The empirical results are clean though not convincing (see the logP discussion above). Additional properties should be included to demonstrate that the method might actually have some practical value, i.e., generalize beyond additive logP. \\\"\\n\\nWe leave for future work the optimization of properties beyond logP and QED.\\n\\n> \\\"Multi-property optimization would be one possible setting since de novo models have a hard time to reach intersections of different property constraints. Abstractly, one could imagine that an iterative, successively guided approach could work well. The proposed approach in the paper is somewhat undeveloped. It merely uses a translation model for the primary property, and ranks candidates by the other. This is unlikely to get you to any challenging intersections. Also, since logP was always one of the properties effectiveness in this regard is not really demonstrated either. A slightly more sophisticated approach might use relaxed, separately trained ranking models in intermediate steps, successively tightened towards the intersection as the iteration progresses. E.g., Brookes et al., Design by adaptive sampling, arXiv:1810.03714\\\"\\n\\nWe agree that an iterative, successively guided approach to multi-property optimization is an interesting direction. While our approach of translating according to a primary property and propagating molecules that score highly on a secondary property does not jointly optimize both properties, we have found this simple approach to work quite well. In Fig. 6A, regardless of the decoding strategy used (beam or stochastic decoding), ranking by a secondary property always produces improves in the mean logP of the generated candidates relative to ranking by the primary property. More importantly, the primary properties values do not degrade that significantly as evident by Fig. 6B (they are still very high relative to the seed sequences ~0.90 vs ~0.75).\"}",
"{\"title\": \"Response Part 2\", \"comment\": \"> \\\"This would highlight that the advantage of stochastic decoding is really online in the context of recursive translation, not generally.\\\"\\n\\nStochastic decoding certainly helps for recursive translations. However, we also observed that it helps generally for non-recursive translations. In Fig. 4A left, the dotted lines show mean logP for three decoding strategies (beam search, top 2 and top 5 sampling) using a Seq2Seq model. We observe that beam search generates the lowest mean logP.\\n\\n\\n> \\\"- What is the point of Fig 4A right? Why do we expect that maximizing non-logP properties will increase mean logP?\\\"\\n\\nThis is a plot comparing four scoring functions. The translation model is trained to optimize logP for all lines. The difference is how the intermediate outputs are ranked and thus which top k molecules are propagated forward. We expect scoring by non-logP will increase mean logP because the translation still optimizes for logP. Among the optimized logP candidates, the scoring function ranks these choices by one of four scoring functions.\"}",
"{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for their thoughtful response.\\n\\n> \\\"I am leaning towards an accept for this paper, since not only does the technique presented seem general, the authors does in depth analysis into the model and how it affects drug discovery.\\n- Recursive black box translation seems to be widely applicable to new models.\\n- The model seems to reach a significantly better state of the art on the metrics proposed.\\n- None of the baselines seem to use SELFIES as the string of choice.\\\"\\n\\nThe Seq2Seq baseline was trained on SELFIES while JT-VAE, GCPN, and VJTNN are graph-based. We use the reported numbers from ORGAN, which was trained on SMILES strings. We agree a better comparison to ORGAN as a baseline would be trained on SELFIES.\\n\\n> \\\"This means it's difficult to tell how much the \\\"Blackbox recursive\\\" part of the algorithm adds to the model. An ablation experiment without BBRT might inform us of how much of the benefit is due to the molecule representation (Fig 4A reports the mean, but it would be good to have the same metric as Table 1).\\\"\\n\\nThis ablation experiment is reported. In Table 1, the Seq2Seq baseline is trained on SELFIES without BBRT. Figure 2 compares the top 100 generated compounds according to Seq2Seq vs BBRT-Seq2Seq and JTNN vs BBRT-JTNN controlling for the molecular representation. \\n\\n> \\\"A few questions would clear up the strengths of the paper:\\n- Is there a connection to the backtranslation work in Lample 2018? (Phrase-Based & Neural Unsupervised Machine Translation)\\n It seems like a similar idea - except in this domain, the target language and source language are the same.\\\"\\n\\nWe acknowledge that there are some philosophical similarities between our work and the iterative backtranslation (Sennrich et al. 2015) in Lample et al. 2018. Some key distinctions:\\n\\n1. The design goal and motivation are different: while backtranslation (Sennrich et al. 2015) often is used to improve learning in low-resource settings by construction of augmented training sets, ours focuses on the dynamics at test-time; by construction and in this paper, our training data already consists of molecular pairs $(X, Y)$ with high structural similarity $\\\\tau$, for a given property. We do note that an interesting direction for exploring constrained optimization would be where we want to learn a prior over training pairs with a relaxed constraint and use this for a more stringent, high-constraint setting (a \\u201clow resource\\u201d setting, where we might encounter a paucity of data due to expensive experiments).\\n\\n2. The Lample et al. 2018 uses two different language models are learned over source $s$ and target $t$ languages $(P_{s \\\\rightarrow t}$ and $P_{t \\\\rightarrow s})$. The source-to-target model is applied to source sequences to generate inputs for training the target-to-source model. The backtranslation step generates source and target sequences using the learned translation models $P^{(k-1)}_{t \\\\rightarrow s}$ and $P^{(k-1)}_{s \\\\rightarrow t}$ and then trains new translation models $P^{(k)}_{t \\\\rightarrow s}$ and $P^{(k)}_{s \\\\rightarrow t}$ using the sequences. We use only one model, and after $n$ recursive iterations, we ensemble the generated sequences whilst scoring on a desired objective. We acknowledge that, in principle, BBRT could be extended to ensembling recursive outputs from more than one model.\\n\\n\\n> \\\"- How can there be multiple scoring functions? 
\\n Were they combined in one run, or were these separately optimized runs?\\\"\\n\\nSeparate runs. \\n\\n> \\\"Are these only used in Figure 4?\\\"\\n\\nFig. 4 shows results from 4 scoring functions and Fig. 6 compares 2. Fig. 5 uses the max pairwise sim scoring function while the remaining results use the same scoring function that was used to optimize compounds. Finally, the top compounds reported in Table 1 and Fig. 3 are from aggregating results across decoding strategies and scoring functions.\\n\\n> \\\"- Why would beam search do less well than stochastic? \\n Is it because during recursive translation, the beam search variants have low diversity?\\\"\\n\\nFinding sequences that have high probability under the model (when trained with teacher forcing) is not always well-calibrated for downstream tasks (like finding sequences that score highly on chemical properties). This is a well-known phenomenon in the NLP literature. Because stochastic decoding can generate a larger number of diverse samples, empirically we have found that after generating a large number of samples (say 100) we are able to find higher scoring compounds relative to the sequence that has the highest log probability under the model (found using beam search).\\n\\n> \\\"Then, training with stochastic decoding and generation with a beam search should do even better, right?\\\"\\n\\nYes, training with stochastic decoding and beam search does do better than just stochastic decoding.\"}",
"{\"title\": \"Response Part 3\", \"comment\": \"> \\\"8. Section 5.3 is verbose and can shortened to a few sentences saying that applying edits to molecules recursively makes the model interpretable. How do traces look like when logP is used a selection criteria? How does the trace of the best molecule shown in figure 3 look like?\\\"\\n\\nWe appreciate your feedback regarding concision. Section 5.3 has been shortened. Please see Fig. 10 in the supplement. We have added a new molecular trace for a high scoring logP compound (comparable to the best molecule shown in Fig. 3) using logP as the scoring function. \\n\\n> \\\"What is the average edit distance between molecules\\\"\\n\\nWe have added Fig. 11 to the supplement, which reports the average Levenshtein edit distance between molecules under two decoding strategies and two scoring functions.\\n\\n > \\\"are and intermediate molecules valid? Are transitions plausible?\\\"\\n\\nFollowing the results reported in the SELFIES paper (Krenn et al. 2019), most generated molecules (including intermediate molecules) are valid. Empirically we observed greater than 99.9% validity. The transitions appear to be plausible. We report two molecular traces in the paper. Additionally, the edit distance plot (Fig. 11) shows a tradeoff between edit distance between pairwise steps and decoding strategy. For softmax sampling from top 2 most likely at each step, the edit distance is highly consistent, while for top 5 sampling, the edit distance is big initially but then it sharply drops off.\\n\\n> \\\"9. Section 5.4: You are optimizing a single objective (e.g. logP) while reporting in parallel a second objective (e.g. QED). This is not multi-objective optimization, where multiple objectives are optimized in parallel. Optimizing a single objective while reporting a second objective can be also done with methods other than BBRT. Please clarify the take-away message of this paragraph or remove it.\\\"\\n\\nThe primary translation model is indeed optimizing a single objective. We describe how BBRT enables scoring and ranking intermediate outputs by a second property, thus propagating molecules that score highly on the second property (relative to other sampled outputs at each step) forward. \\u201cRanking by a secondary property\\u201d is a different computation than just reporting a second objective and can be viewed as a form of optimization (albeit a simple one). In Fig. 6A left, the solid lines denote reporting logP (the second objective) while optimizing and scoring by QED. The dotted lines report logP while optimizing QED and scoring by logP. We observe a significant improvements in logP when scoring by logP as opposed to merely reporting its value. This shows that scoring by the second objective can improve that property\\u2019s values without explicit joint optimization.\\n\\nWe understand multi-objective optimization is an overloaded term that can be confusing here given our actual procedure. We have changed the section title to \\u201cImproving secondary properties by ranking\\u201d.\\n\\n\\n> \\\"Minor comments\\n=============\\n10. Introduction: \\u2018discrete and unstructured\\u2019. Why unstructured? I would say that molecules are structured--they must follow a certain grammar to be valid.\\\"\\n\\nWhile molecules themselves are highly structured objects, chemical space is unstructured (e.g. there is no canonical ordering of all chemicals).\\n\\n> \\\"11. 
Introduction: \\u2018treating inference as a first class citizen\\u2019 is unclear since \\u2018inference\\u2019 is undefined. Either remove this sentence or clarify.\\\"\\n\\nWe have clarified \\u201cinference\\u201d with \\u201cdecoding strategy\\u201d.\\n\\n> \\\"12. Please discuss that BBRT is limited by the need of a labeled dataset for constructing training pairs.\\\"\\n\\nThis has been added to the paper.\\n\\n> \\\"13. Section 5.1, \\u2018Similar computational budget\\u2019. How did you quantify the computation budget?\\\"\\n\\nFor a fair comparison to the non-recursive peers (Seq2Seq and JTNN), we wanted to make sure we decoded the same number of samples as the aggregate number of decoded samples across a BBRT experiment (which includes using 3 decoding strategies and 4 scoring functions). We simply compute: number of decoded samples (100) * 3 decoding strategies * 4 scoring functions * total number of recursive iterations, to arrive at the \\u201ccomputational budget\\u201d. For a fair comparison in terms of model capacity, we used a hidden state dimension of 500, which seemed comparable to the JTNN model baseline.\\n\\n\\n> \\\"14. Section 5.1, \\u2018In Fig 2, we report\\u2019. Do you mean Fig 3? Same as with \\u2018Fig 3\\u2019 in the following paragraph.\\\" \\n\\nFigure 2 reports the top 100 logP generated compounds using BBRT (and its non-recursive peer) for two molecular representations. Figure 3 reports the top 2 compounds under BBRT-Seq2Seq and BBRT-JTNN. This appears to be correct in the text.\"}",
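For reference, the Levenshtein edit distance reported in Fig. 11 between consecutive strings of a molecular trace is the standard dynamic-programming recurrence; a minimal, self-contained implementation is sketched below.

```python
def levenshtein(a, b):
    """Standard DP Levenshtein distance between two strings (here, the
    SELFIES/SMILES strings of consecutive steps in a molecular trace)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```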
"{\"title\": \"Response Part 2\", \"comment\": \"> \\\"3. The method names (Graph2Graph, Seq2Seq, R-Graph2Graph, R-Seq2Seq, BBRT-JTNN, \\u2026) are not defined in section 5.1-baselines, and used inconsistently. Is JTNN the same as Graph2Graph and does BBRT mean recursive (R-)? This makes is hard to follow the results section.\\\"\\n\\nWe have fixed the notation.\\n\\n> \\\"4. Section 5.1: How does the performance depends on the initial seed of sequences? How sensitive is it i) to the choice of the diversity cutoff, and ii) to the target value of the initial molecules?\\\"\\n\\nPlease see Figure 9 in the supplement. We have added experiments to better understand how performance varies depending on the diversity and the property values of the seed compounds. In Fig. 9A, we selected 3 sets of 100 seeds with different diversity levels (min, avg, max) computed with the MinMax algorithm (described in Section 5.1 setup), and applied BBRT to these three sets. Standard error is reported after running this experiment 10 times with different sets of seed compounds chosen each time (using the randomness in MinMax). We observe improvements in performance using a set of seed sequences with max diversity. \\n\\nFor seed molecules with high property values, there are typically no \\u201ctarget values\\u201d available in the training data. Instead, in Fig. 9B, we assessed how performance depends on the property values of the initial molecules themselves. We observed improved performance in early iterations using seed molecules with high property values, although in later iterations, the seed molecule property values does not seem to make much of a difference. \\n\\n> \\\"5. Fig 4a, right: Is is expected that logP increases fastest when using it as a scoring function. Please show instead QED vs. the number of iterations. QED combines several molecular properties, including logP, and is therefore more suited for quantifying drug likeness.\\\"\\n\\nFig. 4A right highlights the performance tradeoffs when using different scoring functions, which can allow users to weigh between competing tradeoffs (e.g. interpretability vs performance). The fact that using logP as a scoring function when optimizing for logP finds the best logP compounds is useful information if the end goal of the user is to find the highest logP scoring molecules. In Fig. 6, we show the optimization of QED while using logP as a scoring function (the reverse of your suggestion). When optimizing for QED, using logP as a scoring function generates compounds that score highly on both metrics.\\n\\n> \\\"6. \\u2018Differences between logP and QED.\\u2019 I do not understand this section. Please clarify the goal of an explorative vs interpolative task? Are molecules with the highest QED in the training dataset? Motivate why BBRT does not achieve a higher QED in table 1?\\\"\\n\\nThis section has been clarified and the \\u201cexplorative vs interpolative\\u201d language has been removed. Despite our BBRT method outperforming ORGAN, JT-VAE and GCPN on the QED task, we observe the corresponding non-recursive techniques---Seq2Seq and JTNN baselines performed just as well without recursive inference. This might be a result of the translation models finding compounds that have reached a max ceiling for the best scoring QED compounds (we haven\\u2019t seen any paper report higher QED values than 0.948). Additionally, molecules with the highest QED values are in the training dataset. 
We leave for future work the application of BBRT to other metrics and experimental designs that leave out the best scoring compounds from the training data.\\n\\n> \\\"7. \\u2018The distinction in the vocabulary highlights the usefulness \\u2026\\u2019. Is the conclusion that representing molecules as sequences is better than representing them as graphs? This would contradict several recent papers on graph-based representations. You are only comparing the top molecules.\\\"\\n\\nWe do not make any claim that a sequence-based representation is better than a graph-based representation. Instead we argue that sequence- and graph-based representations are complementary approaches that empirically leverage different vocabularies for top scoring compounds. Flexible frameworks (like BBRT) that enable the ensembling of results across molecular representations are key to diverse generation. For future work, we would like to include additional molecular representations (including grammars and alternative graph-based representations).\\n\\n> \\\"Is there a significant difference in the complexity between the top 100 (for example) molecules?\\\"\\n\\nPlease see Fig. 2 right. We report the diversity of the top 100 generated compounds under BBRT-JTNN, BBRT-Seq2Seq and the top 100 compounds from the training data.\"}",
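A greedy max-min ("MinMax") selection of the kind described above for choosing diverse seed sets can be sketched as follows. This version assumes RDKit Morgan fingerprints and Tanimoto distance and is only illustrative; RDKit also ships a MaxMinPicker that would normally be used in practice.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def minmax_pick(smiles_list, n_seeds=100):
    """Greedy max-min diversity selection: repeatedly add the molecule
    whose minimum Tanimoto distance to the already-picked set is largest."""
    n_seeds = min(n_seeds, len(smiles_list))
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, 2048)
           for s in smiles_list]
    picked = [0]                         # arbitrary first pick (randomize for SE)
    while len(picked) < n_seeds:
        rest = [i for i in range(len(fps)) if i not in picked]
        dist = lambda i: min(1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])
                             for j in picked)
        picked.append(max(rest, key=dist))
    return [smiles_list[i] for i in picked]
```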
"{\"title\": \"Response Part 1\", \"comment\": \"We thank the reviewer for their thoughtful response.\\n\\n> \\\"1. Framing optimizing as a sequence to sequence problem is not new. As described in the related work section, the BBRT is closely related to Jin et al. However, it is not clearly described what the major improvement over Jin et al is. Please clarify \\u2018their inference method restricts the framework\\u2019s application to more general problems.\\u2019. The method is also closely related to Zou et al (https://www.nature.com/articles/s41598-019-47148-x) and Mueller et al (http://proceedings.mlr.press/v70/mueller17a.html), which are not cited in the text. Zou et al used RL to learn to optimize molecules by mutating existing molecules. Mueller et al used Seq2Seq to optimize the sentiment of sentences. Please cite these papers and discuss why BBRT is better.\\\"\\n\\nJin et al. 2019 translate source graphs to improved target graphs while retaining high structural similarity. BBRT extends the translation framework from similarity-constrained optimization to the more general unconstrained setting of finding the best scoring molecules regardless of its similarity to a seed compound. Our results show that BBRT provides significant improvements in unconstrained molecular optimization relative to its non-recursive peer--just decoding from a translation model (Jin et al. 2019).\\n\\nThank you for these references. Zhou et al. 2019 is cited in the text--in the first paragraph of Section 2 Related Work: \\u201cMolecular optimization has been approached with reinforcement learning (Popova et al., 2018; Zhou et al., 2019).\\u201d \\n\\nMueller et al use a variational autoencoder coupled with constrained optimization in the latent space to decode better sequences. There are two differences: 1) Mueller et al. learns a marginal density p(x) while we directly improve on compounds by repeatedly applying a model that learns p(y|x) and 2) they focus on similarity-constrained optimization (optimize the sentiment of sentences without changing the sentence too much) while we focus on finding the best sequences more generally. Zhou et al. perform molecular optimization with reinforcement learning. While reinforcement learning techniques have shown great promise for molecular optimization, we focus on a different problem--one of molecular optimization via direct supervised translations. We argue that framing translation as optimization allows us to solve an easier supervised learning problem, which we have shown can lead to improved results. While we do not compare against Zhou et al. 2019 directly, we do provide comparisons to two other reinforcement learning techniques (ORGAN and GCPN). \\n\\n> \\\"2. Please compare to ChemBO (http://arxiv.org/abs/1908.01425). The current baselines are one-shot in that they are proposing a batch of molecules once without using the acquired target function label to propose subsequent batches. ChemBO optimizes a target function such as logP over multiple rounds similar to recursive BBRT approach, and should therefore be included as a baseline. Another suitable baseline would be performing BO in the latent space by applying Gomez et Bombarelli recursively (embed molecule; optimize GP in embedding space; decode molecule; iterate).\\\"\\n\\nWe tried downloading ChemBO (https://github.com/ks-korovina/chembo), but observed errors when running the program. Given the short timeline of rebuttals, we were unable to compare to this method, which we will leave for future work. 
We note, however, that 3 of the 5 baselines (ORGAN, JT-VAE, GCPN) are not \\u201cone-shot\\u201d; they are iterative methods. ORGAN and GCPN optimize molecules with reinforcement learning--iteratively generating molecules, computing a reward, and repeating. JT-VAE performs Bayesian Optimization in a learned latent space to find better compounds. This is an identical setup to ChemBO in terms of the optimization--the main difference between the two being the molecular representation used (learned latent embeddings versus a kernel on molecular graphs). We do not compare directly to Char-VAE (G\\u00f3mez-Bombarelli et al. 2018) because JT-VAE is in the same class of methods (VAE plus post-hoc BO in latent space), but JT-VAE has been shown empirically to find better compounds (Jin et al. 2018). We appreciate your suggestion to apply Char-VAE recursively. Optimizing molecules in an embedding space using Bayesian Optimization will find a local minimum, so we have no reason to believe that recursive applications of this model will improve the compounds discovered from the initial (fully optimized) application of BO. It\\u2019s possible that recursive applications can find different local minima, but random restarts can also deal with this issue. For our application, recursive applications of a translation model make sense because a single translation does not correspond to a local minimum. In the BO case, it does.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a novel approach to generating molecules using Black Box Recurrent Translation.\\nThe authors uses existing machine-translation inspired schemes to generate new, similar, molecules with better properties according to some measure.\\nThen, the recursion takes the top K best molecules and runs another iteration to generate even better molecules, ad infinitum.\\nThe authors use the newly introduced SELFIES strings as vocabulary for generation.\\nThe authors also analyze the decoding strategy, and how the process generates interpretable molecular traces.\\nRelating to wetlab work, having a molecular trace available from the recursive translation scheme is valuable for drug-sythesis.\\nThe authors also show that this technique can optimize multiple properties at once.\\n\\nI am leaning towards an accept for this paper, since not only does the technique presented seem general, the authors does in depth analysis into the model and how it affects drug discovery.\\n- Recursive black box translation seems to be widely applicable to new models.\\n- The model seems to reach a significantly better state of the art on the metrics proposed.\\n- None of the baselines seem to use SELFIES as the string of choice.\\n This means it's difficult to tell how much the \\\"Blackbox recursive\\\" part of the algorithm adds to the model.\\n An ablation experiment without BBRT might inform us of how much of the benefit is due to the molecule representation (Fig 4A reports the mean, but it would be good to have the same metric as Table 1).\\n- The authors provide an in depth discussion about how having molecular traces would hhelp in drug design.\\n This makes the tool seem more widely appealing and useful.\", \"a_few_questions_would_clear_up_the_strengths_of_the_paper\": \"- Is there a connection to the backtranslation work in Lample 2018? (Phrase-Based & Neural Unsupervised Machine Translation)\\n It seems like a similar idea - except in this domain, the target language and source language are the same.\\n- How can there be multiple scoring functions? \\n Were they combined in one run, or were these separately optimized runs? Are these only used in Figure 4?\\n- Why would beam search do less well than stochastic? \\n Is it because during recursive translation, the beam search variants have low diversity?\\n Then, training with stochastic decoding and generation with a beam search should do even better, right?\\n This would highlight that the advantage of stochastic decoding is really online in the context of recursive translation, not generally.\\n- What is the point of Fig 4A right? Why do we expect that maximizing non-logP properties will increase mean logP?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": [\"This paper presents a translation-based method for molecular property optimization. It uses a sequence- or graph-based encoder/decoder framework to produce molecules with (hopefully) improved properties, then feeds a subset of these molecules back into the encoder/decoder to generate a new set of molecules; this process is repeated for a fixed number of iterations to arrive at a final set of \\\"optimized\\\" molecules. The method is agnostic to the form of the encoder/decoder; the emphasis is on the iterative approach. Additionally, this approach enables visualization of \\\"molecular\\\" traces that can reveal pathways between molecules that follow relationships similar to matched molecular pairs. The work extends related work in translation-based property optimization [6]. The paper is well-written and generally easy to read.\", \"The method is evaluated on two tasks, logP and QED. Both of these are computed properties that have known issues (see the discussion in [3]), but I understand that these properties are used in many publications and are thus easy to compare. The method presented here performs similar to others on a QED task. They claim superior results on the logP task, but I have concerns about the fairness of the comparison since logP can be exploited by very simple models if there are no limits on the size of the generated molecules (or, similarly, the number of tokens/generative steps allowed for each molecule). Additionally, the authors claim to perform multi-objective optimization but do not actually do this.\", \"The iterative nature of this method is very interesting. However, my concerns about the types of experiments and comparisons that were done (see below for more details) are big enough that I cannot approve this paper in its current form. Weak reject.\", \"Specific notes (starting with page number):\", \"1: \\\"potential druggable candidates\\\" does not make sense; compounds are not \\\"druggable\\\" (their targets are), although they may be \\\"drug-like\\\".\", \"2: Consider citing Kramer et al.'s seminal work on matched molecular pairs [1].\", \"3: Please explain what it means for y to \\\"paraphrase\\\" x?\", \"5: For your logP experiments, you need to be more clear about how you are comparing to other models. You are guaranteed to get to higher logP values if you can generate larger molecules (more tokens) than the baselines, since logP is essentially linear in the number of carbons. Are you doing something to limit the number of tokens you can generate in each iteration? Or why should I believe these comparisons are fair?\", \"5: In Table 1, note that some literature uses a \\\"normalized\\\" penalized logP, while others use the formula directly without a dataset-specific normalization (which can appear to give better results). Can you confirm which you are using here and whether the baseline models are the same?\", \"5: The results in Table 1 would be more compelling if they were not divorced from their starting points. Please include information about the similarity of these molecules to the starting molecule as well as the property delta. Consider an approach like Jin et. 
al [2], where results were specifically categorized by similarity constraints.\", \"5: \\\"All models were trained on the open-source ZINC dataset.\\\" What subset of ZINC are you using?\", \"5: The supplementary figure showing that logP is broken is missing?\", \"5: \\\"Consistent with the literature we report diversity as...\\\". Please cite some literature that you are consistent with.\", \"6: \\\"we sample 100 times from a top-2...\\\"; does this mean you are doing 100 iterations? Sampling 100 times from the same top-2 sampler doesn't really make sense, but I'm not entirely sure what you are describing here.\", \"7: Figure 4 says these are \\\"ablation\\\" experiments. What exactly are you ablating?\", \"8: You state that better performance on logP and similar performance on QED is not known in the literature. In fact, the MolDQN paper [3] calls this out explicitly (and also contains a discussion of bounded vs. unbounded logP).\", \"8: \\\"Recent RL methods focus on molecular construction and are therefore not well-suited for the generation of molecular traces\\\"; I disagree with this. RL methods that can start from a predefined graph have the ability to move between compounds, possibly in a way that is orthogonal to traditional similarity-based exploration (see the discussion of \\\"MDP edit distance\\\" in [4]). Also note that one of the key features of graph-based generators like [2] and [5] is that all of the intermediate states are valid, so you could do similar molecular traces for interpretability (although your differences are more like MMPs with functional group-level deltas).\", \"9: \\\"synthetic chemists can carry out the individual steps of a molecular trace...\\\"; in general this is not true. The known medicinal chemistry transformations are a relatively small set of operations, and your molecular traces are unlikely to capture them in any systematic way. Please avoid making claims like this unless you can back them up with experimental evidence or comparisons to models explicitly trained for synthetic route planning.\", \"9: These experiments are not multi-property optimization. Measuring the value of a secondary property while optimizing a primary property is not the same as optimizing them both simultaneously. The latter requires some strategy for incorporating both property values into the decision function, such as scalarizing (see \\\"multi-objective optimization\\\" on Wikipedia).\"], \"references\": \"[1] Kramer, C. et al. Learning Medicinal Chemistry Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) Rules from Cross-Company Matched Molecular Pairs Analysis (MMPA). J. Med. Chem. 61, 3277\\u20133292 (2018).\\n[2] Jin, W., Barzilay, R. & Jaakkola, T. Junction Tree Variational Autoencoder for Molecular Graph Generation. arXiv [cs.LG] (2018).\\n[3] Zhou, Z., Kearnes, S., Li, L., Zare, R. N. & Riley, P. Optimization of Molecules via Deep Reinforcement Learning. Sci. Rep. 9, 10752 (2019).\\n[4] Kearnes, S., Li, L. & Riley, P. Decoding Molecular Graph Embeddings with Reinforcement Learning. arXiv [cs.LG] (2019).\\n[5] You, J., Liu, B., Ying, R., Pande, V. & Leskovec, J. Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation. arXiv [cs.LG] (2018).\\n[6] Jin, W., Yang, K., Barzilay, R. & Jaakkola, T. Learning Multimodal Graph-to-Graph Translation for Molecular Optimization. arXiv [cs.LG] (2018).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper builds on existing translation models developed for molecular optimization, making an iterative use of sequence to sequence or graph to graph translation models by wrapping them in a meta-procedure. The primary contribution is really just to apply the translation models iteratively, i.e., feeding translation outputs from the models back in as inputs for retranslation. A few strategies are introduced to score / rank candidates before they are chosen for retranslation. The overall idea is very simple, and is likely to work in some basic cases where the property has a natural \\\"additive\\\" nature, e.g., logP that you can improve by adding functional groups. This is recognized but not really controlled in the paper except for selecting for input similarity before retranslating. Moreover, I don't think that you really ever want to just maximize logP for any drug so this particular task is a bit artificial in the first place. Other properties are not additive in the same sense, e.g., drug likeness or QED, and the method doesn't appear to improve it (though, to be fair, there may be a ceiling effect for QED in particular). \\n\\nOne of the main ways that one can control the final output in the iterated translation process is by judiciously selecting or ranking candidates for retranslation. The authors use essentially the score from the model itself, similarity to input, and some basic chemistry metrics to do that. Wouldn't it be much better to train a separate ranking method to guide the iterative steps? \\n\\nThe empirical results are clean though not convincing (see the logP discussion above). Additional properties should be included to demonstrate that the method might actually have some practical value, i.e., generalize beyond additive logP. Multi-property optimization would be one possible setting since de novo models have a hard time to reach intersections of different property constraints. Abstractly, one could imagine that an iterative, successively guided approach could work well. The proposed approach in the paper is somewhat undeveloped. It merely uses a translation model for the primary property, and ranks candidates by the other. This is unlikely to get you to any challenging intersections. Also, since logP was always one of the properties effectiveness in this regard is not really demonstrated either. A slightly more sophisticated approach might use relaxed, separately trained ranking models in intermediate steps, successively tightened towards the intersection as the iteration progresses. E.g.,\\n\\nBrookes et al., Design by adaptive sampling, arXiv:1810.03714\\n\\nThe paper is clearly written but for such a simple method one would need really convincing results and experiments. Maybe better as a workshop submission?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors frame molecule optimization as a sequence-to-sequence problem where a source molecule is translated to a target molecule with improved properties. The authors extend existing methods for improving molecules by applying them recursively over multiple rounds, and show that it is beneficial for optimizing logP but not QED. An advantage over existing methods is that the trajectory of optimized molecules is interpretable. Altogether, I find the paper borderline: it is clearly written but the methodological contribution is incremental, some citations to related work missing, and some parts of the results section are weak. Detailed comments below.\\n\\nMajor comment\\n============\\n1. Framing optimizing as a sequence to sequence problem is not new. As described in the related work section, the BBRT is closely related to Jin et al. However, it is not clearly described what the major improvement over Jin et al is. Please clarify \\u2018their inference method restricts the framework\\u2019s application to more general problems.\\u2019. The method is also closely related to Zou et al (https://www.nature.com/articles/s41598-019-47148-x) and Mueller et al (http://proceedings.mlr.press/v70/mueller17a.html), which are not cited in the text. Zou et al used RL to learn to optimize molecules by mutating existing molecules. Mueller et al used Seq2Seq to optimize the sentiment of sentences. Please cite these papers and discuss why BBRT is better.\\n\\n2. Please compare to ChemBO (http://arxiv.org/abs/1908.01425). The current baselines are one-shot in that they are proposing a batch of molecules once without using the acquired target function label to propose subsequent batches. ChemBO optimizes a target function such as logP over multiple rounds similar to recursive BBRT approach, and should therefore be included as a baseline. Another suitable baseline would be performing BO in the latent space by applying Gomez et Bombarelli recursively (embed molecule; optimize GP in embedding space; decode molecule; iterate).\\n\\n3. The method names (Graph2Graph, Seq2Seq, R-Graph2Graph, R-Seq2Seq, BBRT-JTNN, \\u2026) are not defined in section 5.1-baselines, and used inconsistently. Is JTNN the same as Graph2Graph and does BBRT mean recursive (R-)? This makes is hard to follow the results section.\\n\\n4. Section 5.1: How does the performance depends on the initial seed of sequences? How sensitive is it i) to the choice of the diversity cutoff, and ii) to the target value of the initial molecules? \\n\\n5. Fig 4a, right: Is is expected that logP increases fastest when using it as a scoring function. Please show instead QED vs. the number of iterations. QED combines several molecular properties, including logP, and is therefore more suited for quantifying drug likeness.\\n\\n6. \\u2018Differences between logP and QED.\\u2019 I do not understand this section. Please clarify the goal of an explorative vs interpolative task? Are molecules with the highest QED in the training dataset? Motivate why BBRT does not achieve a higher QED in table 1?\\n\\n7. \\u2018The distinction in the vocabulary highlights the usefulness \\u2026\\u2019. Is the conclusion that representing molecules as sequences is better than representing them as graphs? 
This would contradict several recent papers on graph-based representations. You are only comparing the top molecules. Is there a significant difference in the complexity between the top 100 (for example) molecules?\\n\\n8. Section 5.3 is verbose and can be shortened to a few sentences saying that applying edits to molecules recursively makes the model interpretable. What do traces look like when logP is used as a selection criterion? What does the trace of the best molecule shown in figure 3 look like? What is the average edit distance between molecules? Are intermediate molecules valid? Are transitions plausible?\\n\\n9. Section 5.4: You are optimizing a single objective (e.g. logP) while reporting in parallel a second objective (e.g. QED). This is not multi-objective optimization, where multiple objectives are optimized in parallel. Optimizing a single objective while reporting a second objective can also be done with methods other than BBRT. Please clarify the take-away message of this paragraph or remove it.\\n\\n\\nMinor comments\\n=============\\n10. Introduction: \\u2018discrete and unstructured\\u2019. Why unstructured? I would say that molecules are structured--they must follow a certain grammar to be valid.\\n\\n11. Introduction: \\u2018treating inference as a first class citizen\\u2019 is unclear since \\u2018inference\\u2019 is undefined. Either remove this sentence or clarify.\\n\\n12. Please discuss that BBRT is limited by the need of a labeled dataset for constructing training pairs.\\n\\n13. Section 5.1, \\u2018Similar computational budget\\u2019. How did you quantify the computation budget?\\n\\n14. Section 5.1, \\u2018In Fig 2, we report\\u2019. Do you mean Fig 3? Same as with \\u2018Fig 3\\u2019 in the following paragraph.\"}"
]
} |
B1eiJyrtDB | Improved Generalization Bound of Permutation Invariant Deep Neural Networks | [
"Akiyoshi Sannai",
"Masaaki Imaizumi"
] | We theoretically prove that the permutation invariance property of deep neural networks largely improves their generalization performance. Learning problems with data that are invariant to permutations are frequently observed in various applications, for example, point cloud data and graph neural networks. Numerous methodologies have been developed and they achieve great performance; however, understanding the mechanism behind this performance is still an open problem. In this paper, we derive a theoretical generalization bound for invariant deep neural networks with a ReLU activation to clarify their mechanism. Consequently, our bound shows that the main term of their generalization gap is improved by a factor of $\sqrt{n!}$, where $n$ is the number of permuted coordinates of the data. Moreover, we prove that the approximation power of invariant deep neural networks can achieve an optimal rate, though the networks are restricted to be invariant. To achieve these results, we develop several new proof techniques, such as a correspondence with a fundamental domain and a scale-sensitive metric entropy. | [
"Deep Neural Network",
"Invariance",
"Symmetry",
"Group",
"Generalization"
] | Reject | https://openreview.net/pdf?id=B1eiJyrtDB | https://openreview.net/forum?id=B1eiJyrtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vfONePFDJh",
"HklGNpG2jH",
"BJgTNhz3ir",
"rkxB0sz2iH",
"Hkl4hU95jr",
"ryxk9laYjH",
"Hyg8wsedor",
"rkeUodgdsr",
"BygmiLgujB",
"HklWqgGrcr",
"rJx0u1p19B",
"BkeJKjc0Kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724356,
1573821737647,
1573821492936,
1573821388732,
1573721771943,
1573666951264,
1573550941749,
1573550238282,
1573549723497,
1572311176765,
1571962742264,
1571887990554
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1481/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1481/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1481/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1481/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1481/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This work proves a generalization bound for permutation invariant neural networks (with ReLU activations). While it appears the proof is technically sound and the exact result is novel, reviewers did not feel that the proof significantly improves our understanding of model generalization relative to prior work. Because of this, the work is too incremental in its current form.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We add discussion about $\\\\varepsilon^p$.\", \"comment\": \"We appreciate your critical comment.\\nIn our updated paper, we add description about the point after Theorem 2 in our paper. For summary, the increasing speed of $n! \\\\varepsilon^p=n! \\\\varepsilon^{nD}$ in terms of $n$ is sufficiently fast for any $\\\\varepsilon$ and $D$. We appreciate if you check the point.\"}",
"{\"title\": \"We add the comparison with the paper (Sokolic et al., ICML 2017) in our paper.\", \"comment\": \"We clarify the difference between our paper and okolic et al., ICML 2017 in Section in the updated version of our paper. We are glad if you check the paragraph.\"}",
"{\"title\": \"We updated the submitted paper.\", \"comment\": [\"Updated points are as follow:\", \"Add description to show the technical novelty and intuition of our paper. (Section 5 and 6)\", \"We cite the paper Sokolic+ (2017) and discuss differences between it and our paper. (Section 5)\", \"We show that the term $\\\\varepsilon^p$ does not provide a problem with a large $n$. (Section 3)\", \"Correct several sentences and typos.\", \"We omit the mistakenly loaded package and modify the format as following the template.\"]}",
"{\"title\": \"Thank you for your response.\", \"comment\": \"Thank you for agreeing on the novelty of our work.\\n\\nAbout significance, we are confident that it is not easy to develop proof to derive the bound improved by n!. Technically speaking, to obtain the improved bound, we have to find n! subsets of functions WITHOUT overlapping with each other. To the aim, we introduce the notion of the fundamental domain and prove that a volume of overlapping has measure zero (Specifically, Lemma 1 in our paper). Without our techniques, the improvement by n! is a folklore, but not theoretical analysis. Hence, we believe that it is significant to develop such the technique and show the improved bound.\\nIf you are not agree with the importance of our achievement, please give us references which show the improvement rigorously.\"}",
"{\"title\": \"Thanks for your reponse\", \"comment\": \"I agree with you that generalization of invariant DNNs are not studied before. However, my main concern is the significance of the work. Basically, covering numbers count the number of different functions in the hypothesis class where the notion of different depends on some metric. Now, if there the input has invariance, one can take advantage of that and reduce this total number of different functions by n! Even though this very specific problem has not been studied before, it is not clear to me that this contribution is significant enough to be accepted at ICLR.\"}",
"{\"title\": \"Thank you for your accurate comment.\", \"comment\": \"Thank you for your accurate comment. Especially, we would appreciate your evaluation for our technical contributions.\\n\\nWe also thank you for the introduction of the previous research[a]. We confirmed that their main result is very similar to ours. The superiority of our results is as follows. At first, we construct explicit invariant deep neural networks, which guarantee practical and useful methods. One of them is a new one that can achieve the same objectives as DeepSets (Zaheer 2018). Since the paper [a] is written with an abstract framework, our paper can provide useful knowledge. Secondly, our analysis is not limited to classification but can be applied to general learning methods including regression. Thirdly, our results provide a more specific analysis of permutation invariant networks, which can be used for future specific expansion and analysis.\"}",
"{\"title\": \"Could you give me some evidence or references?\", \"comment\": \"Thank you for your comment.\\n\\nAs mentioned in our paper, we developed several novel techniques as follow: (i) we prove a correspondence between invariant DNNs and DNNs on the fundamental domain, and (ii) we derive a covering number for a functional space which is sensitive to a volume of the domain of functions. To the best of our knowledge, such techniques are not commonly used in the analysis of deep neural networks.\\n\\nCould you give me some pieces of evidence or references which support your opinion that says our analysis follows very basis arguments? As your comment does not provide such clear evidence, we cannot find a way to discuss it with you.\\n\\nAbout the format of our paper, we mistakenly load the \\\"fullpage\\\" package, hence the margin of our paper is changed. About the point, we have not refutation and will modify it.\"}",
"{\"title\": \"Though the rate of $\\\\varepsilon$ is critical, the generalization bound can be tight with large $n$.\", \"comment\": \"Thank you for your critical opinion.\\n\\nAs you mentioned, the order of $\\\\varepsilon$ is very important, thus we will add the discussion. We expect that our generalization bound may get loose when $n$ is not sufficiently large. In contrast, when $n$ is reasonably large, our bound becomes tight since $n!$ increases rapidly rather than $\\\\varepsilon^p$.\\n\\nAbout the clarity, we will modify our description and correct the typos.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a derivation of a generalization bound for neural networks designed specifically to deal with permutation invariant data (such as point clouds). The heart of the contribution is that the bound includes a 1/n! (i.e. 1 / (n-factorial)) factor to the major term, where n is the number of permutable elements there are in a data example (think: number of points in a point cloud). This term goes some way towards making the bound tight.\\n\\nThe 1/n! factor in the bound may be an interesting development but the novelty does appear to be limited. Also, the authors fail to discuss that -- as part of that same term -- there is a factor: (1 / (epsilon^p)), where p is the dimension of the input and epsilon is a small error term. As p is proportional to n, and epsilon is quite small, this term could well dominate the factorial in many practical settings. A discussion of the relation between these terms is appropriate and seems to be missing.\", \"clarity\": \"In general the paper is fairly well written, but there are multiple instances of missing articles and strange idiom violations (eg. p. 4, remark 1: \\\"such the bound\\\" versus \\\"such a bound\\\")\\n\\nMore seriously, the proof of Lemma 1 was quite hard to follow (esp. the second paragraph). I would suggest putting less emphasis on the relatively straightforward construction of the sorting mechanism in Propositions 2 and 3, and use the space to more clearly detail the proof of Lemma 1, which is, after all, the heart of the contribution.\\n\\nI also found the proof of proposition 4 too confusing to easily follow. What is the interpretation of the indices (1, ..., K) on the functions?\\n\\nFinally, I would have liked to see some interpretation of the findings in a discussion section (or in an extended conclusion).\", \"minor_issues\": [\"First sentence of the abstract is difficult to parse and does not seem like an accurate assessment of the contribution of the paper.\", \"Paragraph 2 of the introduction presents a sequence of argument whose logic seems inconsistent to me. There is a drift from a discussion of generalization of neural networks to a mention of work on the very distinct topic of the representational capacity of neural networks (i.e. universal approximation property of neural networks). The linking text \\\"To tackle the quesiton, ...\\\" is not appropriate.\", \"Unlike Example 1, Example 2 (p.3) is not helpful in motivating the permutation invariant neural networks. The definition makes direct reference to Proposition 2 that will not be introduced for another 3 pages.\", \"In Sec. 4.1, it seems like a phi symbol is used when I believe a null symbol was intended\", \"Proposition 3: \\\"max( z_1, z_1 )\\\" should be \\\"max( z_1, z_2 )\\\" with the adjustment carrying through to the other side of the equals.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides generalization bounds for permutation invariant neural networks where the learning problem is invariant to the permutation of input data.\\n\\nUnfortunately, the technical value of the content and its novelty is very limited since the proof reduces to a very basic argument that counts invariances (which is simply n! where n is the number of invariant dimensions) and uses a standard approach to give a generalization bound. Therefore, I don't think the results does not help us with better understanding of permutation invariant neural networks. \\n\\nUnfortunately, the paper has several typos and mistakes as well. Another non-technical issue is that apparently authors have removed the ICLR format and reduced margin to fit the paper in 10 pages which is against the spirit of page limit.\\n\\n***********************************\", \"after_author_rebuttals\": \"After reading authors' response and reading the proofs, I realize that the formal proof is not trivial and requires more work that I assumed. However, I do not understand how this work can improve our understanding of permutation invariant networks. Therefore, I think the contributions are not significant enough for publication and my evaluation remains the same.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper derives a generalization bound for permutation invariant networks. The main idea is to prove that the bound is inversely proportional to the square-root of the number of possible permutations to the input. The key result is Theorem 3 that bounds the covering number of a neural network (defined under an approximation control bound, Thm 4) using the number of permutations. The paper proves the theorem by showing that the space of input permutations can reduced to group actions over a fundamental domain, and deriving a bound for the covering number of the fundamental domain (Lemma 1), which is then extended to derive the same for the neural network setting. For the permutation invariance setting, the fundamental domain is obtained via the sorting operator.\", \"pros\": \"1. The paper appears to be mathematically rigorous, and at the same time, is straightforward to follow, with useful intuitions provided whenever required. \\n2. The provided theoretical result perhaps extends the work on universal approximation theorem for permutation invariant networks in Sennai et al, and Maron et al., 2019. Further, the generalization bound for permutation invariance is new to my knowledge.\", \"cons\": \"1. While, the proof appears to be novel for permutation invariance per se, however I do not think the main findings in this paper or the proof approach are sufficiently novel. For example, generalization bounds under invariances have been explored previously, perhaps the most related to this paper is [a] below that already shows (in a similar vein as this paper) that the bound decreases proportional to 1/\\\\sqrt(T), where T is the number of invariances used. While, that work uses affine transformations of the input from a base space for the invariances (which this paper calls fundamental domain), the current paper uses permutation invariance and thus gets the bound proportional to 1/sqrt(n!). In the context of this prior work, the contribution of this paper appears incremental. The paper should cite this work and contrast against the results and proof methods in it.\\n\\n[a] Generalization Error of Invariant Classifiers, Sokolic et al., ICML 2017.\\n\\n2. The paper has several typos and grammatical errors through out, which are easily fixable though!\\n\\nOverall, this paper is technically rigorous, and novel in its very specific context of deriving the generalization bounds for permutation invariant networks. However, in the broader context of invariances in general and their bounds, the contribution appears to be marginal.\"}"
]
} |
B1gskyStwr | Frequency-based Search-control in Dyna | [
"Yangchen Pan",
"Jincheng Mei",
"Amir-massoud Farahmand"
] | Model-based reinforcement learning has been empirically demonstrated as a successful strategy to improve sample efficiency. In particular, Dyna is an elegant model-based architecture integrating learning and planning that provides great flexibility in using a model. One of the most important components in Dyna is called search-control, which refers to the process of generating state or state-action pairs from which we query the model to acquire simulated experiences. Search-control is critical in improving learning efficiency. In this work, we propose a simple and novel search-control strategy by searching high frequency regions of the value function. Our main intuition is built on the Shannon sampling theorem from signal processing, which indicates that a high frequency signal requires more samples to reconstruct. We empirically show that a high frequency function is more difficult to approximate. This suggests a search-control strategy: we should use states from high frequency regions of the value function to query the model to acquire more samples. We develop a simple strategy to locally measure the frequency of a function by gradient and Hessian norms, and provide theoretical justification for this approach. We then apply our strategy to search-control in Dyna, and conduct experiments to show its properties and effectiveness on benchmark domains. | [
"Model-based reinforcement learning",
"search-control",
"Dyna",
"frequency of a signal"
] | Accept (Poster) | https://openreview.net/pdf?id=B1gskyStwr | https://openreview.net/forum?id=B1gskyStwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JYfyp-RyP",
"HJlqzZ8voS",
"SJej20SDsS",
"rkl4BYTBiB",
"r1eEdLMyjH",
"Byx_XzzJir",
"HJglemDstB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724326,
1573507346177,
1573506738523,
1573407036420,
1572968043859,
1572966944227,
1571676904476
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1480/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1480/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1480/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1480/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The reviewers are unanimous in their evaluation of this paper, and I concur.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Address all main issues\", \"comment\": \"We appreciate your insightful and constructive feedback. We hope our response addresses the main weaknesses as you pointed out. We uploaded the most recent version of our paper.\\n\\nWhy the theorem is natural to extend to second order derivative? \\nIn the updated paper, we formally wrote down the theoretical result of second order connection for clarification. In previous version, we say it is natural because that the reasoning line of deriving second order connection is exactly the same as we do for the first-order derivation. For your convenience, here we summarize the key steps: 1) use a local Fourier transform to express a function locally (i.e. within an unit ball); 2) calculate the gradient/Hessian norm based on this local Fourier transform; 3) take integration over the unit ball of the gradient/Hessian norm to build the connection with the local frequency distribution $\\\\pi_{\\\\hat{f}}$ and function energy. In Appendix A.2, after the definition of f_x(y), we take gradient one more time to acquire one row of the Hessian matrix. Then we still use the complex conjugate for the square of Frobenius norm of the Hessian matrix, which can be calculated by summing over the square of l_2 of all the rows in the Hessian. We use l2 norm for vector and Frobenius norm for matrix.\\n\\nThe radius does not have to be one, but assuming it to be one does not lose generality. Since we cannot talk about the frequency of a single point, the key idea is to characterize the frequency of a function locally and we use a ball (hypersphere) to define the local domain. \\n\\nThe suggestion of using some sort of sample average is very interesting. In fact, the idea of adding perturbation to s is implemented in both the previous work by Pan et al. and in our current work, please see Appendix A.6 eq(14)(15). With this noise added to each gradient step, then we can connect the hill climbing to Langevin dynamics, then the points along the gradient ascent trajectory can be roughly thought of as Gibbs distribution where the density is proportional to g(s), and hence it is justifiable that our search-control queue provides states with higher density in the high frequency region. For more details, we refer readers to the section 4.2 in the paper Hill Climbing on Value Estimates for Search-control in Dyna by Pan et al.\\n\\nNotice that sampling using only rule (8)(b) (rule (6) in the previous version) is the algorithm from the work by Pan et al, which is indeed included in our paper and is marked as Dyna-Value. If you actually mean only using rule (8)(a) (rule (7) in the previous version), we explain the choice in 2nd paragraph in section 4. Theorem 1 tells us that the frequency is connected with both the function magnitude and the gradient norm. We want to avoid finding those states with very small values which are unlikely to visit under an optimal policy. As a result, we choose initial states which have high value for gradient ascent, then the gradient ascent should help further move to higher frequency region.\"}",
"{\"title\": \"Address all main issues\", \"comment\": \"We appreciate your insightful and constructive feedback. We uploaded the most recent version of our paper. We hope we address your concern well.\", \"the_sinus_experiment\": \"Mentioning \\u201clinear\\u201d was a slip. We indeed use a neural network, as explained in detail in the 2rd paragraph, (Appendix) A.6 (A.5 in the previous version).\\n\\nComputational cost. We discuss two solutions here. The first one is exactly as you mentioned. We show some results of only using first-order information in the updated paper Appendix A.5. We write g(s) as the sum of the gradient norm and hessian norm for generality and for the purpose of better matching our theoretical result. The gradient w.r.t. state of gradient norm can be calculated as: $\\\\frac{\\\\partial ||\\\\nabla_s V(s)||^2}{\\\\partial s} = \\\\frac{\\\\partial^2 V(s)}{\\\\partial s^2} \\\\frac{\\\\partial V(s)}{\\\\partial s} = H_V(s) \\\\nabla_s V(s)$. Note that the Hessian vector product can be calculated by backpropogating twice, known as an efficient algorithm. We recommend \\\"Fast Exact Multiplication by the Hessian\\\", Pearlmutter\\u201993 for reference.\\n\\nThe second solution should be more interesting. Note that our hill climbing strategy can be used w.r.t feature. That is, we can do gradient ascent on latent feature instead of the observation variables. Enforcing the action value to be some simpler function in the feature can greatly reduce the computational cost. This is particularly useful for the purpose of either handling large observation space (i.e. an image) or handling partial observability. There are several existed works studying feature-to-feature models. Once we acquire feature vectors through search-control, those models can be used for planning update. We believe this is a promising future direction. \\n\\nAbout squaring gradient norm. In general, Hessian norm and squaring gradient norm are not equivalent. Figure 2 in section 3.2 should throw some insight about this. The mathematical derivation of building the connection between Hessian norm and local frequency resembles that between gradient norm and local frequency. We added the second order theoretical connection into the updated paper. \\n\\nWe would like to briefly discuss existed works of exploring uncertain regions. To our best knowledge, in those works, the uncertainties are characterized w.r.t. the learning parameters (not the state variables of the value function). Concretely, given V_\\\\theta(x), the previous works concern about uncertainty of parameters \\\\theta, while we concern about where the training $x$s should come from based on the nature of true V (this is independent of \\\\theta, though we have to use \\\\theta in implementation as the true V is unknown). Please let us know if we miss any reference, we would be happy to add those.\"}",
"{\"title\": \"Address all main issues\", \"comment\": \"We thank you for reviewing our paper within such a short notice and providing us with very specific and valuable feedback. Our most recent version of paper is uploaded. We believe our paper is well-written and quite clear. We hope that if your concerns are addressed, you update your score accordingly.\", \"theorem_1\": \"The theorem is correct. Eq. (3) (in the new version, Eq. (5)) holds in your example.\\nThe Fourier transform of a constant function is Dirac\\u2019s delta function at frequency zero.\\nBecause of the property of the Dirac\\u2019s delta function that \\n$\\\\int f(x) \\\\delta(x - x0) dx = f(x0)$, we see that the second integral in the RHS is proportional to $\\\\int \\\\delta(k - 0) ||k||^2 dk = ||0|| = 0$. Therefore, both sides are zero.\", \"the_sinus_experiment\": \"Mentioning \\u201clinear\\u201d was a slip. We indeed use a neural network, as explained in detail in the 2rd paragraph, (Appendix) A.6 (A.5 in the previous version).\\n\\nFigure 2. We updated that figure and report the concrete proportions of points fall in high frequency region. Please see Empirical Demonstration, Sec 3.2. In fact, (c) has higher density around the spikes and we quantified this in the updated paper. Around the spikes (which have large second-order derivative magnitude), the function changes sharply and it is more difficult to generalize and hence we need more samples around those parts. The experiment is easy to reproduce according to details in the 2nd paragraph, A.6. We also want to mention that the main purpose of that figure is not to compare (b) and (c); instead, we simply want to show that the distribution of using either first order and second order prioritization can lead to high density on the frequency region. \\n\\nRegarding the objective g. We added them in A.5. But we want to clarify that it does not matter which one would dominate or whether using one of them would be better, since we characterize local frequency through both first-order and second-order derivatives, so the g(s) formulation is general and matches with our theoretical result (notice that in the updated paper, we formally established the second-order connection). An even more general form to write g(s) is some weighted sum of the first-order and second-order terms (then your suggestion is a special case). However, we do not need such complications in our experiments. Your phrase \\u201cone of them would dominate\\u201d is, in fact, a benefit. As you mentioned, the two terms can have vastly different scales. Regions which have low (or even zero) gradient magnitude may have high Hessian magnitude, and vice versa. Hence, using the combination can help the hill climbing process in case that one of the term vanished at some point. \\n\\nPrioritizedER baseline. Sorry for missing the detail in the paper. We use the proportional variant with sum tree data structure. We use prioritized ER without importance ratio bias correction but half of mini-batch samples are uniformly sampled from ER as a strategy for bias correction. This matches with our own algorithm. We add these details into the paper in A.6. In fact, our algorithm can easily outperform ER and PrioritizedER. As we increase the number of planning steps, most samples from the ER buffer would be updated sufficiently, hence PrioritizedER or ER should show limited improvement. 
In contrast, our algorithm can utilize the generalization power of the value function to acquire unvisited states during search-control, hence it can benefit from the larger number of planning updates. Figure 5 from https://arxiv.org/pdf/1906.07791.pdf may be a good visualization for the difference between ER and search-control queue.\", \"out_of_boundary\": \"This refers to being outside the valid subset of the states. In many problems, we know that states are within a proper subset of R^d, e.g., [0,1]^d. If the hill climbing procedure generates a state outside that subset, we restart the hill climbing process. This is the same as discussed by Pan et al., \\u201cHill Climbing on Value Estimates for Search-control in Dyna,\\u201d IJCAI, 2019.\\n\\nAbout \\u201cvariance of the evaluation curve\\u201d in section 5.2. Please notice that our claim about \\u201cthe variance of the evaluation curve\\u201d refers to Figure 4(b) and not Figure 3(a). In the former, the shaded areas are noticeably different. In Figure 3(a), we do not see how our algorithm appears to have high instability in early learning curve, and we are unsure how you would define \\u201cearly learning\\u201d and \\u201chigh instability\\u201d. In fact, in term of variance, the behaviour of our algorithm in both Figure 4(b) and Figure 3(a) should be considered as consistent: it has a lower standard error than other competitors. \\n\\nSection 5.2, points in bottleneck areas. We add concrete counts in the figure caption.\\n\\nFigure 4(b). The wide error-bars for Dyna-Value is exactly what we expected. Without sufficient samples from the bottleneck areas, the agent is likely fail to pass the holes. Some of the runs for Dyna-Value failed (not a single run).\"}",
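[Editor's note: for readers checking the constant-function case discussed in the theorem_1 reply above, here is the same verification in display form. Proportionality constants are omitted, as in the reply itself.]

```latex
% f(x) = K: the local Fourier transform is a Dirac delta at zero frequency,
% \hat{f}_x(k) \propto K \, \delta(k), so the frequency-weighted term vanishes:
\int \delta(k - 0)\, \|k\|^2 \, dk = \|0\|^2 = 0
\quad\Longrightarrow\quad
\text{both sides of Eq. (3)/(5) equal } 0 .
```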
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"Disclaimer:\\nThis is an emergency review that I got assigned on Nov 5th, so I unfortunately had only a few hours to assess the paper.\", \"summary\": \"This paper proposes a new mechanism for \\u201csearch control\\u201d, that is, choosing the planning-start-states from which to roll out the model, in the context of Dyna-style updates. The key idea is to favour updates to states where the the value functions changes a lot, because those need more information to become accurate.\\nThe topic is very interesting, and not studied enough, the proposed method looks novel, and the experimental results look promising. However, the clarity of the paper is not great, the theoretical result (Theorem 1) seems to be incorrect, the narrative from Claude Shannon to Joseph Fourier to gradient norms to practical choices is somewhat confusing, the empirical results are not really conclusive, and despite a large appendix there are missing implementation details. I think this paper is currently one or two major revisions away from the acceptance threshold.\", \"comments\": \"1. Theorem 1, equation (3): consider a constant function f(x) = K. Then its derivative is zero, so the left-hand side is zero, yet the right-hand side is the product of two strictly positive factors (each of them proportional to K^2), so how can the equation hold?\\n2. Sine experiment: How can you reasonably do *linear* regression onto a sine function? What are your input features?\\n3. Figure 2 is odd: panels (b) and (c) look very similar in how they bias the sampling toward the left side, yet the performance difference (a) is very stark, how come? Also, how can there be such a high contrast in (b)/(c) if 60% of all samples are chosen uniformly, as stated in the text?\\n4. The paper has numerous grammatical mistakes, to the extent it becomes difficult to read. I know this can happen in deadline mode, but please revise the draft thoroughly (special call-out to the dozens of missing definite/indefinite articles and plural forms). Also, use \\u201c\\\\citep\\u201d where appropriate.\\n5. The objective g as the sum of a gradient norm and a Hessian norm seems odd, as these terms have completely different scales, so usually one of them will dominate, can you explain and motivate this further, and compare empirically to the two terms in isolation?\\n7. For the prioritizedER baseline, what variant and hyper-parameters are you using?\\n8. Please describe \\u201cout of boundary\\u201d mentioned in Algorithm 1.\\n9. Section 5.2 states that the \\u201cvariance of the evaluation curve\\u201d is smaller, indicating robustness, yet Figure 3(a) appears to have high instability in the (early) learning curve?\\n10. Section 5.2 states that Figure 5 should show that the bottleneck areas are sampled more densely, but that\\u2019s a dubious claim. Please quantify the claim or drop it.\\n11. Figure 4(b) has oddly wide error-bars for DQN-Value, which looks suspiciously like a single failed run. Can you add a plot to the appendix with median/quantiles instead of mean/std statistics?\\n\\n\\n--------------\", \"update_nov_17\": \"most of my concerns have been addressed, and I have thus increased my rating.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a new way to select states from which do do transitions in dyna algorithm (which trains policy from model experience as if it was a real experience). It proposes to look for states where frequency of value function as a function of a real valued state is large, because these are the states where the function is harder to approximate. The paper also shows that such frequency is large where the gradient of the function is large in magnitude which allows for finding such states in practice. In more detail, similar to previous algorithms, this algorithm keeps both an experience replay buffer as well as another buffer of states (search-control queue) and uses a hill climbing strategy to find states with both higher frequency and higher value. The paper tests the algorithm on toy domains - the mountain car and a maze with doors.\\n\\nThe idea of using the magnitude of the gradient as an exploration signal is not new - \\u201cgo to places where agent learns more\\u201d. In this paper, such signal is not used as a reward but for finding states from which to do training updates. It is also nice that the paper provides a good relation (with explanation) between this signal and the frequency of the value function. The paper is clearly written. One drawback is that the main computations are only tractable in toy domains - it would be good if they discussed how to use this with general neural model with large state spaces (e.g. states obtained with an RNN).\", \"detailed_comments\": [\"In the abstract it says \\u201c\\u2026searching high frequency region of value function\\u201d. At this point it is not clear what function we are considering - what is on the axes? - a time, state, value? (value on y axis and a real valued state on the x as it turns out later).\", \"Same at line end-7 on page 2\", \"End of section 2: and states with high value (as you do later).\", \"A demonstration experiment: Why do you fit linear regression to a sin function? Why not at least one layer NN?\", \"Page 5 top: You reason that Hessian learns faster - why not just squaring gradient norm?\", \"Section 4: Hessian is intractable for general neural network unless you have a toy domain - does it work with just the gradient? This is an important point if this is to be scaled. May be you can also discuss how to compute the gradient of the norm of the gradient.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper basically built upon [1]. The authors propose to do sampling in the high-frequency domain to increase the sample efficiency. They first argue that the high-frequency part of the function is hard to approximate (i.e., needs more sample points) in section 3.1. They argue that the gradient and Hessian can be used to identify the high-frequency region. And then they propose to use g(x)=||gradient||+||Hessian || as the sampling metric as illustrated in Algorithm 1. To be noticed that, they actually hybrid the proposed metric (6) and the value-based metric (7, proposed in [1]) in their algorithm.\", \"strength\": \"Compared to [1], their experiment environment seems more complicated (MazeGridWorld vs. GridWorld). \\nFigure 3 shows that their method converges faster than Dyna-Value.\\nFigure 5 is very interesting. It shows that their method concentrates more on the important region of the function.\", \"weakness\": \"\", \"in_footnote_3\": \"I don't see why such an extension is natural.\\nIn theorem 1, why the radius of the local region has to be?\\nTheorem1 only justifies the average (expectation) of gradient norm is related to the frequency. The proposed metric $g$, however, is evaluated on a single sample point. So I think if adding some perturbations to $s$ (and then take the average) when evaluating $g$ will be helpful.\\nThe authors only evaluate their algorithm in one environment, MazeGridWorld. \\nI would like to see the experiment results of using only (6) as the sampling rule. \\nWhat kind of norm are you using? (||gradient||, ||hessian||)\\nWhy $g$ is the combination of gradient norm and hessian norm? What will be the performance of using only gradient or hessian?\\nFigure 4(b), DQN -> Dyna\", \"reference\": \"[1] Hill Climbing on Value Estimates for Search-control in Dyna\"}"
]
} |
SklcyJBtvB | Off-policy Bandits with Deficient Support | [
"Noveen Sachdeva",
"Yi Su",
"Thorsten Joachims"
] | Off-policy training of contextual-bandit policies is attractive in online systems (e.g. search, recommendation, ad placement), since it enables the reuse of large amounts of log data from the production system. State-of-the-art methods for off-policy learning, however, are based on inverse propensity score (IPS) weighting, which requires that the logging policy chooses all actions with non-zero probability for any context (i.e., full support). In real-world systems, this condition is often violated, and we show that existing off-policy learning methods based on IPS weighting can fail catastrophically. We therefore develop new off-policy contextual-bandit methods that can controllably and robustly learn even when the logging policy has deficient support. To this effect, we explore three approaches that provide various guarantees for safe learning despite the inherent limitations of support-deficient data: restricting the action space, reward extrapolation, and restricting the policy space. We analyze the statistical and computational properties of these three approaches, and empirically evaluate their effectiveness in a series of experiments. We find that controlling the policy space is computationally efficient and robustly leads to accurate policies. | [
"Recommender System",
"Search Engine",
"Counterfactual Learning"
] | Reject | https://openreview.net/pdf?id=SklcyJBtvB | https://openreview.net/forum?id=SklcyJBtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jSwukAaPzq",
"S1elCk_3iB",
"rkgWKUb5oB",
"SyguLBRFsS",
"r1GLV4Ctjr",
"ByxCFQAKsS",
"Bkla7_u6KB",
"H1glUK7pFr",
"BygT4d5oYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724297,
1573842888390,
1573684856567,
1573672272361,
1573671982378,
1573671814413,
1571813413398,
1571793223662,
1571690549008
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1479/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1479/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1479/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1479/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1479/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper tackles the problem of learning off-policy in the contextual bandit problem, more specifically when the available data is deficient (in the sense that it does not allow to build reasonable counterfactual estimators). To address this, the authors introduce three strategies: 1) restricting the action space; 2) imputing missing rewards when lacking data; 3) restricting the policy space to policies with \\\"enough\\\" data. All three approaches are analyzed (statistical and computational properties) and evaluated empirically. Restricting the policy space appears to be particularly effective in practice.\\n\\nAlthough the problem being solved is very relevant, it is not clear how this work is positioned with respect to approaches solving similar problems in RL. For example, Batch constrained Q-learning ([1]) restricts action space, while Bootstrapping Error Accumulation ([2]) and SPIBB ([3]) restrict the policy class in batch RL. A comparison with these techniques in the contextual bandit settings, in addition to recent state-of-the-art off-policy bandit approaches (Liu et al. (2019), Xie et al. (2019)) is lacking. Moreover, given the newly added results (DR method by Tang et al. (2019)), it is not clear how the proposed approach improves over existing techniques. This should be clarified. I therefore recommend to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the response. On the narrow question of DR vs. policy restriction, we do maintain that policy restriction is preferable, since it does not require training and optimizing a regression model.\\n\\nStepping back and taking a broader view, the main contribution of the paper is not any particular method. Instead, it is the first paper to comprehensively investigate the problem of support deficiency in the contextual-bandit setting, which all reviewers agree is an important and pervasive problem in many real-world systems. In particular, it articulates that existing approaches can be structured into action restriction, regression imputation, and policy restriction, characterizes these three approaches theoretically and empirically, and then provides tangible recommendations that are far from obvious and valuable in practice. We suggest to evaluate the contribution of the paper along this broader view.\"}",
"{\"title\": \"new simulation\", \"comment\": \"Thank the authors for adding the DR method in Tang et al. (2019) to Figure 1. It looks that the proposed three methods could not clearly beat this DR method in terms of accuracy. I understand that the DR is model-based and the proposed one is model-free. But it is not convincing if the newly proposed method could not beat the existing method in experiments.\"}",
"{\"title\": \"Official Comment to Review #3\", \"comment\": \"We thank the reviewer to point out the related work in sequential RL. We discuss the this work in more detail below, and will add a discussion to the final version of the paper. However, it is important be clear that these works address problems that primarily affect the sequential setting, not the contextual-bandit setting we study. In particular, the key problem of off-policy RL lies in the fact that importance-sampling based estimators suffers variance that grows exponentially with the horizon, making variance control (not bias control due to deficient support) the key problem. This is fundamentally different for one-step contextual bandits, where importance-sampling based estimator are the most competitive ones -- as can be clearly seen in our experiments. Meanwhile, Sutton and Barto (2018) identify a deadly triad of function approximation, bootstrapping, and off-policy learning, which emphasizes that function approximation equipped with Q-learning can even diverge in the off-policy learning setting, which makes most off-policy RL works tend to be very conservative in extrapolation error since the severe error propagation issue. In our work, we focus on the contextual bandit problem, which has many applications in recommender systems, ad placement and online search. We addresses different approaches to handle the important issue of support deficiency and empirically show the strong performance of safe learning by restricting policy class. The algorithm is simple and can be implemented efficiently using SGD. For practitioners, it has important usage in real world applications and for developing better off-policy estimators in RL, and we believe it is important to understand how to handle these cases in the more tractable contextual-bandit case.\\n\\nDiscussing the specific paper you mention, SPIBB ([3]) restricts the policy class in the following way: when there is not enough data for a particular $(x,y)$ pair, it forces the policy to obey $\\\\pi(y|x)=\\\\pi_0(y|x)$, and all the remaining probability mass will be given to the greedy action which has the highest reward based on the reward model. This strategy heavily relies on having the same context appear both at learning and at prediction time, which is an unrealistic requirement in most contextual bandit applications (e.g. recommendation, advertisement). If the context $x$ did not appear during learning, then the learned policy will exactly mimic the logging policy, which leads to no improvement at all. To make experiments feasible, the author proposed a pseudo-count workaround based on the Euclidean state-distance to heuristically avoid this strict requirement, bringing the problem back to a form of (difficult to analyze) extrapolation.\\n\\nFor [2], when translated into the bandit setting, the training objective will be the average reward predicted by the model ensembles, with a variance penalty and a constraint on the policy class. The constraint is based on the MMD between the logging policy and the target policy, and they argue heuristically that MMD is a good distance to measure support mismatch. Compared to our proposed approach of safe learning through constraining the policy space, we directly control the support divergence by using the control variate as a surrogate, which is more explicit and theoretically sound divergence measure. Moreover, their objective is based on an ensemble of direct models, while ours improved upon importance based methods. 
We want to emphasize that there is no end to improving the quality of regression models and people can use any deep learning architecture, however the bias problem could be severe if a wrong model is chosen, and the high bias problem is very difficult to diagnostic in general. While our recommended approach (safe learning by restricting policy class) is more safe in the sense of not relying on this extrapolated models, and then directly control the support divergence between two policies.\\n\\nBCQ [1] deals with batch RL with continuous actions. When simplified and translated to contextual bandits with discrete actions, it basically has a generative model to generate similar actions (compared with the actions shown in the batch) for each context. It then selects the sampled action that has the highest estimated reward. This is equivalent to a direct modeling approach in an action space that is restricted by the sampling procedure. As the sample size increases, this sampled action space converges to the support set of the logging policy. Similar to [2], it may suffer from high bias of the regression model, and it heavily relies on how good the reward estimate is.\", \"references\": \"[1] Off-Policy Deep reinforcement learning without exploration, Fujimoto et.al. ICML 2019\\n[2] Stabilizing Off-policy Q-learning via Bootstrapping Error Reduction, Kumar et.al. NeuRIPS 2019\\n[3] Safe Policy Improvement with Baseline Bootstrapping. Laroche et.al. ICML 2019\"}",
"{\"title\": \"Official Comment to Review #2\", \"comment\": \"We thank the reviewer for pointing out the related papers handling off-policy learning in contextual bandit problems. Liu's work, as we mentioned in the related work section, focuses on the correction of the state distribution by defining an augmented MDP, and pessimistic imputation is used to get an estimate for policy-gradient learning. When we translate this pessimistic imputation idea into the contextual bandit setting, it is the same as the Conservative Extrapolation method we define in Section 3.2. We will make this relationship more explicit in the revised version of the paper. Note that we do provide experiments for Conservative Exploration (see Figure 1 and Figure 2), and it is clear that this method is too pessimistic and hence not recommended.\\n\\nAs for Xie's work, it is based on the standard IPS estimator. However, it uses estimated propensities (based on maximum likelihood estimation) instead of the true propensities. When the logging policy has full support, using the estimated propensities can reduce the variance of IPS and its asymptotic MSE. However, it provides no remedy for the bias that IPS incurs under deficient support. In this paper, we emphasize that our goal is to address the bias problem of importance-sampling based estimators under deficient support, and IPS based on any surrogate policy still suffers from this severe bias problem.\\n \\nFor Tang's work, it is based on the IH estimator proposed in RL, which is specifically designed to handle the unbounded variance problem of IPS under infinite horizon. Specifically, they propose a doubly robust approach to further reduce the bias of this estimator. When translating it into the bandit setting, it becomes the standard doubly robust estimator, and we argue in Section 3.2 that this is a special case of regression extrapolation when one changes the base estimator from IPS to DR (i.e. change the first term of Equation (7) to the corresponding DR estimator). We have added empirical results for DR in the updated version of the paper (See Figure 1). It turns out DR is no better than Policy Restriction, and the performance of DR is highly related to the performance of direct models, while our proposed Policy Restriction is model-free.\", \"reference\": \"Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with state distribution correction. arXiv:1904.08473, 2019.\\n\\nXie, Liu, Liu, Wang, and Peng, Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy. ICLR 2019.\\n\\nTang, Feng, Li, Zhou, and Liu, Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation, arxiv: 1910.07186, 2019.\"}",
"{\"title\": \"Official Comment to Review #1\", \"comment\": \"We thank the reviewer for the comment and we clarify Section 3.2 as follows. In this section, we introduce safe learning through reward extrapolation, where the augmented IPS estimator listed in Equation (7) is composed of two parts: the first IPS part serves as an unbiased estimate for the reward of policy $\\\\pi$ if limited to the actions which lie in the support of $\\\\pi_0$, i.e., $E_{x}E_{y\\\\notin \\\\mathcal{U}(x,\\\\pi_0)}[r(x,y)]$. The ratio (importance sampling weight) is fully known since we know the target policy $\\\\pi$ and also the logging propensities $\\\\pi_0(y_i|x_i)$. The second imputation part is to extrapolate the reward for the actions that do not lie in the support of $\\\\pi_0$, i.e. $E_{x}E_{y\\\\in \\\\mathcal{U}(x,\\\\pi_0)}[r(x,y)]$, where the estimate for this part is based on a regression model $\\\\hat{\\\\delta}(x,y)$.\\n\\nAs we mentioned in the paper, all regression-based approaches rely on a correctly specified parametric model, or on some smoothness assumption about the reward, in order to extrapolate well. We formally show the bias of this estimator in Proposition 2, and it is quantified by using the error of the regression model at the actions which do not lie in the support set of logging policy. If there exists prior knowledge on the exact form of the reward model, then reward extrapolation is optimal and recommended. However, this is typically not the case in most real world scenarios, where the bias of the reward model is typically unknown and could not be efficiently estimated. This is partly the reason why we advocate for using the method that restricts the policy space.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work addresses the problem of off-policy evaluation in the presence of positivity violations, i.e. some actions are not observed in the logged policy. As the paper points out, positivity violations can lead to unboundedly bad estimates when employing IPS. The authors propose three methods to deal with this problem. The first uses only the observed actions, the second and third use extrapolation and augmentation to provide an approximation to the off-policy problem.\\n\\nI found a few pieces of this paper confusing. In section 3.2 it is proposed that a surrogate reward function be used for actions with unknown support, but the left hand side of the equation would seem to imply that the ratio still needs to be known in order to get an estimate. Perhaps an indicator function is missing? \\n\\nIt is also not made plain what assumptions are being employed in order to allow for extrapolation. From what I can tell, the authors are swapping out a positivity assumption with a smoothness assumption on the reward function. However, I don't think I see this spelled out within the text. \\n\\nOverall, I think this is a promising approach (the empirical results certainly bare that outO but to my eyes it lacks sufficient detail and specificity.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper considers a new off-policy contextual-bandit method that can learn even when the logging policy has deficient support. Three approaches are explored, namely restricting the action space, reward extrapolation, and restricting the policy space.\\n\\nThis paper is well written and it considers an important problem of deficient support. However, the proposed method was only compared to a few old benchmarks. How does the proposed method compare to more recent state-of-the-art off-policy bandit approaches (Liu et al. (2019), Xie et al. (2019), Tang et al. (2019)) in the experiments? The work by Liu et al. (2019) also considered the setting of deficient support.\\n\\nYao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with state distribution correction. arXiv:1904.08473, 2019.\\n\\nJie, Liu, Liu, Wang, and Peng, Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy. ICLR 2019.\\n\\nTang, Feng, Li, Zhou, and Liu, Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation, arxiv: 1910.07186, 2019.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper talks about the problem of off-policy or batch learning in the contextual bandit setting without the complete support assumption. This problem setting is very realistic and encountered in most problems, especially in temporally extended settings, such as reinforcement learning. They compare three approaches for the same: restricting action selection, learning extrapolated reward models, and by restricting the policy class. They derive a SNIPS style estimator for the support constraint in the final approach. The approach with restricting the policy class demonstrates decent empirical results although the direct method is very much comparable.\\n\\nI would lean towards being mostly neutral in terms of acceptance. While the problem being solved is very relevant and their approach compares three different approaches to the deficient support problem, I am not sure how this work is positioned with respect to approaches solving similar problems in the reinforcement learning land. For example, Batch constrained Q-learning ([1]) restricts the set of actions that can be used, Bootstrapping Error Accumulation ([2]) and SPIBB ([3]) restrict the policy class in batch reinforcement learning. I would appreciate some comparison/positioning to such methods in the bandit setting as well. The support estimation metric and the corresponding objective (Eqn 10, 11) should also be compared and contrast with explicit divergences designed for support matching (for example, in [4]).\", \"references\": \"[1] Off-Policy Deep reinforcement learning without exploration, Fujimoto et.al. ICML 2019\\n[2] Stabilizing Off-policy Q-learning via Bootstrapping Error Reduction, Kumar et.al. NeuRIPS 2019\\n[3] Safe Policy Improvement with Baseline Bootstrapping. Laroche et.al. ICML 2019\\n[4] Domain adaptation with asymmetrically-relaxed distribution alignment. Wu et.al. ICML 2019\"}"
]
} |
Syxc1yrKvr | Implicit λ-Jeffreys Autoencoders: Taking the Best of Both Worlds | [
"Aibek Alanov",
"Max Kochurov",
"Artem Sobolev",
"Daniil Yashkov",
"Dmitry Vetrov"
] | We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN). It is known that GAN can produce very realistic samples while VAE does not suffer from the mode collapse problem. Our model optimizes the λ-Jeffreys divergence between the model distribution and the true data distribution. We show that it takes the best properties of VAE and GAN objectives. It consists of two parts. One of these parts can be optimized by standard adversarial training, and the second one is the very objective of the VAE model. However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as a Gaussian or Laplace, which have limited flexibility in high dimensions and are unnatural for modelling images in the space of pixels. To tackle this problem we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator. In an extensive set of experiments on the CIFAR-10 and TinyImageNet datasets, we show that our model achieves state-of-the-art generation and reconstruction quality and demonstrate how we can balance between the mode-seeking and mode-covering behaviour of our model by adjusting the weight λ in our objective. | [
"Variational Inference",
"Generative Adversarial Networks"
] | Reject | https://openreview.net/pdf?id=Syxc1yrKvr | https://openreview.net/forum?id=Syxc1yrKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ff3z7VSaf",
"rJetsmv2oS",
"rJeKIIljor",
"r1gwbLeooH",
"r1ejPHesoS",
"H1lrdEeosS",
"ryeRzOnX9S",
"SygnRBCatS",
"SJgy_5i_OH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724268,
1573839777237,
1573746256956,
1573746174587,
1573746019333,
1573745772584,
1572222997584,
1571837396182,
1570450023421
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1478/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1478/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1478/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1478/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1478/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper received Weak Reject scores from all three reviewers. The AC has read the reviews and lengthy discussions and examined the paper. AC feels that there is a consensus that the paper does not quite meet the acceptance threshold and thus cannot be accepted. Hopefully the authors can use the feedback to improve their paper and resubmit to another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"I just wanted to acknowledge that I have read your response and I will consider it in making a final recommendation for this paper.\\n\\nThank you!\"}",
"{\"title\": \"Response to Review #3 (part 2/2)\", \"comment\": \"> Existing ablation studies are a bit of a straw-man: the paper compares changing r(y|x) by standard Gaussian or Laplace. However, we know that a large variance does not make any sense and almost all papers use tiny variances (e.g. in beta-VAE the beta-values tend to be very small, which is equivalent to small variances here).\\n\\nWe are sorry that we did not write many details about this ablation study, further we will add them in supplementary materials. For this experiment we considered 2 settings for learning VAE with Gaussian or Laplace conditional likelihood: 1) with constant variance; 2) with learnable variances for each pixel of the image. We observe that 2 setting with learnable variances is unstable for lambda < 1 and gives significantly worse results than 1 setting. The possible explanation is that learnable variances reweight the reconstruction loss dynamically during the training process and combined with additional weighting parameter lambda it can lead to instabilities in learning. Therefore, in Figure 2 we present results for 1 setting with constant variance. The exact value of variance used in likelihood also depends on the lambda parameter. Lambda in our experiments ranges from 0.1 to 1 with step 0.1, therefore, variance value ranges from 0.05 to 0.5 with step 0.05. \\n\\nSo, we can say that our ablation study is fair and it can support the significance of the proposed implicit likelihood. \\n\\n> Are the experimental results all with the same architecture for encoder/generator for all results you compared to? if not, the effect of that should also be tested.\\n\\nYes, we used a standard ResNet architecture [2] as our baselines. \\n\\n[1] Shane Barratt, Rishi Sharma, A Note on the Inception Score, 2018, ICML Workshop\\n[2] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. 2017, NeurIPS\"}",
"{\"title\": \"Response to Review #3 (part 1/2)\", \"comment\": \"Dear reviewer,\\n\\nWe would like to thank you for the thoughtful review. \\nWe address your concerns below. \\n\\n> The theory does not justify non-optimal solutions. It is argued on page 6 that non-optimality of the discriminator serves as some form of regularization, but this requires some justification.\\n\\nIt is a fair question because in practice we do not have optimal discriminators and the theory does not justify this case. However, in adversarial learning it is a common practice to analyze the model under assumptions of optimal discriminators. The theoretical justification of the non-optimality case is an open question in GAN community and our paper is not aimed to solve this problem. \\nWe utilize the adversarial framework to learn density ratios by discriminators. When we write about the non-optimality of the discriminator as some form of regularization we follow the common assumption that non-optimal discriminator can learn smooth version of empirical density ratio. In practice, we always have finite training data and therefore the optimal discriminator will learn ratio of delta functions (because the empirical data distribution is a sum of delta functions centered in data points). So, the non-optimality of the discriminator allows us to estimate somehow the true density ratio in the case of finite data. \\n\\n> why not use LPIPS for training?\\n\\nLPIPS metric initially was proposed as a good proxy for evaluation of visual quality of reconstructions. The first reason why it is not a good idea to train LPIPS directly is that it is as Inception Score (IS) based on the outputs of the deep neural network. For the IS it was shown [1] that if we train the generator by directly maximizing IS we will end up with large IS but very unnatural generated images. We can observe the same problem for the LPIPS. The second reason is that if we train the model using LPIPS then we will not be able to use this metric for a fair comparison with other methods. \\n\\n> However, the r-function used in the experiments does not fulfill the property of a well defined likelihood, and Theorem 1 does not hold, since technically the KL-divergence is infinity.\\n\\nIt is true that r-function is not a well defined likelihood. However, as we said before in the usual adversarial framework the data distribution is also not well defined density in practice. It does not prevent us from learning the smooth version of the density ratio by the non-optimal discriminator (which is always the case). \\n\\n> If we ignore this by adding a small amount of Gaussian noise around the sampled cyclical shifts - like the r' used in the experiments, we can easily write down the explicit likelihood function\\u2026 So the explicit solution of theorem 1 can be written down and another ablation study would be training the method with the explicit formulation for this KL-term(i.e. only training two discriminator models). \\n\\nIt is a good suggestion and we will add the comparison with this explicit likelihood in the next revision of our paper. However, this explicit r-function does not differ in principle from the standard Gaussian distribution and we are likely to obtain the same results. It is another argument why we utilize the non-optimal discriminator instead of the explicit distribution. 
It is similar to the standard adversarial framework that we use the discriminator to train the generator instead of the explicit distribution of the dataset which can be written down as a mixture of Gaussian distributions centered in data points and with small fixed variance.\\n\\nGood point. However, when we train the discriminator using r-distribution we do not expect that it will perfectly fit the density ratio of r(y|x) and r\\u2019(y|x) as in the standard GAN setting we do not expect the discriminator to exactly recover the empirical data distribution. The non-optimality of the discriminator can be thought of as a form of regularization and it allows us to learn the implicit likelihood of reconstructions defined by the non-optimal discriminator itself. We will add the comparison with the explicit likelihood you mention in the next revision of our paper to illustrate the benefits of using non-optimal discriminator.\"}",
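[Editor's note: for reference, the density-ratio reading of the discriminator used throughout these replies is the standard one; the identity below is a textbook fact about binary discriminators between any two densities $p$ and $q$, stated here to ground the "non-optimal discriminator as regularization" argument.]

```latex
% Optimal discriminator between densities p and q:
D^{*}(x) = \frac{p(x)}{p(x) + q(x)}
\quad\Longrightarrow\quad
\log \frac{D^{*}(x)}{1 - D^{*}(x)} = \log \frac{p(x)}{q(x)} .
```

A non-optimal $D$ therefore yields a smoothed estimate of the log density ratio, which is exactly the regularization effect the authors invoke for finite data.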
"{\"title\": \"Response to Review #2\", \"comment\": \"Dear reviewer,\\n\\nWe would like to thank you for the thoughtful review. The main concern you raised is about the significance of two contributions: optimization of lambda-Jeffreys divergence and formulation of the implicit likelihood. We will address each of them below.\", \"about_the_optimization_of_lambda_jeffreys_divergence_you_wrote\": \"> The use of a weighted sum of the forward & KL divergences to train a generative model is hardly new, and has already been presented a few times (Larsen et al. 2015, Dosovitskiy & Brox 2016).\\n\\nIt is true that the idea of combining VAE and GAN objectives is not new. There are many approaches and we consider the closest works in related work. However, our contribution is that we do not only just propose to optimize the weighted sum of VAE and GAN losses but we provide a theoretical justification about the proposed objective and prove that under assumptions of optimal discriminators our model minimizes the lambda-Jeffreys divergence. To the best of our knowledge, there are no other papers about auto-encoder models which prove that it optimizes lambda-Jeffreys divergence. If we consider (Larsen et al. 2015, Dosovitskiy & Brox 2016) papers there were proposed to optimize the loss as a sum of VAE loss and GAN-like loss (in (Dosovitskiy & Brox 2016) there was also feature matching loss). However, in these works GAN part is not equivalent to reverse KL divergence (because they do not optimize E_{p_{theta}(x)} log[D(x)/(1 - D(x))]), therefore their losses are not equivalent to lambda-Jeffreys divergence (and authors did not analyze theoretically the corresponded divergence for their objectives). \\n\\nTherefore, the main significance of this contribution consists in the theoretical justification of the proposed objective.\", \"about_the_formulation_of_the_implicit_likelihood_you_wrote\": \"> In this context the paper does not present the impact of its main contribution alone. How would behave a VAE trained solely with this implicit likelihood, but a regular Gaussian latent space and without the GAN loss? This ought to be part of the ablation study in my opinion.\\n\\nThis experiment was a part of our ablation study. We are sorry if it was unclear from the text, we will better emphasize this experiment in the next revision of our paper. \\nIn experiments section you can find it in the Figure 3 and it corresponds to lambda=1 (light green circle). We see that it has good LPIPS but the worst IS compared to other values of lambda. For example, its IS is significantly worse than 0.3-IJAE which we report in Table 1. So, we can say that this implicit likelihood is beneficial when we combine it with the GAN part within lambda-Jeffreys objective. \\n\\nSo, the significance of the implicit likelihood is that it allows us to successfully combine VAE and GAN parts in our objective in contrast to explicit likelihoods which give significantly worse results (see Figure 2). However, your second concern is about this ablation study which is illustrated in Figure 2. You wrote:\\n\\n> the ablation study evaluates the use of L1 or L2 noise instead of the cyclic shift likelihood, but does not say what variance has been used for these, which would (as explained above) be an important parameter to take into account. 
If a variance of 1 was used, then the results of figure 2 are unsurprising and not insightful, as the discriminator would have merely learned to differentiate between images containing a visible Gaussian noise from images that do not.\\n\\nWe are sorry that we did not write many details about this experiment, further we will add them in supplementary materials. For this experiment we considered 2 settings for learning VAE with Gaussian or Laplace conditional likelihood: 1) with constant variance; 2) with learnable variances for each pixel of the image. We observe that 2nd setting with learnable variances is unstable for lambda < 1 and gives significantly worse results than 1st setting. The possible explanation is that learnable variances reweight the reconstruction loss dynamically during the training process and combined with additional weighting parameter lambda it can lead to instabilities in learning. Therefore, in Figure 2 we present results for 1 setting with constant variance. The exact value of variance used in likelihood also depends on the lambda parameter. Lambda in our experiments ranges from 0.1 to 1 with step 0.1, therefore, variance value ranges from 0.05 to 0.5 with step 0.05. \\n\\nSo, we can say that our ablation study is fair and it can support the significance of the proposed implicit likelihood. \\n\\nYou also claim that blurry images of VAE \\u201cis mostly unrelated to the MLE estimation\\u201d. We want to clarify that we mention blurry images is only as one example of VAE unrealistic samples. Our main claim is that MLE estimation can lead to mass-covering behaviour when the model may generate from low probability regions where samples can be very unrealistic.\"}",
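For reference, the lambda-Jeffreys objective discussed above is a convex combination of the forward and reverse KL divergences; the exact weighting convention below is our assumption, not a quotation from the paper:

```latex
J_{\lambda}\!\left(p_{\mathrm{data}} \,\|\, p_{\theta}\right)
  \;=\; \lambda \, \mathrm{KL}\!\left(p_{\mathrm{data}} \,\|\, p_{\theta}\right)
  \;+\; (1 - \lambda) \, \mathrm{KL}\!\left(p_{\theta} \,\|\, p_{\mathrm{data}}\right),
\qquad \lambda \in [0, 1].
```

Under this convention, lambda = 1 recovers the forward-KL (VAE-style, mass-covering) term and lambda = 0 the reverse-KL (GAN-style, mode-seeking) term, which is consistent with the lambda sweep behaviour the response describes.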
"{\"title\": \"Response to Review #1\", \"comment\": \"Dear reviewer,\\n\\nWe would like to thank you for your thoughtful review and valuable suggestions. We will address each your question below:\\n\\n> Figure 3 Left (CIFAR 10), it's not absolutely clear to me that alpha-GAN and perhaps AGE isn't at least as good as your approach. The meaning of the units of the axes is a bit unclear. Do you have a particular reason to prefer your method over these in this case?\", \"to_make_it_more_clear\": \"on Figure 3 x-axis and y-axis correspond to Inception Score (IS) and LPIPS metrics respectively. From this plot we can see that alpha-GAN, AGE and IJAE have comparable LPIPS. However, the advantage of our method is that we can offer a model with significantly better IS by keeping LPIPS almost the same. Particularly, for CIFAR-10 in Table 1 we chose lambda = 0.3 for which IJAE achieves the best IS and comparable LPIPS (we can see from variance bars that differences in LPIPS are insignificant). So, the main reason to prefer our method over baselines is that it can provide a much better tradeoff between generation and reconstruction qualities.\\n\\n> Related: in Table 1, why are there no bolded results for CIFAR + Reconstruction?\\n\\nThe main reason is that most differences in LPIPS are insignificant due to large variance bars. In the next revision of our paper we can bold all results except ones which are significantly worse than others. \\n\\n> Even though Figures 2 and 3 (to a certain extent) seem to show that results are somewhat robust to the exact value \\\\lambda how would you propose to set it in practice?\\n\\nFrom Figures 2 and 3 we see that the reconstruction quality is robust to decreasing lambda and starts to degrade only when lambda goes below 0.2. While the generation quality is very sensitive to lambda and can be improved significantly by decreasing lambda. Therefore, in practice, we recommend setting lambda around 0.3 when IJAE has the best generation ability and acceptable reconstruction quality. \\n\\n> In Figures 5 and 7 the reconstruction of IJAE sometimes seems to be pretty far from the original image (i.e., it's not that it's blurry as for VAEs, it's that the model seems to be reconstructing a completely different image). How do you explain these results?\\n\\nSuch unfaithful reconstructions can be explained by the fact that we do not use an explicit pixel-wise reconstruction loss and our implicit loss can sometimes accept such reconstructions due to the underfitting of the discriminator on triples. \\n\\n> While VAEs and GANs can work on many types of data (at the very least continuous), your model seems to be developed for images. Could you make it clear what changes would be needed to apply it to non-image data?\\n\\nIt is a good question. It is true that in the paper we mostly focus on images and do not mention other types of data. However, the only thing we should change for non-image data is the implementation of the implicit conditional likelihood r(y|x) which encourages the set of faithful reconstructions for the object x. For example, for images, we chose the distribution over the shifted versions of x. If we consider sequence generating model, for example, and object x is a sequence of words, we can consider r(y|x) as a distribution over sequences that are equal to x up to synonym words. Other parts of IJAE model remain the same for non-image data. \\nWe will add clarification about the applicability of our model to non-image data in the next revision of our paper. 
\\n\\n> Possible related work. It may be worth citing these two papers \\u2026\\n\\nThank you for pointing out these papers we missed to cite. We will add them to the related work and comment on each paper. \\n\\n> Paper organization: I would suggest moving the related work to after the background. It would be useful to provide the full algorithm somewhere (e.g., using an algorithm \\\"box\\\").\\nGANs and VAEs are not models per se but rather training frameworks for generative models. There are many minor grammatical errors throughout the text. It would be useful to mention early that for IS higher is better and LPIPS lower is better.\\n\\nThank you for your valuable suggestions for improving paper text and organization quality. We will certainly follow them in the next revision of our paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes a new training objective for generative models that combines the objectives of VAEs and GANs. The objective is equivalent to minimizing the Jeffreys divergence (a type of f-divergence) between the true probability of the data and its probability under the model. Furthermore, the objective comes with a knob to tradeoff the relative importance of each of the two terms. In addition, the authors develop a implicit likelihood formulation which they claim and show empirically to outperform typical explicit formulations typically used in VAEs.\", \"Overall, it is an interesting paper that reuses a few good ideas to develop a novel training objective. The results show that using an implicit likelihood helps (Figure 2) and that it does relatively better than either GAN or VAE approaches. I have detailed comments below about the organization of the paper, some of the experimental claims as well as a few other works which may be good to cite.\", \"Paper organization: I would suggest moving the related work to after the background.\", \"GANs and VAEs are not models per se but rather training frameworks for generative models.\", \"While VAEs and GANs can work on many types of data (at the very least continuous), your model seems to be developed for images. Could you make it clear what changes would be needed to apply it to non-image data?\", \"There are many minor grammatical errors throughout the text.\", \"It would be useful to provide the full algorithm somewhere (e.g., using an algorithm \\\"box\\\")\", \"Possible related work. It may be worth citing these two paper:\", \"f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, NIPS'16\", \"Deep Generative Learning via Variational Gradient Flow, ICML'19\", \"It would be useful to mention early that for IS higher is better and LPIPS lower is better.\", \"Even though Figures 2 and 3 (to a certain extent) seem to show that results are somewhat robust to the exact value \\\\lambda how would you propose to set it in practice?\", \"Figure 3 Left (CIFAR 10), it's not absolutely clear to me that alpha-GAN and perhaps AGE isn't at least as good as your approach. The meaning of the units of the axes is a bit unclear. Do you have a particular reason to prefer your method over these in this case?\"], \"related\": \"in Table 1, why are there no bolded results for CIFAR + Reconstruction?\\n\\n- In Figures 5 and 7 the reconstruction of IJAE sometimes seems to be pretty far from the original image (i.e., it's not that it's blurry as for VAEs, it's that the model seems to be reconstructing a completely different image). How do you explain these results?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces a model named lambda-IJAE, which combines the VAE and GAN training schemes to train a generative model achieving competitive performance. The combination of VAE and GAN is justified by its theoretical interpretation as a an optimization of the lambda-Jeffreys divergence between the real data distribution and the generation distribution. This work also introduces a reformulation of the reconstruction term of the VAE loss, allowing it to be estimated implicitly using an adversarial mechanism. Finally, the latent space of the VAE is also modelled implicitly using an adversarial mechanism, following (Mescheder et al. 2017).\\n\\nI am ambivalent about this paper. The proposed implicit likelihood mechanism is very interesting, but the paper contains several weaknesses that together make me unwilling to accept it.\\n\\nFirst of all, the paper presents itself as centered on the notion of optimizing the lambda-Jeyffreys distribution, while the main contribution is actually clearly the formulation of the implicit likelihood. The use of a weighted sum of the forward & KL divergences to train a generative model is hardly new, and has already been presented a few times (Larsen et al. 2015, Dosovitskiy & Brox 2016).\\n\\nIn this context the paper does not present the impact of its main contribution alone. How would behave a VAE trained solely with this implicit likelihood, but a regular Gaussian latent space and without the GAN loss? This ought to be part of the ablation study in my opinion.\\n\\nSecondly, the paper discusses the issue of VAE generating unrealistic samples. This is indeed a very real issue of the VAE linked to it being trained by maximum-likelihood. However illustrating it by \\\"blurry images\\\" (like is done several times in the paper) is a common misconception, as while this is a very classical issue with VAEs, it is mostly unrelated to the MLE estimation.\\n\\nIt is rather a simple consequence of the fact that using an unweighted squared error loss to model the reconstruction of the VAE is almost always a poor model. It is equivalent to modelling the observation with a Gaussian noise of variance 1/2, which is a huge noise when considering data normalized in [0;1] or [-1;1] like is traditional to do with images. Reducing this variance to a more sensible value (like a std of 0.1 for example) or allowing the model to learn it reveals the real failure mode of the VAE generating unrealistic images, which can hardly be described as \\\"blurry\\\".\\n\\nSimilarly, the ablation study evaluates the use of L1 or L2 noise instead of the cyclic shift likelihood, but does not say what variance has been used for these, which would (as explained above) be an important parameter to take into account. If a variance of 1 was used, then the results of figure 2 are unsurprising and not insightful, as the discriminator would have merely learned to differentiate between images containing a visible Gaussian noise from images that do not.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to replace the KL-divergence in VAE training with the lambda-jeffreys divergence of which the symmetric KL-divergence is a special case. The paper proposes a pure implicit likelihood approach that uses three discriminator models to estimate the KL-divergences. Experiments are conducted on CIFAR-10 and TinyImageNet and several scores are reported to show that the proposed method performs as good if not better than current approaches.\\n---------\\nI think the paper tries to achieve too much in too little space and foregoes scientific exactness for the sake of claiming SOTA. Since there is a difference between claiming SOTA on a task and validating a new method, the small amount of space makes it difficult to substantiate both claims at the same time. In the rest of the review i will try to substantiate the claim:\\n\\n1. The paper claims on page 2: \\\"These models do not have a sound theoretical justification about what distance [...] they optimize\\\". While the paper tries to substantiate its claims by showing theoretically that it does the right thing using the optimal discriminator, it leaves the question open what happens with any other discriminator. The theory does not justify non-optimal solutions. It is argued on page 6 that non-optimality of the discriminator serves as some form of regularization, but this requires some justification.\\nMoreover, the paper uses LPIPS to measure reconstruction quality - but this measure is a deep neural network. So if those measures are good enough to compare solutions with and the theoretical justification of the proposed method is shaky in practice - why not use LPIPS for training?\\n\\n2. The paper proposes the discriminator in order to allow for an implicit likelihood. However, the r-function used in the experiments does not fulfill the property of a well defined likelihood, and Theorem 1 does not hold, since technically the KL-divergence is infinity. If we ignore this by adding a small amount of Gaussian noise around the sampled cyclical shifts - like the r' used in the experiments, we can easily write down the explicit likelihood function since:\\n\\nr(y|x)=\\\\sum_i w_i N(y|Shift_i(x), \\\\sigma)\\n\\nwhere Shift_i is the i-th shift in the set described in the paper and w_i its probability p(y|q). So the explicit solution of theorem 1 can be written down and another ablation study would be training the method with the explicit formulation for this KL-term(i.e. only training two discriminator models). If the results are not equivalent, this implies that the discriminator does not reach the optimum. The implications of that should be discussed regarding 1. \\n\\n3. Existing ablation studies are a bit of a straw-man: the paper compares changing r(y|x) by standard Gaussian or Laplace. However, we know that a large variance does not make any sense and almost all papers use tiny variances (e.g. in beta-VAE the beta-values tend to be very small, which is equivalent to small variances here). \\n\\n\\n---------------------------\\nSmaller things\\n- Are the experimental results all with the same architecture for encoder/generator for all results you compared to? 
if not, the effect of that should also be tested.\\n\\n- my personal biased view on the generated images is: it looks worse than alpha-GAN. Every reconstructed image has a grey tone and the generated images also offer a strong grey palette. The details don't look better as well.\\n\\n- typo inroduce->introduce\"}"
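As a reading aid for the reviewer's point 2, here is a minimal sketch of the explicit likelihood written down above, assuming images are 2-D arrays and the shifts are cyclic rolls; the function name, the shift encoding, and the shared sigma are our assumptions:

```python
import numpy as np

def explicit_shift_log_likelihood(y, x, shifts, weights, sigma):
    """log r(y|x) for the mixture of Gaussians centered at cyclic shifts
    of x, as in the review: r(y|x) = sum_i w_i N(y | Shift_i(x), sigma^2 I).
    `shifts` is a list of (rows, cols) offsets, `weights` the w_i."""
    d = y.size
    log_components = []
    for (sr, sc), w in zip(shifts, weights):
        mu = np.roll(np.roll(x, sr, axis=0), sc, axis=1)   # Shift_i(x)
        sq_err = np.sum((y - mu) ** 2)
        log_gauss = (-sq_err / (2 * sigma ** 2)
                     - 0.5 * d * np.log(2 * np.pi * sigma ** 2))
        log_components.append(np.log(w) + log_gauss)
    return np.logaddexp.reduce(log_components)             # log-sum-exp

# Tiny usage example on a 4x4 "image":
x = np.arange(16.0).reshape(4, 4) / 16.0
print(explicit_shift_log_likelihood(
    x, x, shifts=[(0, 0), (0, 1), (1, 0)],
    weights=[0.5, 0.25, 0.25], sigma=0.1))
```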
]
} |
HyeYJ1SKDH | FLUID FLOW MASS TRANSPORT FOR GENERATIVE NETWORKS | [
"Jingrong Lin",
"Keegan Lensink",
"Eldad Haber"
] | Generative Adversarial Networks have been shown to be powerful tools for generating content, which has led to their being intensively studied in recent years. Training these networks requires maximizing a generator loss and minimizing a discriminator loss, leading to a difficult saddle-point problem that is slow to converge and difficult to solve. Motivated by techniques in the registration of point clouds and the fluid flow formulation of mass transport, we investigate a new formulation that is based on strict minimization, without the need for the maximization. This formulation views the problem as a matching problem rather than an adversarial one, and thus allows us to converge quickly and obtain meaningful metrics along the optimization path. | [
"generative network",
"optimal mass transport",
"gaussian mixture",
"model matching"
] | Reject | https://openreview.net/pdf?id=HyeYJ1SKDH | https://openreview.net/forum?id=HyeYJ1SKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Ksdos3Z8aI",
"SJxhj397cB",
"B1gXbw_nFB",
"rklblJ7LKr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724239,
1572215972181,
1571747578716,
1571331816718
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1476/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1476/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1476/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission is concerned with providing a transport based formulation for generative modeling in order to avoid the standard max/min optimization challenge of GANs. The authors propose representing the divergence with a fluid flow model, the solution of which can be found by discretizing the space, resulting in an alignment of high dimensional point clouds.\\n\\nThe authors disagreed about the novelty and clarity of the work, but they did agree that the empirical and theoretical support was lacking, and that the paper could be substantially improved through better validation and better results - in particular, the approach struggles with MNIST digit generation compared to other methods.\\n\\nThe recommendation is to not accept the submission at this time.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper addresses the task of constructing a generative model for data using a novel optimal transport-based method. The paper proposes an alternative view of obtaining generative models by viewing the generation process as a transport problem (specifically, fluid flow mass transport) between two point clouds living in high-dimensional space. To solve the transport problem, a discretization scheme is proposed, which gives rise to a variant of point cloud registration problem, which is solved using numerical optimization. The results are provided on synthetic data and a real MNIST data.\\n\\nWith generative models, and particularly generative adversarial networks (GANs) being notoriously hard to train, alternative ways of constructing generative models are needed. The paper does a good job displaying the potential optimization issues arising when training GANs, and the basic theoretical foundations used in the paper have been validated in prior work. However, with the introduction of a transport-based formulation, its respective issues may arise, that are not described in the paper. Will the optimization always converge, and if yes, to which kind of optimum? What are the requirements for the point clouds R and T? \\n\\nWhile the conceptual contribution is that the , the technical novelty is limited and mostly amounts to the appropriate choice of distributions in R and T and applying the discretization schemes to be able to compute the experiments. Unfortunately, the model is not studied theoretically, i.e. no description is given regarding the class of generative tasks that could be solved using the method, or the class of functions that could be learn in such a way.\\n\\nThe experimental evaluation demonstrates on only two simple examples the results of the work. More experiments are needed to fully understand the possibilities of the framework. \\n\\nThe convergence speed is not indicated, and the efficiency of the optimization is not described. \\n\\nTo summarize, I believe the paper should not be accepted in its present (early) form, as (1) more detailed theoretical insight are needed, and (2) much more computational experiments are needed to fully validate the method.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper targets training generative adversarial networks with a formulation that is motivated by mass transport of fluid flows. While generalized transport formulations are popular for GANs by now, especially the Earth Mover's distance and the Wasserstein GAN version that this paper is based on, this submission instead frames the problem with a divergence-free flow model inspired by Navier-Stokes. The problem of matching output distributions then becomes one of inferring a suitable flow field that aligns the distributions.\\n\\nThe proposed model is discretized in a Lagrangian manner, and divergence freeness is ensured trivially by a constant particle count. Matching the point clouds in terms of distribution is done via a gaussian mixture model to obtain smooth distributions. (This probably doesn't scale too well to large paricle numbers due to the global support, but the paper also focuses on smaller 2D cases.) As mentioned in the text, the method in the end shares similarities with point cloud registration (ICP).\\n\\nA synthetic test with a square target shape and an initial centered distribution is shown to highlight that the method can re-distribute parts of the initial cloud towards all sides of the target shape. \\n\\nAs a tougher case, a digit generator with MNIST data is shown, which however, does not reach a level of quality we could expect from other existing methods for generative models.\\n\\nWhile I found the initial motivation of the paper quite interesting (and novel as far as I can tell), the discretized version is somewhat disappointing. The smooth matching of point clouds does not retain too much of the initial fluid flow model. Unfortunately, the MNIST test also indicates that the method has problems scaling to higher dimensions. This is a central challenge for GANs, and based on the submission I don't have the impression that the proposed formulation is competitive with existing GAN methods.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a generative network based on fluid flow solutions of mass transport problems. The paper is difficult to follow due to a poor structure and obvious technical mistakes. Detailed comments are as follows:\\n\\n1. The dual formulation used in the objective of WGANS involves expectations with respect to data distributions. When authors introduced WGANs, it is extremely loose and essential wrong to state that \\\"In the case of WGANs a simpler expression is derived where we use the score directly setting g to the identity and h = \\u2212g, and require some extra regularity on the score function\\\".\\n\\n2. The organisation of the paper makes it difficult to read:\\na) While the relevant work is discussed only briefly in Section 1 and contain incorrect statements (see an example above), detailed discussions of two toy examples in the introduction section distract the reading.\\n\\nb) Section 2 is a combination of related work and proposed work. For example, it starts more like a section introducing the dynamic transport formulation of mass transport problems (Equations 2.3a-2.3c), but in fact it contains the authors' proposed approach (2.4a-2.4b), which makes it difficult to tell the authors' contributions. Moreover, the connection between 2.3a-2.3c to 2.4a-2.4b is not clear to me.\\n\\n3. Multiple notations are not properly defined or conflicting: what are $\\\\rho(x,t)$ and $\\\\rho(1,x)$?\\n\\n4. Very limited experimental validation with no comparison to other algorithms.\\n\\n5. Multiple typos in the paper, e.g. \\\"Equation equation 1.1\\\".\"}"
]
} |
S1lukyrKPr | LEX-GAN: Layered Explainable Rumor Detector Based on Generative Adversarial Networks | [
"Mingxi Cheng",
"Yizhi Li",
"Shahin Nazarian",
"Paul Bogdan"
] | Social media have become increasingly popular and are used as tools for gathering and propagating information. However, the vigorous growth of social media contributes to fast-spreading and far-reaching rumors, and rumor detection has become a necessary defense. Traditional rumor detection methods based on hand-crafted feature selection are being replaced by automatic approaches based on Artificial Intelligence (AI). AI decision-making systems need the necessary means, such as explainability, to assure users of their trustworthiness. Inspired by the thriving development of Generative Adversarial Networks (GANs) in text applications, we propose LEX-GAN, a GAN-based layered explainable rumor detector that improves detection quality and provides explainability. Unlike fake news detection, which needs a previously collected verified news database, LEX-GAN realizes explainable rumor detection based only on tweet-level text. LEX-GAN is trained with generated non-rumor-looking rumors. The generators produce rumors by intelligently inserting controversial information into non-rumors, forcing the discriminators to detect detailed glitches and deduce exactly which parts of the sentence are problematic. The layered structures in both the generative and discriminative models contribute to the high performance. We show LEX-GAN's mutation detection ability in textual sequences by performing a gene classification and mutation detection task. | [
"explainable rumor detection",
"layered generative adversarial networks"
] | Reject | https://openreview.net/pdf?id=S1lukyrKPr | https://openreview.net/forum?id=S1lukyrKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"rKoJ3HJ8Uc",
"H1xAgLJnjB",
"rygTTxJ5iS",
"r1etElcLor",
"Bkelbk98iS",
"SJgKpAFUsB",
"HyxBFRF8oH",
"rkeErRYIsB",
"S1lo-RKLjH",
"BygbyqPT5r",
"ryeF57Xc9r",
"SkxQro2BqH",
"Ske9ubVNYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724209,
1573807606020,
1573675204607,
1573457968592,
1573457656337,
1573457600763,
1573457532941,
1573457467920,
1573457410819,
1572858329314,
1572643728730,
1572354874519,
1571205490475
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1474/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1474/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1474/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1474/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1474/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1474/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper is well-written and presents an extensive set of experiments. The architecture is a simple yet interesting attempt at learning explainable rumour detection models. Some reviewers worry about the novelty of the approach, and whether the explainability of the model is in fact properly evaluated. The authors responded to the reviews and provided detailed feedback. A major limitation of this work is that explanations are at the level of input words. This is common in interpretability (LIME, etc), but it is not clear that explanations/interpretations are best provided at this level and not, say, at the level of training instances or at a more abstract level. It is also not clear that this approach would scale to languages that are morphologically rich and/or harder to segment into words. Since modern approaches to this problem would likely include pretrained language models, it is an interesting problem to make such architectures interpretable.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer3's comments\", \"comment\": \"We appreciate the reviewer's effort and time and giving us the opportunity to clarify the novelty of our manuscript. We would like to clarify that the novelty of our manuscript is not about a mathematical formula breakthrough, but rather about a framework we designed for extracting information from text without consulting a background information dataset and metainformation of the text. We agree that the components we used in this framework, such as LSTM, GRU, CNN, are not new, but the layered GAN architecture we proposed for rumor detection is novel and high-performance. The major contribution of this work is the delicate way we employ to design an architecture out of these components and the design of this high-performance explainable rumor detector.\\n\\nWe are aware of only one GAN-based rumor detection work [1]. This work uses GAN to convert rumor to non-rumor, and non-rumor to rumor based on patterned words. For example, a rumor contains some obvious words like \\u201cfake news\\u201d, \\u201cis this true?\\u201d, \\\"not believe\\u201d, \\u201cnot sure\\u201d, etc. To the best of our knowledge, this work achieves the highest performance, 78.1% accuracy, on PHEME dataset, which is way higher than state-of-the-art works. However, this work doesn\\u2019t provide explainability, and couldn\\u2019t be extended to provide reasonable explainability. For example, if the patterned words don\\u2019t appear in the text, then this model is not capable to provide reasonable explainations since it doesn\\u2019t deeply capture the semantic meaning of the text. This is the fundamental reason why our LEX-GAN outperforms this model. LEX-GAN is designed for providing explainality and detecting real-world rumors. Our major goal is to differentiate real-world rumors and non-rumors, but not to detect word manipulation in synthetic data. We train LEX-GAN by GAN techniques to let it gain the ability of understanding the text and extracting meaningful information from the text for rumor detection. Word manipulation is a way of augmenting data. It enables LEX-GAN to extract suspicious parts in the sentences and better recognize the rumor/non-rumor patterns. \\n\\nWithout explainability, rumor detection methods and fake news detection methods are quite similar and can be extended to perform similar tasks. \\n\\nWith explainability, however, the methods for tackling these two are quite different. As stated in our previous reply, the major difference is the requirement of verified news database and metainformation of the users and posts.\\n\\nExplainability is provided by the discriminator Dexplain through the adversarial training process. Different than ordinary adversarial training in GAN, we use manipulated text instead of text generated from scratch to enhance the ability of discriminators. The generators manipulate the text by taking not only Dexplian\\u2019s feedback, but also Dclassify\\u2019s feedback into consideration. This approach provides the high accuracy of LEX-GAN. Intelligent word manipulation generates an augmented dataset which not only helps the discriminators to extract meaningful rumor/non-rumor patterns, but also exposes the discriminators in an environment full of statements with unseen patterns. This procedure strengthens the rumor classification of LEX-GAN and results in on average 26.85% macro-f1 outperformance under PEHME. \\n\\n[1] Ma, Jing, Wei Gao, and Kam-Fai Wong. 
\\\"Detect Rumors on Twitter by Promoting Information Campaigns with Generative Adversarial Learning.\\\" In The World Wide Web Conference, pp. 3049-3055. ACM, 2019.\"}",
"{\"title\": \"Response to the authors' comments\", \"comment\": \"Thanks for the detailed response and I appreciate the efforts of improving the manuscript. However, there's a clear consensus among most reviewers that the novelty is limited, and it is unclear how this method significantly differentiates from existing work in the fake-news and misinformation detection domain - I understand there is a difference such as the word manipulation, but it seems to be incremental rather than originally novel.\\n\\nThe paper has been really well written and easy to follow. I would suggest the authors investigate more about the behaviors of rumor authors and make it convincing in a quantitative way that detecting word manipulation is key in this area.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We are very thankful to the reviewer for the comments.\\n1.\\tWe use the word replacement to augment the available dataset and use both the generated data and original data to train the model. Therefore, the model is essentially trained to recognize rumor and detect detailed glitches. Compared to other works, LEX-GAN uses GAN and essentially adversarial training techniques are utilized as well, hence it is more robust to attacks. Kochkina\\u2019s work includes rumor detection and stance classification. The support evidence you pointed out is used in stance classification. In stance classification, posts and comments are labeled as support, deny, comment, or query due to their orientation toward rumor\\u2019s veracity. Stance classification and rumor detection are two different components in rumor classification system, which include four steps: rumor detection, tracking, stance classification, and veracity classification. LEX-GAN realizes rumor detection task based on the assumption that the rumor detection is done in the early stage when its veracity is unverified, and the comments are unavailable. \\n2.\\tWe thank the reviewer for this important suggestion. To explain the model choice, we start with human thinking process while reading text. When we read a sentence, we don\\u2019t think from scratch, instead, we understand words based on previous words. This is the fundamental reason why we choose RNN models over ordinary neural network. LSTM and GRU are two commonly used RNN models in text data processing. Gwhere chooses which words in a sentence to be replaced and it has to consider the past words and the whole sentence, hence we choose LSTM to realize this function. The choice of Dexplain follows a similar reasoning. Greplace does the actual word-replacing work, the choice of GRU considers both performance and efficiency. GRU is computationally more efficient than LSTM and provides better results when used in Greplace in this work. CNN is also frequently applied in nantural language processing applications to do classification. Dclassify utilizes a CNN to realize a classification between rumor and non-rumor. In summary, RNN models and CNN models are not mutual exclusive under text data, the choice of a hybrid model structure follows the consideration of both performance and efficiency. One of the strengths of LEX-GAN is that under the delicate layered structure that we designed, the choice of model structure effects the results but not significantly, hence LEX-GAN can by deployed into a broad range of applications while maintaining a high level of performance. We generate a variation of LEX-GAN as a baseline to showcase the ability of our layered structure. LEX-LSTM is generated by replacing LEX-GAN\\u2019s Greplace with a LSTM model. The performance of LEX-LSTM is added in Table 1.\\n3.\\tThe limitation and error cases were briefly discussed in Appendix. We follow the reviewer\\u2019s suggestion and add a limitation analysis in Section 5.3. We fixed the reference issue.\\n\\nWe appreciate the important and inspiring comments and suggestions from the reviewer. In addition to the response to the reviews and the modification of the manuscript, we would like to further provide a gentle summary of the contributions of our work: \\n1.\\tWe would like to clarify the difference between explainable rumor detection and explainable fake news detection. Explainable fake news detection has been studied in the literature. 
However, extension of existing explainable fake news detection works into rumor detection cannot be done because a rumor is defined as an unverified information at the time of posting. Hence, a verified news database cannot be established for explainable rumor detection. To the best of our knowledge, LEX-GAN is the first explainable rumor detection work with demonstrated high accuracy.\\n2.\\tWe would like to provide a quick review of rumor detection and state the strength of our work. Putting aside the explainability, rumor detection has been studied for decades. However, the accuracy of state-of-the-art methods is not promising, which reflects the difficulty of this problem. As we mentioned in the manuscript, to the best of our knowledge, one of the state-of-the-art works GAN-GRU [1] reaches the highest accuracy of 78.1% on a bench-marking rumor dataset PHEME. The average accuracy of other state-of-the-art works on this dataset is around 70%. Our proposed LEX-GAN outperforms all the baselines and achieves 82.4% in terms of accuracy. \\n3.\\tIn addition to rumor detection, we also provide a set of gene mutation detection experiments as an extended application of the proposed LEX-GAN, which showcases the text mining and textual mutation detection power of our framework. We believe our proposed framework could be exploited to make contributions to other domains.\\n\\n[1] Ma, Jing, Wei Gao, and Kam-Fai Wong. \\\"Detect Rumors on Twitter by Promoting Information Campaigns with Generative Adversarial Learning.\\\" The World Wide Web Conference. ACM, 2019.\"}",
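To make the four-sub-model wiring discussed in point 2 concrete, here is a structural sketch in PyTorch; all sizes and the exact layer wiring are our guesses from the description above, not the authors' configuration:

```python
import torch
import torch.nn as nn

class LexGanSketch(nn.Module):
    """Structural sketch of the four sub-models: Gwhere (LSTM), Greplace
    (GRU), Dclassify (CNN), Dexplain (LSTM over tokens)."""
    def __init__(self, vocab, emb=128, hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.g_where = nn.LSTM(emb, hid, batch_first=True)   # where to edit
        self.where_head = nn.Linear(hid, 1)
        self.g_replace = nn.GRU(emb, hid, batch_first=True)  # what to insert
        self.replace_head = nn.Linear(hid, vocab)
        self.d_classify = nn.Sequential(                     # rumor / not
            nn.Conv1d(emb, hid, kernel_size=3, padding=1),
            nn.ReLU(), nn.AdaptiveMaxPool1d(1), nn.Flatten(),
            nn.Linear(hid, 2))
        self.d_explain = nn.LSTM(emb, hid, batch_first=True) # token labels
        self.explain_head = nn.Linear(hid, 2)

    def forward(self, ids):
        e = self.embed(ids)                                  # (B, T, emb)
        where_logits = self.where_head(self.g_where(e)[0]).squeeze(-1)
        replace_logits = self.replace_head(self.g_replace(e)[0])
        class_logits = self.d_classify(e.transpose(1, 2))    # (B, 2)
        explain_logits = self.explain_head(self.d_explain(e)[0])
        return where_logits, replace_logits, class_logits, explain_logits
```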
"{\"title\": \"Response to Review #2 Part II\", \"comment\": \"We appreciate the important and inspiring comments and suggestions from the reviewer. In addition to the response to the reviews and the modification of the manuscript, we would like to further provide a gentle summary of the contributions of our work:\\n1.\\tWe would like to clarify the difference between explainable rumor detection and explainable fake news detection. Explainable fake news detection has been studied in the literature. However, extension of existing explainable fake news detection works into rumor detection cannot be done because a rumor is defined as an unverified information at the time of posting. Hence, a verified news database cannot be established for explainable rumor detection. To the best of our knowledge, LEX-GAN is the first explainable rumor detection work with demonstrated high accuracy.\\n2.\\tWe would like to provide a quick review of rumor detection and state the strength of our work. Putting aside the explainability, rumor detection has been studied for decades. However, the accuracy of state-of-the-art methods is not promising, which reflects the difficulty of this problem. As we mentioned in the manuscript, to the best of our knowledge, one of the state-of-the-art works GAN-GRU [1] reaches the highest accuracy of 78.1% on a bench-marking rumor dataset PHEME. The average accuracy of other state-of-the-art works on this dataset is around 70%. Our proposed LEX-GAN outperforms all the baselines and achieves 82.4% in terms of accuracy. \\n3.\\tIn addition to rumor detection, we also provide a set of gene mutation detection experiments as an extended application of the proposed LEX-GAN, which showcases the text mining and textual mutation detection power of our framework. We therefore believe our proposed framework could be exploited to make contributions to other domains.\\n\\n[1] Ma, Jing, Wei Gao, and Kam-Fai Wong. \\\"Detect Rumors on Twitter by Promoting Information Campaigns with Generative Adversarial Learning.\\\" In The World Wide Web Conference, pp. 3049-3055. ACM, 2019.\"}",
"{\"title\": \"Response to Review #2 Part I\", \"comment\": \"We thank the reviewer\\u2019s suggestions.\\n1.\\tThe PHEME dataset we used in rumor detection task is a state-of-the-art rumor dataset, unanimously used in a lot of prior works. Here we select some recent publications that used PHEME dataset:\\na.\\tHan, Sooji, Jie Gao, and Fabio Ciravegna. \\\"Data Augmentation for Rumor Detection Using Context-Sensitive Neural Language Model With Large-Scale Credibility Corpus.\\\" (ICLR 2019).\\nb.\\tZhang, Qiang, et al. \\\"Reply-Aided Detection of Misinformation via Bayesian Deep Learning.\\\" The World Wide Web Conference. ACM, 2019.\\nc.\\tNguyen, Duc Minh, et al. \\\"Fake news detection using deep markov random fields.\\\" Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 2019.\\nd.\\tBondielli, Alessandro, and Francesco Marcelloni. \\\"A survey on fake news and rumour detection techniques.\\\" Information Sciences 497 (2019): 38-55.\\ne.\\tConforti, Costanza, Mohammad Taher Pilehvar, and Nigel Collier. \\\"Towards Automatic Fake News Detection: Cross-Level Stance Detection in News Articles.\\\" Proceedings of the First Workshop on Fact Extraction and VERification (FEVER). 2018.\\nf.\\tLi, Jing, et al. \\\"A joint model of conversational discourse and latent topics on microblogs.\\\" Computational Linguistics 44.4 (2018): 719-754.\\ng.\\tZubiaga, Arkaitz, et al. \\\"Analysing how people orient to and spread rumours in social media by looking at conversational threads.\\\" PloS one 11.3 (2016): e0150989.\\nRumor detection and gene mutation detection are two independent evaluation case studies and are not used as one proxy for the other. While initially we were not aware of a larger gene dataset, we now added a set of experiments with a larger benchmarking gene dataset. We present the results in Section 5.2. We follow the reviewer\\u2019s suggestion and added one more set of experiments in rumor detection task that uses leave-one-out rule to test the generalization ability of the model. Our leave-one-out rule works as follows: train the model on some news and test on another news, which essentially provide a realistic testing environment. The experimental results of rumor detection task under leave-one-out rule are shown in Table 1.\\n2.\\tThe explainability of LEX-GAN is evaluated experimentally in both rumor detection and gene mutation detection tasks by reporting Dexplain\\u2019s performance. In rumor detection task, Dexplain achieves 80.42% and 81.23% macro-f1 on PHEME\\u2019v5 and PHEME\\u2019v9, respectively. In table 2, Dexplain\\u2019s prediction of suspicious statements in rumors is reported. The explainability of LEX-GAN in rumor detection task is therefore addressed. In gene mutation detection task, Dexplain achieves 89.49% macro-f1 score. In table 4, Dexplain\\u2019s prediction of gene mutation is shown and therefore addresses the explainability of LEX-GAN in gene mutation task. Additional results of Dexplain can be found in appendix.\\n3.\\tWe would like to clarify that we didn\\u2019t claim LEX-GAN doesn\\u2019t need labelled data. We use both real data and generated data to train the model. Rumor is complicated and hard to distinguish since there is no uniform data representation that corresponds to a rumor. LEX-GAN is designed learn the complicated high-dimensional rumor representation. 
Generated data are used to augment the available dataset and therefore enhance the detection ability of LEX-GAN.\"}",
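The leave-one-out protocol described in point 1 can be stated in a few lines; the (event, text, label) schema below is our assumption about how PHEME records would be grouped:

```python
def leave_one_event_out(records):
    """Rotate through news events: train on all events except one and
    test on the held-out event, as in the response's leave-one-out rule."""
    events = sorted({event for event, _, _ in records})
    for held_out in events:
        train = [(text, label) for event, text, label in records
                 if event != held_out]
        test = [(text, label) for event, text, label in records
                if event == held_out]
        yield held_out, train, test

# Usage: for event, train, test in leave_one_event_out(data): fit and score.
```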
"{\"title\": \"Response to Review #4\", \"comment\": \"We thank the reviewer for these important comments.\\nAs for the term \\u201clayered\\u201d we used it because in many applications, such as protocols and standards, the term layered is used to imply the breakdown of the steps of the work in multiple steps, called layers. E.g., the TCP/IP layers, etc. We will think and do some more search to see whether a better term could replace \\u201clayered\\u201d. About the extended dataset, they are generated using the same G in order to ensure the fairness of the comparison between LEX-GAN and all the baselines. We added one more set of experiments that use leave-one-out rule to test the generalization ability of the model. Leave-one-out rule works as follows: train the model on some news and test on another news, which essentially provide a realistic testing environment that contains real, out-of-domain data. The experimental results of rumor detection task under leave-one-out rule are shown in Table 1.\\n\\nWe appreciate the important and inspiring comments and suggestions from the reviewer. In addition to the response to the reviews and the modification of the manuscript, we would like to further provide a gentle summary of the contributions of our work: \\n1.\\tWe would like to clarify the difference between explainable rumor detection and explainable fake news detection. Explainable fake news detection has been studied in the literature. However, extension of existing explainable fake news detection works into rumor detection cannot be done because a rumor is defined as an unverified information at the time of posting. Hence, a verified news database cannot be established for explainable rumor detection. To the best of our knowledge, LEX-GAN is the first explainable rumor detection work with demonstrated high accuracy.\\n2.\\tWe would like to provide a quick review of rumor detection and state the strength of our work. Putting aside the explainability, rumor detection has been studied for decades. However, the accuracy of state-of-the-art methods is not promising, which reflects the difficulty of this problem. As we mentioned in the manuscript, to the best of our knowledge, one of the state-of-the-art works GAN-GRU [1] reaches the highest accuracy of 78.1% on a bench-marking rumor dataset PHEME. The average accuracy of other state-of-the-art works on this dataset is around 70%. Our proposed LEX-GAN outperforms all the baselines and achieves 82.4% in terms of accuracy. \\n3.\\tIn addition to rumor detection, we also provide a set of gene mutation detection experiments as an extended application of the proposed LEX-GAN, which showcases the text mining and textual mutation detection power of our framework. We therefore believe our proposed framework could be exploited to make contributions to other domains.\\n[1] Ma, Jing, Wei Gao, and Kam-Fai Wong. \\\"Detect Rumors on Twitter by Promoting Information Campaigns with Generative Adversarial Learning.\\\" In The World Wide Web Conference, pp. 3049-3055. ACM, 2019.\"}",
"{\"title\": \"Response to Review #3 Part II\", \"comment\": \"3.\\tThe dataset we used in the evaluation of LEX-GAN against prior work on rumor detection task is a state-of-the-art rumor dataset. In addition to the real rumors in the datasets, we generated in total around 8000 rumors. Here we select some very recent publications that used PHEME dataset:\\na.\\tHan, Sooji, Jie Gao, and Fabio Ciravegna. \\\"Data Augmentation for Rumor Detection Using Context-Sensitive Neural Language Model With Large-Scale Credibility Corpus.\\\" (ICLR 2019).\\nb.\\tZhang, Qiang, et al. \\\"Reply-Aided Detection of Misinformation via Bayesian Deep Learning.\\\" The World Wide Web Conference. ACM, 2019.\\nc.\\tNguyen, Duc Minh, et al. \\\"Fake news detection using deep markov random fields.\\\" Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 2019.\\nd.\\tBondielli, Alessandro, and Francesco Marcelloni. \\\"A survey on fake news and rumour detection techniques.\\\" Information Sciences 497 (2019): 38-55.\\ne.\\tConforti, Costanza, Mohammad Taher Pilehvar, and Nigel Collier. \\\"Towards Automatic Fake News Detection: Cross-Level Stance Detection in News Articles.\\\" Proceedings of the First Workshop on Fact Extraction and VERification (FEVER). 2018.\\nf.\\tLi, Jing, et al. \\\"A joint model of conversational discourse and latent topics on microblogs.\\\" Computational Linguistics 44.4 (2018): 719-754.\\ng.\\tZubiaga, Arkaitz, et al. \\\"Analysing how people orient to and spread rumours in social media by looking at conversational threads.\\\" PloS one 11.3 (2016): e0150989.\\n\\n\\n[1] Yang, Fan, Shiva K. Pentyala, Sina Mohseni, Mengnan Du, Hao Yuan, Rhema Linder, Eric D. Ragan, Shuiwang Ji, and Xia Ben Hu. \\\"XFake: Explainable Fake News Detector with Visualizations.\\\" In The World Wide Web Conference, pp. 3600-3604. ACM, 2019.\\n[2] Cui, Limeng, Kai Shu, Suhang Wang, Dongwon Lee, and Huan Liu. \\\"dEFEND: A System for Explainable Fake News Detection.\\\" In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 2961-2964. ACM, 2019.\\n[3] Ruchansky, Natali, Sungyong Seo, and Yan Liu. \\\"Csi: A hybrid deep model for fake news detection.\\\" In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 797-806. ACM, 2017.\\n[4] Ma, Jing, Wei Gao, and Kam-Fai Wong. \\\"Detect Rumors on Twitter by Promoting Information Campaigns with Generative Adversarial Learning.\\\" In The World Wide Web Conference, pp. 3049-3055. ACM, 2019.\"}",
"{\"title\": \"Response to Review #3 Part I\", \"comment\": \"We thank the reviewer for all the insightful comments.\\n1.\\tWe would like to clarify the novel contributions of our work: \\na.\\tWe are not aware of any research on explainable rumor detection. We agree with the reviewer that explainable misinformation and fake news detection has been studied in the literature. However, extension of existing explainable fake news detection works into rumor detection cannot be done because a rumor is defined as an unverified information at the time of posting. Hence, a verified news database cannot be established for explainable rumor detection. To the best of our knowledge, LEX-GAN is the first explainable rumor detection work with demonstrated high accuracy.\\nb.\\tWe further clarify the distinction between explainable fake news work and rumor detection. In explainable fake news or misinformation detection, a larger verified background knowledge set and training dataset are needed to provide explainability. For example, in work [1], a verified news set needs to be collected to provide explainability. In work [2-3], explainability relies on the comments of the fake news and/or metainformation of users. Without these additional information and verified veracity of sentences, the explainable fake news detection problem becomes a rumor detection problem. However, these explainable fake new detection methods cannot be deployed directly because the shortage of these additional data. LEX-GAN, on the contrary, proposes a solution to explainable rumor detection without requesting additional background information and can easily be extended to explainable fake news detection, if these additional data are available. \\nc.\\tPutting aside the explainability, rumor detection has been studied for decades. However, the accuracy of state-of-the-art methods is not promising, which reflects the difficulty of this problem. As we mentioned in the manuscript, to the best of our knowledge, one of the state-of-the-art works GAN-GRU [4] reaches the highest accuracy of 78.1% on a bench-marking rumor dataset PHEME. The average accuracy of other state-of-the-art works on this dataset is around 70%. Our proposed LEX-GAN outperforms all these prior approaches on state-of-the-art benchmarks and achieves 82.4% in terms of accuracy. \\nd.\\tIn addition to rumor detection, we also provide a set of gene mutation detection experiments as an extended application of the proposed LEX-GAN, which showcases the text mining and textual mutation detection power of our framework. We believe our proposed framework could be exploited and make contributions to other domains.\\n2.\\tContent manipulation is a way of augmenting the available data, hence improves the detection ability of LEX-GAN. We did not assume that rumor is mostly generated by replacing some words in a sentence. We acknowledge the complicated nature of the rumor and LEX-GAN captures it by training on both original complicated rumors and generated rumors. Here we provide two examples to demonstrate the rumor detection power of LEX-GAN compared to baselines. \\na.\\tA rumor example that are correctly detected by LEX-GAN but incorrectly detected by other baselines.\\ne.g. 
\\u201cwho's your pick for worst contribution to sydneysiege mamamia uber or the daily tele\\u201d\\nLEX-GAN predicted suspicious words (in parenthesis):\\n\\u201c(who\\u2019s) your pick for worst (contribution) to sydneysiege (mamamia uber) or the daily tele\\u201d\", \"lex_gan_score\": \"0.8558\", \"baseline_cnn_score\": \"0.0029\", \"baseline_lstm_score\": \"0.1316\", \"baseline_vae_cnn_score\": \"0.6150\", \"baseline_vae_lstm_score\": \"0.4768\\nAs we can see in example a, LEX-GAN provides a very low score for a rumor, while other baselines all generated relatively high scores, and even detect it as non-rumor. This is a very difficult example since from the sentence itself, we as human rumor detection agents even cannot pick the suspicious parts confidently. However, LEX-GAN gives a reasonable prediction and shows that it has the ability to understand and analyze complicated rumors. In example b, a non-rumor sentence gains a high score from LEX-GAN, but several relatively low scores from the baselines. This example again confirms that our proposed LEX-GAN indeed captures the complicated nature of rumors and non-rumors.\"}",
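The parenthesized rendering in example a can be reproduced from token-level Dexplain probabilities with a small helper; the 0.5 threshold and the merging of adjacent flagged tokens are our assumptions:

```python
def mark_suspicious(tokens, probs, threshold=0.5):
    """Wrap tokens whose predicted 'suspicious' probability exceeds the
    threshold in parentheses, merging adjacent flagged tokens."""
    out, run = [], []
    for token, p in zip(tokens, probs):
        if p > threshold:
            run.append(token)
        else:
            if run:
                out.append("(" + " ".join(run) + ")")
                run = []
            out.append(token)
    if run:
        out.append("(" + " ".join(run) + ")")
    return " ".join(out)

# mark_suspicious("who's your pick".split(), [0.9, 0.1, 0.2])
# -> "(who's) your pick"
```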
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"Three strengths:\\n1. This paper has been well written and easy to follow. Adequate details have been provided to help easily reproduce the experimental results.\\n2. The technical part is sound - the authors apply GAN for rumor detection and propose to use model-where and model-replace to extend conventional GAN models.\\n3. Experiments are conducted on real-world data.\", \"weaknesses\": \"1. Contributions of novelty are limited. The idea of using GAN to detect misinformation such as rumors and fake news has been studied in the literature several times, and the proposed method does not differ from them significantly. The problem of explainable rumor and fake news detection has also been well studied. Therefore, this piece of work is more a marginal extension of existing solutions.\\n2. The technical solution can be very limited. The generator can only manipulate content by replacing something from a true statement. The hidden assumption that misinformation is mostly generated by replacing some word definitely underestimates the complicated nature of fake news/rumor detection problem. If the assumption holds, the rumor detection problem can be easily done by collecting and comparing against true statements.\\n3. The limited experimental results cannot resolve my concerns. The rumor dataset is very small for a typical deep learning model. I am also curious about how many rumors in the dataset are generated by replacing words.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors proposed an interesting model to solve rumor detection problem. The LEX-GAN model takes the advantage of GAN in generating high-quality fake examples by substitute few works in original tweets. It achieved excellent performance on two kinds of dataset.\\n\\nThe term \\u2018layered\\u2019 was a little confusing to me at the very beginning, though it is strengthened in many places around the paper. Maybe the author could use some other word to better summarize the two layers.\\n\\nAnother question is about the extended dataset with generated data, are they generated using the same distribution from G of the final model? What the result would it be if we use real, out-of-domain data?\\n\\nI would like to see this paper accepted to motivate future works on fake news detection and rumor detection..\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a method for detecting rumours in text. Early on in the paper, the authors claim that this method:\\n1) is more accurate in rumour detection than prior work, \\n2) can provide explainability, and\\n3) does not need labelled data because it is trained on synthetic \\\"non-rumour looking rumours\\\".\\n\\nAll three of these statements are problematic.\\nThe experimental evaluation uses a small dataset of rumour classification (about 15000 tweet related to 14 news topics) and an even smaller dataset of gene classification. The rationale is to use the gene classification task as a proxy for rumour detection. This is not valid. The gene classification task does not contribute to the evaluation of the rumour detection method. The rumour classification dataset is relatively small, but even more importantly, the experimental results on that dataset are not thoroughly analysed, for instance through an ablation test. \\n\\nExplainability is not evaluated experimentally, nor formally proven. \\n\\nThe claim that the method does not need labelled data because it is trained on synthetic \\\"non-rumour looking rumours\\\" is shaky, because 1) one could train the method on labelled data, and 2) it is not clear how \\\"non-rumour looking rumours\\\" are guaranteed in the synthesis phase (how are they defined? how are they evaluated to be \\\"non-rumour looking rumours\\\"? etc).\\n\\nNote that there is no definition of what sort of data representation corresponds to a \\\"rumour\\\" in the paper.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"In the paper, authors proposed a generative adversarial network-based rumor detection model that can label short text like Twitter posts as rumor or not. The model can further highlight the words that are responsible for the rumor accusation.\", \"proposed_model_consists_of_4_sub_models\": \"a G_Where model finds the word to replace so to create an artificial rumor; a G_replace model decides what the replacement word should be; a D_classify model detects if a sequence is a rumor; a final D_explain model pinpoints the word of concern. D_ models and G_ models are trained in an adversarial competing way.\\n\\nExperiments showed that the LEX-GAN model outperforms other non-GAN models by a large margin on a previously published rumor dataset (PHEME) and in a gene classification task.\", \"my_questions\": \"1) The task modeled is essentially a word replacement detection problem. Is this equivalent to rumor detection? Even if it performs really well on a static dataset, it could be very vulnerable to attackers. Various previous works mentioned in the paper, including the PHEME paper by Kochkina et al, used supporting evidence for detection, which sounds like a more robust approach.\\n\\n2) Authors didn't explain the rationale behind the choice of model structure, e.g. GRU vs LSTM vs Conv. The different structures have been used in mix in the paper. Are those choices irrelevant or critical?\\n\\n3) I would like to see more discussion on the nature of errors from those models, but it's lacking in the paper. This could be critical to understand the model\\u2019s ability and limitation, esp given that it\\u2019s not looking at supporting evidences from other sequences.\", \"small_errors_noticed\": \"The citation for PHEME paper (Kochkina et al) points to a preprint version, while an ACL Anthology published version exists.\"}"
]
} |
Skxuk1rFwB | Towards Stable and Efficient Training of Verifiably Robust Neural Networks | [
"Huan Zhang",
"Hongge Chen",
"Chaowei Xiao",
"Sven Gowal",
"Robert Stanforth",
"Bo Li",
"Duane Boning",
"Cho-Jui Hsieh"
] | Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures. Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training. In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass. CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks. We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in L_inf robustness.
Notably, we achieve 7.02% verified test error on MNIST at epsilon=0.3, and 66.94% on CIFAR-10 with epsilon=8/255. | [
"Robust Neural Networks",
"Verifiable Training",
"Certified Adversarial Defense"
] | Accept (Poster) | https://openreview.net/pdf?id=Skxuk1rFwB | https://openreview.net/forum?id=Skxuk1rFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"yPuDDh114w",
"S1e-LkOnjS",
"B1lqtOGcsr",
"H1eLlPRmjr",
"Hyl0oIC7iB",
"HylAHLAXsS",
"Skx-gURXsS",
"BklFgrR7jH",
"SJeZNqlpqH",
"HJgnxLMMcH",
"H1eplpk-5H",
"S1eGg-n9YB",
"B1xHnjsIdS",
"r1geHD2NOH",
"S1eG0peVuB",
"HJlktCsXdH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798724177,
1573842761257,
1573689473787,
1573279470076,
1573279398393,
1573279301563,
1573279208673,
1573278961001,
1572829736713,
1572115956188,
1572039924715,
1571631338496,
1570319277442,
1570191159657,
1570143689948,
1570123383025
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1473/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1473/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"~Matthew_B_Mirman1"
],
[
"ICLR.cc/2020/Conference/Paper1473/Authors"
],
[
"~Matthew_B_Mirman1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a method that hybridizes the strategies of linear programming and interval bound propagation to improve adversarial robustness. While some reviewers have concerns about the novelty of the underlying ideas presented, the method is an improvement to the SOTA in certifiable robustness, and has become a benchmark method within this class of defenses.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes and new results\", \"comment\": \"Dear Reviewers and Area Chair,\\n\\nWe summarize the changes we made to address reviewer concerns, and the new results added to the revision of our paper as follows.\\n\\n1. The use of TPUs to achieve SOTA results may be a concern, so we have implemented a multi-GPU version of CROWN-IBP. To train the SOTA CIFAR model, it takes 1 day on 4 GPUs and the obtained verified accuracy is similar to the results obtained on TPUs. Results are provided in Appendix G, Table H. We will open source our multi-GPU training code.\\n\\n2. We have improved the implementation of CROWN-IBP. Running time measurement is provided in Appendix J, Table G. For the largest network, CROWN-IBP is just about twice slower than IBP (both are trained on 4 GPUs). We also discussed these results in Appendix J.\\n\\n3. We have included performance evaluation on a large amount of smaller models trained on a single GPU (in Table C, Appendix F) by investigating performance statistics (min/median/max). This prevents hand-tuning on a small set of models, and we can consistently outperform IBP over a large range of models.\\n\\n4. We have added more discussions on why a tighter bound can stabilize IBP training in Appendix B and bound tightness comparison between IBP and CROWN-IBP (Figure B).\\n\\n5. We add more discussions on how to choose hyperparameters in Appendix C. Compared to IBP, CROWN-IBP only adds one additional hyperparameter, $\\\\beta$. We did not tune or grid-search this hyperparameter in any experiments; $\\\\beta$ has a clear meaning and our selection has clear justifications.\\n\\nWe thank the reviewers again for their helpful comments and hope they can re-evaluate our paper based on our response to each reviewer, and the highlighted updates in our paper above.\\n\\nThanks,\\nPaper 1473 Authors\"}",
"{\"title\": \"Thank you for your review! We hope you can read our response soon.\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you again for your constructive review. Since the discussion period is closing soon, we will really appreciate it if you can read our response and provide us some feedback. We will be glad to discuss with you on any further concerns.\\n\\nIn our response, we have discussed in detail why a tight bound is helpful for training, and the rationales for combining the two bounds behind the scenes. Our paper is not a naive combination of the two bounds, and we took careful design considerations to get the best of both worlds and achieve SOTA. We hope the reviewer can understand our paper better after reading our response.\\n\\nThank you,\\nPaper 1473 Authors\"}",
"{\"title\": \"Thank you for the discussions on randomized smoothing; TPUs are not necessary for CROWN-IBP (Multi-GPU version will be released)\", \"comment\": \"Dear AnonReviewer2,\\n\\nWe thank you for recognizing the contributions of our paper and raising the discussions on randomized smoothing and concerns on expensive computations on TPUs.\\n\\n(Answer 1) Compared to the randomized smoothing based method, our bound propagation-based approach has several theoretical and practical benefits:\\n\\n1. Recent works [1][2] show that randomized smoothing may not scale well for the important case of L infinity robustness. They provided some preliminary theoretical evidence that even the *optimal* robustness certificate for L infinity smoothing has a dependency on dimension d, thus for high dimensional input (e.g., CIFAR with d=3072), randomized smoothing based method cannot give a good quality bound. On the other hand, for L2 norm, the randomized smoothing certificate is dimension-free. This is a fundamental limitation of randomized smoothing. For the L infinity setting like CIFAR epsilon=8/255, bound propagation-based method like CROWN-IBP still gives the best results.\\n\\n2. As also mentioned by the reviewer, randomized smoothing typically needs a large number of samples, e.g., in Cohen et al., 100,000 random samples for a *single* image. In contrast, our verification can be computed using IBP very fast, which is only 2x forward propagation time. Randomized smoothing costs 50,000x more during inference, and our training procedure is 500x (pessimistically) slower during training time. So it is really a trade-off here; each method has its own strength.\\n\\nBeing scalable to large networks on all important norms with less training/inference cost is still an open challenge. It is not solved by randomized smoothing, nor CROWN-IBP. For the next step, our future work will investigate how to combine the strengths from bound propagation-based certified defense (good for L infinity norm, sample free) and randomized smoothing based approach (good for L2 norm, need a lot of samples). Thus, our contribution as the SOTA bound propagation-based certified defense is important, as it can become an ingredient of the next generation certified defense.\\n\\n(Answer 2) Regarding computation cost, the use of 32 TPUs is not necessary. We use TPUs mainly for obtaining a completely fair comparison to IBP (Gowal et al, 2018), as their implementation was TPU-based. We additionally implemented CROWN-IBP using multi-GPUs. Training the same largest CIFAR network takes 1 day on 4x 1080 Ti GPUs (using the same hyperparameters), and we can achieve similar accuracy. We think this computational cost is quite reasonable, compared to other SOTA uncertified defense like adversarial training, which is also quite slow (10-20x extra cost for each epoch, and needs much more epochs to converge than natural training). \\n\\nWe updated new multi-GPU training results in Table H, and we will open source our multi-GPU training code to make our algorithm available to a broader audience. \\n\\nWe hope our response addresses your concerns on TPU training and randomized smoothing, and please kindly let us know if you have any further questions.\", \"references\": \"[1] A Unified Framework for Randomized Smoothing based Certified Defense. https://openreview.net/pdf?id=ryl71a4YPB\\n[2] Filling the Soap of Bubbles: Efficient Black-Box Adversarial Certification with Non-gaussian smoothing. https://openreview.net/pdf?id=Skg8gJBFvr\"}",
"{\"title\": \"Continue\", \"comment\": \"(Q5) Adds a new hyperparameter for tuning\\n\\nCompared to IBP, CROWN-IBP only adds one additional hyperparameter, $\\\\beta$. $\\\\beta$ has a clear meaning: balancing between the convex relaxation-based bounds and the IBP bounds. \\n\\nIn all our experiments, we did not tune or search this hyperparameter. We fix $\\\\beta_{start}=1$, $\\\\beta_{end}=0$ for all experiments (except for CIFAR 2/255), and this allows us to achieve SOTA. In the CIFAR 2/255 setting, it is known that convex relaxation-based methods outperform IBP, thus it is intuitive to set $\\\\beta=1$. \\n\\nWe added a new paragraph \\u201cHyperparameter \\u03ba and \\u03b2\\u201d to the end of appendix C, giving more insights on how to select these hyperparameters. $\\\\beta$ is a hyperparameter that rarely needs to be tuned, and when you change it you know what to expect. It is not a blackbox hyperparameter that requires a grid search or luck to choose the best value. We believe the addition of this hyperparameter is not a significant con of CROWN-IBP.\\n\\n(Conclusions)\\n\\nWe hope we have addressed the concerns of the reviewer, especially why we combine the two bounds and why loose bounds result in unstable training. Despite being simple at the first glance, there are no existing works of this kind and our methods take a lot of design considerations (bound tightness, avoiding over-regularization, computational efficiency) behind the scenes and also achieves state-of-the-art performance. We hope the reviewer can re-evaluate our paper based on the response, and we look forward to discussing it with the reviewer if further concerns are raised.\"}",
"{\"title\": \"Continue\", \"comment\": \"(Q3) Why would we want to combine the two lower bounds? (\\u201cLack of any theoretical insights/motivation for the proposed method\\u201d)\\n\\nThe paper was motivated by the success and weakness of IBP. Gowal et al. showed that with careful tuning, IBP can significantly outperform convex relaxation-based defense (e.g., on MNIST epsilon=0.3 it reduces verified error from 40% to <10%). We believe part of the reason is that convex relaxation-based methods overregularize the network in large epsilon regime, as we observe in Figure 1. However, in practice, IBP training can be unstable, as shown in Figure 3, where IBP cannot converge very well in all settings. The reason for unstable training is due to the loose bound at the beginning of training, as we have explained in (Q1) above. Additionally, IBP is computationally very cheap (so bounds are loose) but convex relaxation-based methods are in the order of 100X more expensive (see Table G) to obtain tighter bounds.\\n\\nThe combination of the two bounds helps us get the best of both worlds (which is also mentioned by AnonReviewer1): tighter bounds at the beginning of training so more stable training (see Q1 above); no overregularization since we can gradually decay the convex relaxation bounds when epsilon increases; much better computational efficiency compared to regular convex relaxation-based methods since we reuse IBP bounds for intermediate layers.\\n\\nBoth convex relaxation-based methods (see Salman et al., 2019 for an overview) and IBP are theoretically sound bounds, the combination of them are still sound and within the minimax optimization framework (Eq. 2). However, we agree that there is no theoretical answers to the questions why IBP works better than convex relaxations for large perturbation radii, and why convex relaxation-based approach over-regularize. We discussed a possible hypothesis in Appendix A, however, we believe this is still an open challenge and beyond the scope of our paper.\\n\\n\\n(Q4) Novelty seems small\\n\\nOur paper is the first to propose such a unique combination of convex relaxation and IBP based verifiable training procedure, and our empirical results achieve state-of-the-art, outperforming all baseline methods in all settings significantly on MNIST and CIFAR-10.\\n\\nMost importantly, our paper is not a naive combination of two methods. There are lots of rationales behind the scenes:\\n\\n1. As discussed in (Q1) and (Q3), we carefully considered the strengths and weaknesses of both IBP and convex relaxation-based method, provided empirical studies (Table 1 and Figure 1) and designed our method to exploit the strengths of both methods. It is not a naive combination of two random methods; this combination has a clear justification: improve stability at the beginning phase of training and avoid overfitting at the late phase of training.\\n\\n2. The computational cost of the convex relaxation-based method is typically very high. A naive combination will not overcome the drawback of high computational complexity. CROWN-IBP cleverly avoids this problem by re-using IBP bounds as intermediate layer results for convex relaxation, reducing the complexity of convex relaxation-based method from $O(L^2 n^3)$ to $O(L n^2 n_L)$ where usually $n_L$ is much less than $n$ (see the \\u201cComputational Cost\\u201d paragraph on page 7). The reduction of computation is in the order of 100 (see Table G, training time comparison).\\n\\n3. 
Additionally, CROWN-IBP allows efficient implementation for convolutional networks on accelerators (GPUs/TPUs), because the backward bound propagation pass always starts from the last specification vector, which is guaranteed to be a small dense matrix. (See page 7, paragraph \\u201cComputational Cost\\u201d for detailed discussions).\\n\\n--to be continued\"}",
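For readers who want to see how the β/κ interpolation described in this response could look in code, the following PyTorch sketch combines precomputed IBP and CROWN-IBP margin lower bounds into a training objective of the form of Eq. (9). The function names, and the assumption that both bound tensors (with the true-class entry fixed at 0) are produced by separate bounding passes, are ours; this is not the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def ramp(step, ramp_steps, start, end):
    # Linear schedule during the epsilon ramp-up (e.g., beta: 1 -> 0, kappa: 1 -> 0).
    t = min(max(step / float(ramp_steps), 0.0), 1.0)
    return (1.0 - t) * start + t * end

def crown_ibp_objective(logits, labels, m_ibp, m_crown_ibp, step, ramp_steps):
    """m_ibp / m_crown_ibp: lower bounds on class margins under perturbation,
    shape (batch, n_classes), with the true-class entry fixed at 0; both are
    assumed to come from separate IBP / CROWN-IBP bounding passes."""
    beta = ramp(step, ramp_steps, 1.0, 0.0)    # weight on the tighter CROWN-IBP bound
    kappa = ramp(step, ramp_steps, 1.0, 0.0)   # weight on the natural loss
    m_lower = beta * m_crown_ibp + (1.0 - beta) * m_ibp  # convex combination of bounds
    robust_loss = F.cross_entropy(-m_lower, labels)      # upper-bounds worst-case CE
    natural_loss = F.cross_entropy(logits, labels)
    return kappa * natural_loss + (1.0 - kappa) * robust_loss
```

With `beta_start=1, beta_end=0` as recommended in the responses, training starts on the tight convex-relaxation bound and ends on the pure (and cheap) IBP bound.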
"{\"title\": \"More explanations on the bounds and why tight bounds can stabilize training\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you for providing your helpful feedback. Sorry for the potential confusion in our paper and we would like to clarify them in our response, and we have added relevant discussions to our revision.\\n\\n(Q1) Why does loose bounds result in unstable training? Tighter bounds stabilize training?\\n\\nLoose bounds (lower bounds on margins $\\\\underline{m}$, as defined on page 4 \\u201cverification specifications\\u201d) give a loose upper bound of the minimax loss (Eq. 4); in other words, the \\u201crobust loss\\u201d term in (9) will become very large. A large robust loss can be a challenge for the optimizer to minimize, and training can easily diverge or stuck at random guess level.\\n\\nMore specifically, we will explain why a tighter bound like CROWN-IBP can help to stabilize IBP training.\\n\\nWhen taking a randomly initialized network or a naturally trained network, IBP bounds are very loose. But in Table 1, we show that a network trained using IBP can eventually obtain quite tight IBP bounds and high verified accuracy (i.e., the network can adapt to IBP bounds and learn a specific set of weights to make IBP tight and also correctly classify examples). However, since the training has to start from weights that produce loose bounds for IBP, the beginning phase of IBP training can be challenging and is vitally important.\\n\\nWe observe that IBP training can have a large performance variance across models and initializations. Also, IBP is more sensitive to hyper-parameters like kappa or schedule length; as you can see in Figure 3, many IBP models failed to converge (large worst/median verified error) with some kappa or schedule length settings. The reason for instability is that during the beginning phase of training, the loose bounds produced by IBP make the robust loss (Eq. 9) explode, and it is challenging for the optimizer to reduce this loss and find a set of good weights that produce tight IBP verified bounds in the end.\\n\\nConversely, if our bounds are much tighter at the beginning, the robust loss (Eq. 9) always remains in a reasonable range during training, and the network can gradually learn to find a good set of weights that make IBP bounds increasingly tighter. Initially, tighter bounds can be provided by a convex relaxation-based method, and the convex relaxation bounds are gradually replaced by IBP bound (using beta_start=1, beta_end=0), eventually leading to a model with learned tight IBP bounds in the end.\\n\\nTo give you some intuitions on how much tighter CROWN-IBP is than IBP, in appendix B, we added a figure comparing the tightness between IBP bound and CROWN-IBP bound. We take the difference between the two bounds (CROWN-IBP bound minus IBP bound) and plot this difference during the training procedure. At the beginning of training, the bound difference can be very large, and the network gradually learns how to make the IBP bounds tighter during the training process. The use of tighter bounds at the beginning prevents divergence and can guide the network to learn better IBP bounds and achieve better-verified accuracy.\\n\\n(Q2) why not just combine the natural loss with the tight bound, as natural loss can be seen as the loosest bound? Is IBP crucial? and why?\\n\\nThe natural loss does not provide a bound. The loosest bound is negative infinity, and it will explode the robust loss in Eq. (9). 
(see page 4, verification specifications, for the definition of this lower bound on margin, and Eq (4) for how to use it to get an upper bound for the minimax loss).\\n\\nIBP is crucial because it can provide us with a bound for computing the robust loss. Natural training cannot provide such bounds. Additionally, a network trained using the natural loss is not robust and is difficult to verify with current techniques. Models trained using IBP bounds as $\\\\underline{m}$ in Eq. (9) can be quickly verified using IBP.\\n\\n--to be continued\"}",
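As background for the discussion of why IBP bounds are loose at initialization, here is a minimal Python sketch of standard interval bound propagation through one affine layer and one ReLU (textbook IBP, not the authors' implementation). Note that the interval radius is multiplied by |W| at every affine layer, which is why bounds can grow rapidly with depth under random weights, making the robust loss explode early in training:

```python
import torch

def ibp_affine(mu, r, weight, bias):
    # Propagate the box [mu - r, mu + r] through x -> x W^T + b:
    # the center moves through the layer; the radius moves through |W|.
    return mu @ weight.t() + bias, r @ weight.abs().t()

def ibp_relu(mu, r):
    # ReLU is monotone, so apply it to both interval endpoints.
    lo, hi = torch.relu(mu - r), torch.relu(mu + r)
    return (lo + hi) / 2, (hi - lo) / 2
```

Starting from an L_inf ball, `mu` is the input and `r` is epsilon everywhere; chaining these two functions layer by layer yields the IBP output bounds that the responses compare against CROWN-IBP.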
"{\"title\": \"Thank you for the encouraging comments. We have added additional results, and further speedup implementation.\", \"comment\": \"We thank the reviewer for the encouraging comments and constructive feedback. We really appreciate the reviewer\\u2019s precise characterization of the contributions in our work. We provide answers to the raised questions/cons below.\\n\\n(Q1). Extensive experiments on advanced networks/datasets\\n\\nIn our paper we use the same networks as the previous work (Gowal et al. 2018), to stay comparable with their results, which we call DM-Small, DM-Medium, and DM-Large. During the preparation of this submission, we tried much wider networks by increasing the width of the DM-Large model twice and four times, but they did not yield significant performance improvement. Thus we decided to keep models the same as in previous work for a straightforward comparison. We plan to implement more advanced networks (e.g., ResNet, DenseNet, etc) as the next step and scale to larger datasets is our future work.\\n\\nBesides the three models presented in the submitted version of the paper, in this revision, we additionally provide more comprehensive experiments on a large range of MNIST and CIFAR-10 models (18 MNIST models + 17 CIFAR models). The purpose of this experiment is to compare model performance statistics (min, median, and max) on a wide range of models, rather than a few hand-selected models. The results are presented in Appendix F, Table D. On all model structures and parameter settings, CROWN-IBP can outperform IBP in terms of best, median and worst verified errors. Especially, in many situations, the worst-case verified error improves significantly using CROWN-IBP because IBP training is not stable on some of the models. \\n\\n(Q2). More elaborate insights into the choice of the training config/hyper-params:\\n\\nThis is a very good suggestion. kappa controls the trade-off between verified accuracy and standard (clean) accuracy and we typically recommend kappa_start=1 and kappa_end=0. beta determines if we want to use a convex relaxation-based the bound or IBP based bound; the general recommendation is to set beta_start=1 and beta_end=0. We added three paragraphs at the end of Appendix B to discuss the selection of hyperparameters in detail.\\n\\n(Q3). Breakdown of the cost between different layers and operations:\", \"the_per_layer_cost_for_propagating_the_crown_ibp_bound_backward_is_actually_quite_simple\": \"in a high level, for all operations in the neural network, it is $n_L - 1$ times ($n_L$ is the number of classes, for MNIST/CIFAR it is 10) more expensive than forward propagation as there are $n_L - 1$ specifications per example. Thus CROWN-IBP is well suited to problems where the number of classes is small (more classes can be done efficiently by subsampling of specifications, which is left to future work). CROWN-IBP is significantly more efficient than CROWN (Zhang et al., 2018) and convex adversarial polytope (Wong et al., 2018); it is $L n$ times faster than these approaches, where $L$ is the number of layers and $n$ is hidden layer size. 
Generally, (ordinary) CROWN and convex adversarial polytopes are too slow for training.\\n\\nIn practice, CROWN-IBP's training overhead over IBP can be much smaller than the factor of $n_L - 1$, as CROWN-IBP is typically only used during the epsilon schedule rather than the entire training process, and CROWN-IBP generally executes more efficiently than IBP on parallel hardware because it packs denser computation that utilizes hardware accelerators better.\\n\\nEmpirically, we have further optimized the implementation of CROWN-IBP (with roughly 2X reduction in training time in Table G), and we have prepared a multi-GPU version that can train the largest CIFAR-10 model in about 1 day using 4 GPUs. We have provided updated training time measurements in Appendix J and Table G. On the largest CIFAR-10 model, training using CROWN-IBP is actually only about twice as slow as IBP.\\n\\n\\n(Q4). Complementary techniques such as lower precision/quantization:\\n\\nTo see if bfloat16 has any impact on training results, we additionally implement CROWN-IBP on multi-GPUs with float32. We train the CIFAR-10 model using the same hyperparameters as on TPUs and we found that the differences between TPU and GPU training results are small. The results are provided in Table H. We see no big difference between bfloat16 and float32 training.\\n\\n(Q5). Confirmation on state-of-the-art verifiable training baseline:\\n\\nFor the verification of L infinity norm perturbations, the current best baselines in most settings are IBP (Gowal et al., 2018), except on CIFAR 2/255 where (Wong et al. 2018) is the best. CROWN-IBP can achieve better verified accuracy than previous state-of-the-art works in all settings.\\n\\nWe thank the reviewer again and will be glad to discuss with the reviewer any parts that are still unclear, or any additional concerns raised.\"}",
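To illustrate the "small dense specification matrix" that the backward bounding pass starts from (and hence the $n_L - 1$ cost factor discussed above), here is a hypothetical helper that builds it for one example; the function name and the exact row ordering are our illustrative choices:

```python
import torch

def margin_specifications(label: int, n_classes: int) -> torch.Tensor:
    """Build the (n_classes - 1) x n_classes specification matrix whose rows
    encode the margins f_label(x) - f_j(x) for each j != label.  Lower-bounding
    C f(x) certifies the example; since C has only n_L - 1 dense rows, the
    backward pass costs roughly n_L - 1 forward passes per operation."""
    eye = torch.eye(n_classes)
    return torch.stack([eye[label] - eye[j] for j in range(n_classes) if j != label])
```

For a 10-class problem this is a 9x10 matrix, which is what keeps the backward bounding pass cheap and accelerator-friendly.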
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work proposes CROWN-IBP - novel and efficient certified defense method against adversarial attacks, by combining linear relaxation methods which tend to have tighter bounds with the more efficient interval-based methods. With an attempt to augment the IBP method with its lower computation complexity with the tight CROWN bounds, to get the best of both worlds. One of the primary contributions here is that reduction of computation complexity by an order of \\\\Ln while maintaining similar or better bounds on error. The authors show compelling results with varied sized networks on both MNIST and CIFAR dataset, providing significant improvements over past baselines.\\n\\nThe paper itself is very well written, lucidly articulating the key contributions of the paper and highlighting the key results. The method and rationale behind it quite easy to follow.\", \"pros\": \"> Show significant benefits over previous baseline with 7.02% verified test error on MNIST at \\\\epsilon = 0.3, and 66.94% on CIFAR-10 with \\\\epsilon = 8/255\\n> The proposed method is computationally viable, with up to 20X faster than linear relaxation methods with similar. better test errors and within 5-7X slower than the conventional IBP methods with worse errors\", \"cons\": \"> Extensive experiments with more advanced networks/datasets would have been more convincing, esp. given the computation efficiency that enables such experiments\\n> More elaborate insights into the choice of the training config/hyper-params esp. with the choice of \\\\K_start, \\\\K_end across the different datasets\", \"other_comments\": \"> For the computational efficiency studies, it would be helpful to provide a breakdown of the costs between the different layers and operations, to better asses/confirm that benefits of CROWN-IBP method\\n> Impact of other complementary techniques such a lower precision/quantization? One fo the references compared against is the Gowal et al. 2018 for the as a baseline, however, it seems to be those results were obtained on a different HW platform (TPUs - motioned in Appendix-B), with potentially different computational accuracies (BFLOAT16 ?). So, this bears to question of the impact of precision on these methods and also the computation complexity.\\n> Since I'm not very well versed with the current baseline and state-of-art for variable robust training of DNN, it would be good to get an additional confirmation on the validity of the used baselines.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new variation on certified adversarial training method that builds on two prior works IBP and CROWN. They showed the method outperformed all previous linear relaxation and bound propagation based certified defenses.\", \"pros\": \"1. The empirical results are strong. The method achieved SOTA.\", \"cons\": \"1. Novelty seems small. It is a straightforward combination of prior works, by adding two bounds together.\\n2. Adds a new hyperparameter for tuning.\\n3. Lack of any theoretical insights/motivation for the proposed method. Why would we want to combine the two lower bounds? The reason given in the paper is not very convincing:\\n\\n\\\"IBP has better learning power at larger epsilon and can achieve much smaller verified error.\\nHowever, it can be hard to tune due to its very imprecise bound at the beginning of training; on the\\nother hand, linear relaxation based methods give tighter lower bounds which stabilize training, but it\\nover-regularizes the network and forbids us to achieve good accuracy.\\\"\", \"my_questions_with_regards_to_this\": \"(i) Why does loose bound result in unstable training? Tighter bound stabilize training?\\n(ii) If we're concerned with using a tighter bound could result in over-regularization, then why not just combine the natural loss with the tight bound, as natural loss can be seen as the loosest bound. Is IBP crucial? and why?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new method for training certifiably robust models that achieves better results than the previous SOTA results by IBP, with a moderate increase in training time. It uses a CROWN-based bound in the warm up phase of IBP, which serves as a better initialization for the later phase of IBP and lead to improvements in both robust and standard accuracy. The CROWN-based bound uses IBP to compute bounds for intermediate pre-activations and applies CROWN only to computing the bounds of the margins, which has a complexity between IBP and CROWN. The experimental results are verify detailed to demonstrate the improvement.\\n\\nThe improvement is significant enough to me and I tend to accept the paper. The results on CIFAR10 with epsilon=8/255 is so far the state-of-the-art. However, it is far from being scalable enough to large networks and datasets, which has already been achieved by randomized smoothing based approaches. On CIFAR10, it takes 32 TPU cores to train a 4-conv-layer network. Still, such an approach has the advantage of making robust inferences much more efficiently than randomized smoothing, and thus still worth further explorations.\"}",
"{\"comment\": \"In Table 2 of submitted draft, the best reported verified error for MNIST epsilon_test=0.4 is 12.84%. This number does not match other places in paper (in introduction and Appendix B, page 14, we mentioned 12.06%). During final formatting, the 12.84% number in Table 2 was mistakenly copied from results using a wrong training schedule. The correct verified error, clean error and PGD error for MNIST epsilon_test=0.4 are 12.06%, 2.17% and 9.47% respectively; both verified and clean error are noticeably better. We will make these corrections to Table 2 in our revision.\\n\\nThanks,\\nPaper1473 Authors\", \"title\": \"Erratum: CROWN-IBP verified error in Table 2 for MNIST epsilon_test=0.4 should be 12.06%, not 12.84%\"}",
"{\"comment\": \"Hi Matthew,\\n\\nThanks for your feedback! And yes we agree that the phrase \\\"bound propagation\\\" may cause confusion and does not accurately describe the work (Mirman et al. 2018). To make things clear, we will change the beginning of this sentence to:\\n\\nMirman et al. (ICML 2018) proposed a variety of abstract domains to give sound over-approximations for neural networks, including the \\u201cBox/Interval Domain\\u201d (referred to as IBP in Gowal et al.) and showed that\\u2026\\n\\nHope it is even better now. Thanks again for your helpful comments!\\n\\nSincerely,\\nPaper 1473 Authors\", \"title\": \"Thanks for the feedback; we made it more clear\"}",
"{\"comment\": \"Yes, this looks better. However, I'd note that the phrase \\\"bound propagation\\\" has caused confusion in the past and led to the misunderstanding that these domains only track bounds per neuron or per layer. At their core, more domains such as HZD and DeepPoly keep track of relationships between neurons and even between layers (the necessity of which becomes evident for residual networks).\", \"title\": \"Improved\"}",
"{\"comment\": \"Dear Matthew,\\n\\nThank you for your comments on related works and they are very helpful for improving our paper!\\n\\n1. Apologies for the confusion and we agree it is inaccurate. In fact, we actually meant \\u201cBox Domain\\u201d rather than \\u201cHybrid Zonotope Domain\\u201d; I accidentally typed the wrong word. According to your comments, we will revise the sentence to the following:\\n\\nMirman et al. (ICML 2018) proposed a variety of abstract domains for bound propagation, including the \\u201cBox/Interval Domain\\u201d (IBP) and showed that it could scale to much larger networks than other works (Raghunathan, et al, 2018) could at the time. Gowal et al. (ICCV 2019) demonstrated that IBP could outperform many state-of-the-art results by a large margin with more precise approximations for the last linear layer and better training schemes.\\n\\nDoes this look good to you? Let us know if you have any other concerns.\\n\\n2. Thank you for pointing this out. We will remove DiffAI from that sentence. Our main concern is on the full convex relaxation-based training method like Wong & Kolter (ICML 2018), and it is inaccurate the include DiffAI in that sentence. We will revise any other possibly inaccurate citations as well.\\n\\n3. Thank you for pointing us to your interesting work (arxiv 1903.12519) demonstrating the latest performance of the versatile DiffAI framework. We will cite this paper and include the best numbers reported in your paper into our main result table (Table 2).\\n\\nWe appreciate your helpful comments and please kindly let us know if you have any additional concerns.\\n\\nSincerely,\\nPaper 1473 authors\", \"title\": \"Thank for the comments. We will address your concerns on related works in our revision.\"}",
"{\"comment\": \"I would like to address a couple of points:\\n\\n1. In the related section on page 3, you say:\\n\\n> Mirman et al. (2018) proposed to use \\u201cHybrid Zonotope Domain\\u201d which is effectively IBP to scale up linear relaxation based training. Gowal et al. (2018) first demonstrated that IBP could outperform many state-of-the-art results by a large margin after careful tuning.\\n\\nThis is a poor characterization. Mirman et al (ICML July, 2018) proposed to use a variety of domains, including the Box/Interval domain (IBP) and the Hybrid Zonotope Domain (unrelated to IBP in any form). This paper showed that IBP could scale to much larger networks then other works existing at the time could (Raghunathan, et al, 2018) and was concurrent with Wong & Kolter (ICML 2018). Gowal et al. (first released October, 2018, published in ICCV 2019) used interval (IBP) but with more precise approximations than interval for the last linear layer of the network and used linear parameter annealing in the training scheme to achieve even better results.\\n\\n\\n2. In \\\"Issues with linear relaxation based training.\\\" you say:\\n\\n> DiffAI (Mirman et al., 2018), is their high computational and memory cost, and poor scalability.\\n\\nIn fact, DiffAI introduced the first IBP based training system, and thus it makes no sense to claim that it is much slower and more memory consumptive than other techniques which are also built using IBP.\\n\\nFor more metrics of the efficiency and performance using the DiffAI framework, see https://arxiv.org/pdf/1903.12519.pdf and the reproducible numbers in the DiffAI repo: https://github.com/eth-sri/diffai\", \"title\": \"Characterization of Related Work\"}"
]
} |
SJxIkkSKwB | Learning in Confusion: Batch Active Learning with Noisy Oracle | [
"Gaurav Gupta",
"Anit Kumar Sahu",
"Wan-Yi Lin"
] | We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles. We specifically consider the setting of batch active learning, in which multiple samples are selected as opposed to a single sample as in classical settings so as to reduce the training overhead. Our approach bridges between uniform randomness and score based importance sampling of clusters when selecting a batch of new samples. Experiments on
benchmark image classification datasets (MNIST, SVHN, and CIFAR10) show improvement over existing active learning strategies. We introduce an extra denoising layer to deep networks to make active learning robust to label noise and show significant improvements.
| [
"Active Learning",
"Noisy Oracle",
"Model Uncertainty",
"Image classification"
] | Reject | https://openreview.net/pdf?id=SJxIkkSKwB | https://openreview.net/forum?id=SJxIkkSKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Abbs6rgJHt",
"HkxpAZB5ir",
"BJxqTlrcsH",
"Byxs_eSciH",
"S1lIMxH5oH",
"H1lbEu1e5r",
"rJlOwYVRtB",
"Skgso79atr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724148,
1573700053227,
1573699778175,
1573699698797,
1573699597759,
1571973160577,
1571862879913,
1571820451361
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1470/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1470/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1470/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new active learning algorithm based on clustering and then sampling based on an uncertainty-based metric. This active learning method is not particular to deep learning. The authors also propose a new de-noising layer specific to deep learning to remove noise from possibly noisy labels that are provided. These two proposals are orthogonal to one another and its not clear why they appear in the same paper.\\n\\nReviewers were underwhelmed by the novelty of either contribution. With respect to active learning, there is years of work on first performing unsupervised learning (e.g., clustering) and then different forms of active sampling. \\n\\nThis work lacks sufficient novelty for acceptance at a top tier venue. Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of the revision\", \"comment\": \"The following is a summary of the additions made to the revised manuscript that has led us to demonstrate our work in a wider setting and in comparison with very recent results.\\n\\n1. As per the reviewer-2 suggestion, we add Cifar100 experiments in addition to the three datasets we already have.\\n2. We also compare our approach with very recent work (Samarth et. al, 2019) in all the experiments.\\n3. We have added more references as per the suggestion of Reviewer-2 and Reviewer-1.\"}",
"{\"title\": \"Thanks for the review\", \"comment\": \"Our work aims to advance the area of active learning, in which we consider both batch sample acquisition and robustness against noisy labels. In regards to our contributions, we agree with the reviewer that the denoising layer and the uncertainty based sampling are orthogonal to each other. But, given our problem setting, both of these contributions are individually crucial in our approach.\\n\\nActive learning is heavily dependent on the certainty levels of the decision of the oracle, and real-world oracles are prone to making mistakes, especially in settings involving oracles drawn from crowd-sourced data, studying the effects of noisy labels and ways to mitigate it is important. Although the additional denoising layer makes the model much more robust to noisy labels, the proposed batch sampling strategy also improves robustness without denoising layer. We also remark that other active learning algorithms could also benefit from the denoising layer when it comes to handling noisy data. \\n\\nIn regards to the fair comparison with other algorithms, in the noiseless setting, we respectfully disagree with the reviewer. In the first column in Figure 3, we show that the proposed algorithm *without* the denoising layer (black dashed line) performs better than the existing baselines. Furthermore, in the noisy setup with various noise strength ($\\\\epsilon$), we compare both of our proposed algorithms, i.e., the one with the denoising layer and the other with it, with other baselines. While, the version of our algorithm with the denoising layer does enjoy an advantage over other algorithms, the one without the denoising layer still performs better than the other baselines, which shows the efficacy of our scheme in the face of uncertainty. We have added more discussion to this effect in Section 5.2 of the revised manuscript.\"}",
"{\"title\": \"Thanks for the review\", \"comment\": \"Many thanks for your comments. We have added references per suggestion, and made corresponding corrections to our manuscript to improve readability of figures and text.\"}",
"{\"title\": \"Response to reviewer comments and summary of changes made\", \"comment\": \"Thanks for the constructive comments. We now address the review comments and explain the corresponding changes made.\\n\\n1. Experiments\\n1(a) In addition to the 3 datasets of MNIST, CIFAR10, and SVHN we have also added the experiments on CIFAR100 which has more number of classes. The results are presented in the Appendix B.4 of the revised manuscript.\\n\\n1(b) We compare our proposed algorithm with the referred ICCV paper [1]. The results are added in the Section 5.2 of the revised manuscript. We also add the following discussion in the revised manuscript.\\n\\nThe most recent baselines like VAAL, Coreset which make representation of the Training + Pool may not always perform well. While Coreset assigns distance between points based on the model output which suffers in the beginning, the VAAL use training data only to make representation together with the remaining pool in GAN like setting. The representative of pool points may not always help, especially if there are difficult points to label and the model can be used to identify them. In addition to the importance score, the model uncertainty is needed to assign a confidence to its judgement which is poor in the beginning and gets strengthen later. The proposed approach works along this direction. Lastly, while robustness against oracle noise is discussed in [1], however, we see that incorporating the denoising later implicitly in the model helps better. The intuitive reason being, having noise in the training data changes the discriminative distribution from $p(y\\\\vert {\\\\bf x})$ to $p(y^{\\\\prime}\\\\vert {\\\\bf x})$. Hence, learning $p(y^{\\\\prime}\\\\vert {\\\\bf x})$ from the training data and then recovering $p(y\\\\vert {\\\\bf x})$ makes more sense as discussed in Section\\\\,4.2.\\n\\n1(c) We have added the suggested references to the revised manuscript.\\n\\n1(d) Specifically regarding the hyper-parameter tuning -- our method has one crucial hyper parameter which is the inverse uncertainty $\\\\beta$, and we did not select it by the validation set but by a fixed function stated in the paragraph right above Section 5.2. The growth of $\\\\beta$ can vary according to different models, and hence we use a single parameter $l$ to take care of it, which we have to select by being data and model specific through cross-validation. We would be happy to address this issue further if the reviewer could explain more regarding hyper-parameter tuning.\\n\\n2. Sampling time:\\nWe note that different active learning methods rely on either using the current model to make decision regarding selection of next batch of samples, or use subsidiary networks (like GAN setting of [1]) to select samples. Therefore, to quantify the total effort made in an experiment we report the total run-time.\", \"the_total_run_time_of_active_learning_experiment_on_cifar_10_dataset_for_various_algorithms_is_as_follows\": \"(a) Random: 6 min 17 s, (b) BALD: 13 min 52 sec, (c) Coreset: 13 min 57 sec , (d) Entropy: 6 min 20 sec, (e) VAAL: 4 hrs 39 min 54 sec, and (f) Proposed: 32 min 35 sec. All the timings are computed on single GPU Nvidia-GTX 1080 with implementations on PyTorch. For coreset implementation, we have used numpy multithreading to speed-up the pairwise distance matrix computation. We can faithfully assume that the time taken by random selection is all about task \\u2018model training\\u2019 at various active learning acquisition steps (6 to be precise). 
The other algorithms use a variety of techniques, and the extra run-time is therefore attributable to the specific method employed.\\n\\n3. Section 5.2 is not informative:\\n3(a) As per the reviewer's suggestion, we have added the numerical version of the active learning results presented in Figure 3 (mean and standard deviation) to Appendix C of the revised manuscript for all the datasets. \\n3(b) The labels \\\"proposed\\\" and \\\"proposed + noise\\\" caused confusion, so we have changed \\\"proposed+noise\\\" to \\\"proposed with denoising\\\" in all figures.\\n3(c)-(d) We have improved the figures' readability across the manuscript in the revised version.\\n\\n4. Uncertainty-based research: We agree with the reviewer that predictive uncertainty is still an open problem. That is why our method starts with uniform sampling when the model does not yet produce meaningful results and moves towards uncertainty-based sampling when the model is trained better. Tuning the sampling distribution as the model's performance improves during incremental training is certainly an important question to address. We aim to provide a framework that can perform robust sampling while research in uncertainty measures advances.\", \"remaining_concerns\": [\"The legends of the figures are changed to \\u2018proposed+denoise\\u2019 to prevent further confusion.\", \"We also updated the abstract in the revised version. The referenced model (Fchollet, 2015) is implemented in Keras and we have only used the model structure from the reference. The implementations are done in PyTorch and we have ported the model structure to torch code.\"]}",
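To make the p(y|x) versus p(y'|x) argument in the response above concrete, here is a minimal PyTorch sketch of a noise-adaptation output layer of the kind described: a learned row-stochastic transition matrix applied to the clean posterior. The parameterization and initialization are our assumptions, not necessarily the authors' exact denoising layer:

```python
import torch
import torch.nn as nn

class DenoisingLayer(nn.Module):
    """Sketch of a denoising (noise-adaptation) layer: the network is trained
    on noisy labels to fit p(y'|x) = sum_y p(y'|y) p(y|x), while the clean
    posterior p(y|x) is read off *before* this layer at test time."""

    def __init__(self, n_classes):
        super().__init__()
        # Near-identity initialization: start close to the "no label noise" case.
        self.transition_logits = nn.Parameter(5.0 * torch.eye(n_classes))

    def forward(self, clean_probs):
        T = torch.softmax(self.transition_logits, dim=1)  # row-stochastic p(y'|y)
        return clean_probs @ T                            # noisy posterior p(y'|x)
```

Training the composite model on noisy labels and discarding the last layer at inference is what lets the classifier recover p(y|x), matching the intuition given in point 1(b) of the response.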
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: The paper proposes an uncertainty-based method for batch-mode active learning with/without noisy oracles which uses importance sampling scores of clusters as the querying strategy. Authors evaluate their method on MNIST, CIFAR10, and SVHN against approaches such as Core-set, BALD, entropy, and random sampling and show superior performance.\", \"pros\": \"(+): The paper is well-written and well-motivated.\\n(+): The problem is timely and has direct real world applications.\\n(+): Applying the denoising layer is an interesting and viable idea to overcome noise effects.\", \"cons_that_significantly_affected_my_score_and_resulted_in_rejecting_the_paper_are_as_follows\": \"\", \"1___experimental_setting_and_evaluations\": \"\", \"the_biggest_drawback_in_this_paper_is_the_experimental_setting_which_is_not_rigorous_enough_for_the_following_reasons\": \"(a) Weak datasets: Authors have chosen some standard benchmarks but they do not seem to be convincing as the datasets are too easy. Based on my experience, the behavior of an active learning agent trained on small number of classes does not necessarily generalize to cases where the number of classes is large. So I\\u2019d like to ask authors to try to evaluate on datasets with more number of classes as well as more realistic images (as opposed to thumbnail images). \\n(b) Comparison to state of the art: More importantly, authors are missing out on an important baseline which is a recent ICCV paper [1] on task-agnostic pool-based batch-mode active learning that has explored both noisy and perfect oracles and to the best of my knowledge is the current state of the art. Authors can extend their experimental setting to the datasets used in [1] including CIFAR100 and ImageNet and provide comparison. The reason that it is important to compare is that the method in [1] is task-agnostic and does not explicitly use uncertainty hence it is interesting to see how this method performs against it. \\n(c) More on baselines and related work: In addition to [1], different variation of ensemble methods have been serving as active baselines in this field and I recommend adding one as a baseline. For a recent work in this line you can see this paper from CVPR 2018 [2]. Moreover, the authors seem to be missing on a long-standing line of active learning research known as Query-by-Committee (QBC) began in 1997 [3] in the related work section which should be cited as well. \\n(d) Hyper parameter tuning: Last but not least about the experiments is the hyper parameter tuning which is not addressed. It is important to not use the well-known hyper parameters for these benchmarks that have been obtained using validation set from the entire dataset. Authors should explain how they have performed this.\", \"2___report_sampling_time\": \"Another important factor missing in the evaluations is reporting time complexity or wall-clock time that it takes to query samples. Authors should measure this precisely and make sure it is being reported similarly across all the methods. I am asking this because random selection is still an effective baseline in the field and it only takes a few milliseconds. 
Therefore, the sampling time of a new algorithm should be gauged based on that while performing better than random. Given the multiple steps in this algorithm, I am skeptical that the sampling time would be proportional to the gain obtained in accuracy versus labeling ratio over the random selection baseline.\", \"3\": \"Section 5.2 is not informative:\\n(a) My last major concern is section 5.2, where the discussion on results is given along with supporting figures.\", \"lack_of_quantitative_results\": \"First of all, no quantitative results are given for the values plotted in Figures 3 and 4 (neither in the main text nor in the supplement), and different methods happen to be too close to each other, making it hard to see the right color for standard deviations. Also, in the discussion corresponding to those figures no information is provided in this regard. It is important to report how much labeling effort this algorithm is saving by comparing the number of samples needed by each method to achieve the same accuracy, because that is the main goal in AL. Lack of numbers also makes it hard for this work to be used by others.\\n(b) Figure legends: The way the authors have labeled their method in Figure 3 is confusing, as \\u201cProposed+noise\\u201d happens to achieve better performance than \\u201cProposed\\u201d. I think by \\u201cnoise\\u201d the authors meant the denoising layer was being used (please correct me if I am wrong), but this is not what the legends imply. \\n(c) X axis label: It is common to report accuracy versus percentage of labeled data, making it clearer how far each experiment has progressed through each dataset. Additionally, I recommend reporting the maximum achievable accuracy for each dataset assuming that all the data was labeled. This serves as an upper bound.\\n(d) Font sizes in figures: It will be helpful to make them larger.\\n \\n4. I also have a more general concern about uncertainty-based methods. I know that they have been around for a long time, but given the fact that predictive uncertainty is still an open problem and there is still no concrete method to measure calibrated confidence scores for outputs of a deep network (Dropout and BN given in this paper have already been outperformed by ensembles (see [4])), relying on uncertainty is not the best direction to go. It is literally a chicken-and-egg problem to try to rely on confidence scores of the main-stream task while it is being trained itself. This issue has been raised in this paper but I am still not convinced that the paper has fully addressed it. I think the community needs to explore task-agnostic methods more deeply. [1] is a good start on this path but there is always more to do. This concern is not necessarily a major part of my decision assessment, and I only want the authors to state their opinion on this and explain how accurately they think this issue is being addressed.\\n \\nThe following issues are less major and are given only to help, and not part of my decision assessment:\\n\\n1- In Figure 3(c), it appears that the accuracy for \\u201cProposed + noise\\u201d when \\\\epsilon=0.1 is higher than when it is noise-free. It might be a misreading, as the figure is coarse and it is hard to compare, but if that is the case, can the authors explain it?\\n\\n2- The Abstract does not read well and does not state the main contribution. It puts too much emphasis on batch-mode active learning, which has become an intuitive approach since deep networks have become popular.
Also the wording \\u201cOur approach bridges between uniform randomness and score based importance sampling of clusters\\u201d should be changed as all other active learning algorithms are trying to do that. \\n\\n3 - In section 5.1 please state that you used VGG 16 (I assume so since it is what was used in the cited reference (Gal et al. 2017) but authors need to verify that. Also, the other citation given for this (Fchollet, 2015) is confusing as it is Keras package documentation while in the next sentence authors state that they have implemented their algorithm in PyTorch. So please shed some light on this.\\n\\n*******************************************************************\\nAs a final note, I would be willing to raise my score if authors make the experimental setting stronger (see suggestions above).\\n\\n[1] Sinha, Samarth, Sayna Ebrahimi, and Trevor Darrell. \\\"Variational Adversarial Active Learning.\\\" arXiv preprint arXiv:1904.00370 (2019). \\n[2] Beluch, William H., et al. \\\"The power of ensembles for active learning in image classification.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n[3] Freund, Yoav, et al. \\\"Selective sampling using the query by committee algorithm.\\\" Machine learning 28.2-3 (1997): 133-168.\\n[4] Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. \\\"Simple and scalable predictive uncertainty estimation using deep ensembles.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n*******************************************************************\\n*******************************************************************\\n*******************************************************************\", \"post_rebuttal\": \"In the revised version, there are new tables (Table 1-4) provided in the appendix which I found too different than results reported for previous baselines by more than 6%. For example, according to Core-set paper (Sener, 2018), Figure 4, they achieve near 80% using 40% of the data (20000 samples), and according to VAAL paper (Sinha et al. 2019 github page: https://github.com/sinhasam/vaal/blob/master/plots/plots.ipynb), they achieve 80.90+-0.2. However, the current paper reports 71.99 \\u00b1 0.55 for Core-set, and 74.06 \\u00b1 0.47 for VAAL which is a large mismatch.\\nMore importantly, looking at the results provided in VAAL paper (Sinha et al. 2019 or Core-set paper (Sener, 2018) they show their performance as well as most of their baselines is superior to random selection by a large gap, but in this paper results shown in Table 1 to 4 in almost all of them random is superior (or on-par) to all baselines and the proposed method is the only method that outperforms baseline which is clearly a wrong claim. Therefore, I decrease my score from weak reject to reject.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The topic handled in this paper is very important (hot topic), in my opinion. The authors tackled is the problem of training machine learning models incrementally using active learning with the oracle is noisy. Multiple samples are selected instead of a unique sample as in the classical framework. The paper seems technically sound.\\n I have some suggestions for improving the quality of the paper. See below.\\n\\n- Improve the captions of Figures 1 and 2 (more explanation, more clarity).\\n\\n- Use bigger parentheses in Eq. (3).\\n\\n- In other to increase the impact of your work, consider in your introduction (or in the \\\"related works\\\" Section) this kind of approaches that are also active learning algorithms:\\n\\nD. Busby, \\u201cHierarchical adaptive experimental design for Gaussian process emulators,\\u201d Reliability Engineering and System Safety, vol. 94, pp. 1183\\u20131193, 2009.\\n\\nL. Martino, J. Vicent, G. Camps-Valls, \\\"Automatic Emulator and Optimized Look-up Table Generation for Radiative Transfer Models\\\", IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2017\\n\\nThis discussion can increase the number of interested readers.\\n\\n- Upload the final version of your work in Research Gate and ArXiv (to increase the impact of your work).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides a solution for batch active learning with noisy oracles in deep neural networks. Their algorithm suffers less from the well-known cold-start issue in active learning. They also improve the robustness by adding an extra denoising layer to the network.\\n\\nThe main concern is that the two contributions are rather orthogonal to each other and each of them is not that significant. \\nThe first contribution, which alleviates the cold-start problem, is not very surprising, since it is a soft version of previous method BALD. \\nThe second contribution, a de-noising layer, is relatively orthogonal to batch active learning. \\n\\nIn the experiments, the authors compared Proposed\\n+noise with Proposed, Random, BALD, Coreset, and Entropy, but I think the only fair comparison here is between Proposed+noise and Proposed.\"}"
]
} |
HJx81ySKwr | Iterative energy-based projection on a normal data manifold for anomaly localization | [
"David Dehaene",
"Oriel Frigo",
"Sébastien Combrexelle",
"Pierre Eline"
] | Autoencoder reconstructions are widely used for the task of unsupervised anomaly localization. Indeed, an autoencoder trained on normal data is expected to only be able to reconstruct normal features of the data, allowing the segmentation of anomalous pixels in an image via a simple comparison between the image and its autoencoder reconstruction. In practice however, local defects added to a normal image can deteriorate the whole reconstruction, making this segmentation challenging. To tackle the issue, we propose in this paper a new approach for projecting anomalous data on an autoencoder-learned normal data manifold, by using gradient descent on an energy derived from the autoencoder's loss function. This energy can be augmented with regularization terms that model priors on what constitutes the user-defined optimal projection. By iteratively updating the input of the autoencoder, we bypass the loss of high-frequency information caused by the autoencoder bottleneck. This allows producing images of higher quality than classic reconstructions. Our method achieves state-of-the-art results on various anomaly localization datasets. It also shows promising results at an inpainting task on the CelebA dataset. | [
"deep learning",
"visual inspection",
"unsupervised anomaly detection",
"anomaly localization",
"autoencoder",
"variational autoencoder",
"gradient descent",
"inpainting"
] | Accept (Poster) | https://openreview.net/pdf?id=HJx81ySKwr | https://openreview.net/forum?id=HJx81ySKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QwKxPzk7u5",
"SJxhkqf7iB",
"H1lbmOfQir",
"HyeKGwMQsr",
"ryxfYSfmoH",
"B1eyRHZV5H",
"rJgi-7gN9H",
"BJeUAnt7qr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724118,
1573231075523,
1573230617345,
1573230352551,
1573229946162,
1572242887216,
1572238083193,
1572211917865
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1469/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1469/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1469/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposed to use an autoencoder based approach for anomaly localization. The method shows promising on inpainting task compared with traditional auto-encoder.\\n\\nFirst two reviewers recommend this paper for acceptance. The last review has some concerns about the experimental design and whether VAE is a suitable baseline. The authors provide reasonable explanation in rebuttal while the reviewer did not give further comments.\\n\\nOverall, the paper proposes a promising approach for anomaly localization; thus, I recommend it for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer3\", \"comment\": \"Dear reviewer, thank you for your time and comments. First, we would like to draw your attention to our general comment, answering questions about overall baselines for anomaly localization and statistics on the benefits of our method. This comment also explains changes in the last revision of the paper.\", \"we_will_address_your_concerns_in_order\": \"1) We understand your concern about the differences in AUROC values compared to Bergmann et al. CVPR'19.\\nPlease note that as the code for Bergmann et al. CVPR'19 has not been released, we have implemented ourselves the L2 and DSSIM autoencoder baselines used in the paper. However, our experimental settings have some differences which may explain why the empirical AUC values are not exactly the same of Bergmann et al. CVPR'19. These differences are motivated by a desire to have a single setup of architecture and hyperparameters for all datasets. In details\\u00a0:\\n\\n- As we explained in the Section 4.1 of our paper, we always work with images of size 128\\u00d7128 for textures and objects, while Bergmann et al. work with images of size 256\\u00d7256 for objects datasets and 128\\u00d7128 for textures datasets.\\n- The exact parameters for the random translation and rotation data augmentations for objects datasets are not provided in the paper of Bergmann et al. CVPR'19, thus it is very likely that we do not perform exactly the same data augmentation and thus the training data for the models may be different.\\n- We always compute the ROC directly patch-by-patch from the autoencoder anomaly maps, while Bergmann et al. CVPR'19 reconstruct the texture images from the fusion of overlapping patches and perform averaging of the resulting anomaly maps.\\n\\nWe computed plots to compare Bergmann et al. results with our baselines.\", \"l2_ae\": \"\", \"https\": \"//i.imgur.com/pocIdPw.png\\nThese show that while there are indeed differences in the two implementations, the trend remains comparable. Furthermore, as detailed in our general comment, we feel that the main contribution of our paper is a method to improve the results of any AE-based model rather than providing a new baseline model for anomaly detection.\\n\\n2) Considering the MVTec anomaly dataset, we could argue that there is not a single baseline model which has the best performance for every object and texture category. This is what we observed in our experiments, and it goes in line with the experiments in the benchmark performed by Bergmann et al. 2019. Nevertheless, the autoencoder trained with L2 or DSSIM has the best performance in average in Bergmann et al. 2019, so we included these two baselines. Most importantly, we showed that most of the time AE baselines, deterministic or probabilistic, have a performance improvement when augmented by our method.\\nWe have also provided in our general comment statistics of the improvement rate due to our method over all presented baselines and datasets.\\n\\n3) With respect to the inpainting evaluation, we have provided in Appendix D a qualitative comparison with the recent work of Ivanov et al, ICLR 2019. The quality of the reconstructions is comparable, even though our VAE is trained without any assumptions over the mask's properties. This comparison was not included in the main text for lack of space.\\n\\nWe hope that we have answered your concerns, thank you again.\", \"ssim_ae\": \"\"}",
"{\"title\": \"Reply to Reviewer1\", \"comment\": \"Dear reviewer, thank you for your time and comments. First, we would like to draw your attention to our general comment, answering questions about overall baselines for anomaly localization and statistics on the benefits of our method. This comment also explains changes in the last revision of the paper.\", \"we_address_here_each_of_your_concerns\": \"- \\u00ab\\u00a0The major concern is how the quality of f_{VAE} is estimated. From the paper it seems f_{VAE} is not updated. Will it be sufficient to rely a fixed f_{VAE} and blindly trust its quality?\\u00a0\\u00bb\\nFor a full context, we remind that the VAE is first trained on a dataset comprising only normal data, to obtain an estimate of the probability distribution of normal data. Since it is a standard VAE training, the quality of the model can be assessed using any of the classical techniques (cross validation, visual inspection, etc). During inference, the underlying VAE model\\u2019s weights are indeed frozen and the only optimized parameters are the input image\\u2019s pixels, in an adversarial example\\u2019s fashion. As you suggest, we could potentially update the underlying model with test data identified by our method as normal, as in a continuous learning setup, but we leave this to future work.\\n\\n- \\u00ab\\u00a0Table 1: It is not clear how \\\"the mean improvement rate of 9.52% over all baselines\\\" was calculated.\\u00a0\\u00bb\\nFollowing your suggestion, we clarified how this metric was calculated, and added a few statistics on the benefits of our method in the last revision. They were computed by aggregating the improvement rate between a baseline and its grad-augmented version $(AUC_{grad} \\u2013 AUC_{base}) / AUC_{base}$, over all presented baselines and datasets. \\n\\n- \\u00ab\\u00a0Figure 3: Will VAE-grad or DASE-grad perform better? Since these base lines are used in other places, it is better to compare with them as well.\\u00a0\\u00bb\\nWe augmented figure 3 with the three remaining baselines. They show similar results to the L2AE on these images. Due to the lack of space, we added this comparison in appendix C. \\n\\nWe hope that we have answered your concerns, thank you again for your suggestions.\"}",
"{\"title\": \"Reply to Reviewer2\", \"comment\": \"Dear reviewer, thank you for your time and comments. First, we would like to draw your attention to our general comment, answering questions about overall baselines for anomaly localization and statistics on the benefits of our method. This comment also explains changes in the last revision of the paper.\\n\\nIn particular, in order to better illustrate the variability of the results associated with our method, we added in appendix F a histogram of the AUC improvement rate on all datasets and architectures reported in table 1. The median improvement rate over all baselines and datasets is at 4.33%, the 25th percentile at 1.86% and the 75th percentile at 15.86%.\\n\\nConcerning the inpainting comparision with Ivanov et al., please note that due to the \\u00ab\\u00a0creative\\u00a0\\u00bb nature of the inpainting task, a quantitative metric is hard to define. Nevertheless, we added those results as an interesting application of being able to project on a learned manifold, and reproduced the results from Ivanov et al. from their provided model for the sake of a comparison with another VAE-based method. Figure 8 shows that the quality of the reconstructions in both methods is comparable, even though our VAE is trained without any assumptions over the mask's properties. Following your suggestion, we added a comparison sentence in the caption for figure 8.\\n\\nWe hope that we answered your concerns, thank you again for your review.\"}",
"{\"title\": \"General comments\", \"comment\": [\"Dear reviewers, thank you all for your time and comments. We have followed your suggestions in our new revision of the paper and we believe it strengthens its content. We note that reviewers were positive about the significance of our work, that \\u00ab\\u00a0discusses an important problem of solving the visual inspection problem limited supervision\\u00a0\\u00bb, and they note that our general approach is \\u00ab\\u00a0intuitive\\u00a0\\u00bb, and \\u00ab\\u00a0[leads] to significantly better results\\u00a0\\u00bb, while our second idea \\u00ab\\u00a0significantly speeds up the model convergence\\u00a0\\u00bb. Nevertheless, several questions are raised on what constitutes the overall baseline for unsupervised anomaly localization, as well as the need for further statistics on the benefits of our method.\", \"As the authors of Bergmann et al., 2019, we acknowledge the lack of an overall \\u00ab\\u00a0best\\u00a0\\u00bb baseline for anomaly localization, but we want to emphasize that our contribution is a method to increase the performance of any autoencoder-based model.\", \"Thus, to give a better sense of the overall improvements of our method, we computed the histogram of the improvement rate in AUC between a baseline and its grad-augmented counterpart, over all datasets and over all baseline models. We added this histogram and a short analysis in appendix F. We reported additional statistics of this overall improvement distribution in the main text: the median improvement rate over all baselines and datasets is at 4.33%, the 25th percentile at 1.86% and the 75th percentile at 15.86%.\", \"We clarified the table of results, highlighting the AUC increase or decrease with colors instead of arrows.\", \"We augmented figure 3 with the three remaining baselines, which shows similar results to the L2AE. Due to the lack of space, we added this comparison in appendix C.\", \"We hope that we answered here most of your concerns. We will also answer each of your comments in detail.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper improves anomaly detection by augmenting generative models (VAE, etc) by iteratively projecting the anomalous data onto the learned manifold, using gradient descent of the autoencoder reconstruction term relative to the image input. The work seems related to AnoGAN, only instead of iterating over the latent space, the iteration is over the more expressive input space. The method is intuitive and a good parallel to Adversarial projections is made in the paper. To the best of my knowledge, the idea is novel, although I am not completely sure.\\nThe second idea in the paper is to scale the losses by the reconstruction accuracy, which also is intuitive and shown to significantly speeds up the model convergence. \\n\\nThe experimental results are pretty convincing, showing both quantitatively and qualitatively that the method improves consistently over using the underlying vanilla generative models (AE/DSAE/2 VAEs). One desirable improvement is to get error bounds on the results, those are currently missing. Also, based on the inpainting results in Fig 7, it's not really clear if the method generates better results than Ivanov et al.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper discusses an important problem of solving the visual inspection problem limited supervision. It proposes to use VAE to model the anomaly detection. The major concern is how the quality of f_{VAE} is estimated. From the paper it seems f_{VAE} is not updated. Will it be sufficient to rely a fixed f_{VAE} and blindly trust its quality?\", \"detailed_comments\": [\"Table 1: It is not clear how \\\"the mean improvement rate of 9.52% over all baselines\\\" was calculated.\", \"Figure 3: Will VAE-grad or DASE-grad perform better? Since these base lines are used in other places, it is better to compare with them as well.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: The paper proposes to use autoencoder for anomaly localization. The approach learns to project anomalous data on an autoencoder-learned manifold by using gradient descent on energy derived from the autoencoder's loss function. The proposed method is evaluated using the anomaly-localization dataset (Bergmann et al. CVPR 2019) and qualitatively for the task of image inpainting task on CelebA dataset.\", \"pros\": [\"surprisingly simple approach that led to significantly better results.\", \"applications to image inpainting, and demonstrates better visual results than using simple VAE.\"], \"concern\": [\"While I agree that authors have shown relative performance compared to various approaches, I am not able to map the results of Table-1 to that of Table-3 (second column ROC values) in Bergmann et al. CVPR'19. The setup in two works seem similar. Can the authors please comment to help me understand this difference?\", \"The proposed approach leads to better performance over the baseline models; it is not clear what is a suitable baseline model for the problem of anomaly localization is?\", \"The results for image inpainting looks promising. The authors may want to add comparison with existing image inpainting approaches for the reader to better appreciate the proposed approach.\"]}"
]
} |
rJeBJJBYDB | Chart Auto-Encoders for Manifold Structured Data | [
"Stephan Schonsheck",
"Jie Chen",
"Rongjie Lai"
] | Auto-encoding and generative models have achieved tremendous success in image and signal representation learning and generation. These models, however, generally employ the full Euclidean space or a bounded subset (such as $[0,1]^l$) as the latent space, whose trivial geometry is often too simplistic to meaningfully reflect the structure of the data. This paper aims at exploring a nontrivial geometric structure of the latent space for better data representation. Inspired by differential geometry, we propose \textbf{Chart Auto-Encoder (CAE)}, which captures the manifold structure of the data with multiple charts and transition functions among them. CAE translates the mathematical definition of manifold through parameterizing the entire data set as a collection of overlapping charts, creating local latent representations. These representations are an enhancement of the single-charted latent space commonly employed in auto-encoding models, as they reflect the intrinsic structure of the manifold. Therefore, CAE achieves a more accurate approximation of data and generates realistic new ones. We conduct experiments with synthetic and real-life data to demonstrate the effectiveness of the proposed CAE. | [
"Auto-encoder",
"differential manifolds",
"multi-charted latent space"
] | Reject | https://openreview.net/pdf?id=rJeBJJBYDB | https://openreview.net/forum?id=rJeBJJBYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"c4e976xo1",
"HylUDv92oH",
"Skle4vq3sH",
"r1lNLU5hsS",
"SkeJBH92sr",
"Hye6AV9hor",
"B1xTQ492oH",
"BkeUkXq2jS",
"HJeMrVc3Kr",
"SyefiJ_uYB",
"HJeesHDodr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724088,
1573853022476,
1573852967978,
1573852747851,
1573852471447,
1573852372599,
1573852196790,
1573851870041,
1571755065752,
1571483546260,
1570629016136
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1467/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1467/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1467/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to use more varied geometric structures of latent spaces to capture the manifold structure of the data, and provide experiments with synthetic and real data that show some promise in terms of approximating manifolds.\\nWhile reviewers appreciate the motivation behind the paper and see that angle as potentially resulting in a strong paper in the future, they have concerns that the method is too complicated and that the experimental results are not fully convincing that the proposed method is useful, with also not enough ablation studies. Authors provided some additional results and clarified explanations in their revisions, but reviewers still believe there is more work required to deliver a submission warranting acceptance in terms of justifying the complicated architecture experimentally.\\nTherefore, we do not recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer 1, part 3\", \"comment\": \"g) is this paper to be seen as an implementation of Chen et al. (2019)?\\n \\n No, This is not an implementation of Chen et al 2019. Their paper deals with using a network to represent a function on a known, fixed manifold, while we are concerned with capturing the manifold. \\n\\nh) the concept of intrinsic dimension slightly varies in literature. \\n\\nWe have clarified this definition in our revision.\\n\\ni) Section 4.1 mixes 'topological' and 'geometrical' concepts. \\n\\nIn this case we do mean geometry, sorry for our typo.\\n\\nj) p.5: I like the 'partition of unity' approach\\u2026 \\n\\nThis looks like a convex combination, but it is more restricted than a convex combination. Each of p_a has compact support in U_a, i.e. p_a(x) has contribution as long as x in the compact support of p_a.\\n\\n k). - p.13: to what extent are the 'faithfulness' and 'coverage' established metrics? \\n \\n The reconstruction error measures the fidelity of the model, the unfaithfulness measures how far synthesized data decoded from samples drawn on the latent space are to samples from the original training data and coverage indicates how much of the training data is covered by the encoder. Models which produce unrealistic data when sampling from the latent space will have high unfaithfulness sores and models which experience mode collapse will have low coverage scores. Metrics similar to the faithfulness have been previously used to measure novelty, but coverage is established metric for measuring mode collapse in GANs.\"}",
"{\"title\": \"Response to reviewer 1, part 2\", \"comment\": \"b). Uniformity of Covering\\n\\n In general, when we say that the charts need to be balanced and regular, we do no mean uniform. We mean that the distortion of the encoding (and decoding) map should be bounded. Rather than have many charts which approximate the entire manifold poorly, we would like local charts which describe a neighborhood of the manifold very accurately. The size of the charts depends on the geometry of the data. Flat regions can be covered with very large charts, but areas of high curvature may need multiple charts. Therefore uniformly sampling in the latent space does not correspond to uniformly sampling on the data manifold. The observation that some classes appear more than others is not problematic, as the chart selection module does not require equally balanced classes and some of these points may lie in the intersection of several charts. This flexibility is an advantage of our method, since it uses a data-driven approach to determine the coverage of the local neighborhoods on the manifold. The opening examples on the circle, sphere and double torus result in relatively uniform charts because the underlying manifolds are highly regular.\\n\\nc). Digital morphing example. \\n \\n Our purpose with this experiment is not to claim that we are \\u2018better\\u2019 at this type of application, just that we are able to do it without using a variational assumption on the latent space. In the standard VAE set up this assumption prevents the model from memorizing the data-our method does so via the geometrically inspired architecture and Lipschitz regularization. \\n\\nd) Why use chart? \\n\\nThe main contribution of introducing the chart structured space is to allow chart-structured latent space to reflect data manifold structure. This is necessary according to geometric argument and our numerical experiments. Moreover, the structure of the data manifold will always be compatible with some chart space. A chart representation uniquely identifies the manifold up to isometry. This means that any geometric property that we wish to study---geodesic distance, curvatures etc---can all be formulated in terms of the chart representation. We\\u2019ve added an experiment showing how to estimate geodesics on a data manifold from a trained CAE. .\", \"technical_clarity_response\": \"a). Why Euclidean chart do not suffer from being too simplistic. \\n \\n In this work the emphasis is placed on multiple charts (as opposed to one chart), rather than non-Euclidean (as opposed to Euclidean). Models with a single Euclidean latent space can do a very good job of reconstructing data which may have very complicated structure, but these representations do not capture all global properties of the data. Referring to our motivating example there is no way for a standard auto-encoder to detect the cyclic structure of a circle, even though it may be able to reconstruct all points on it. According to differential geometry, any compact manifold can be covered with a finite collection of charts, each of which is homeomorphic a euclidean domain. This motivates us to use a collection of simple euclidean domains as building blocks to represent complicated structures. \\n\\nb) about homeomorphism. 
\\n\\n We use the mathematical concept of the atlas parameterization as a motivation for the architecture and loss functions.We view the proposed chart auto-encoder as being approximately homeomorphic in the same way that auto-encoders are viewed as being approximately invertible. Moreover, the purpose of using a chart representation is that a homeomorphism always exists between a local neighborhood on the manifold and a Euclidean domain. In addition a compact manifold can always be covered with a finite collection of these charts which obey the chart transition conditions.\\n\\nc) The torus embedding example. \\n\\n The double torus cannot be isometrically embedded in R^2, so no matter what representation is learned, some geometric information (distances, curvatures, etc) will be lost. Furthermore topological structures cannot be preserved either because the torus is not homeomorphic to R^2. \\n\\nd)- p.2: paths become invariant to _what_ exactly? \\n\\n In a VAE whose latent dimension is larger than the intrinsic dimension of the manifold, at any point in the latent space, there are directions in which moving either: does not change the output of the decoder or, moves off the manifold. \\n\\ne). -p.2: what is the 'topological class'? \\n\\nHere we just mean the set of manifolds with the same topology.\\n\\nf) p.2: would LLE not be a good precursor? \\n\\nSimilar to ISOMAP, LLE may help motivate this paper. Both of these techniques use a single operation to encode the entire manifold, while our approach automatically learns multiple local operations.\"}",
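The partition-of-unity weighting discussed above can be sketched as a weighted sum of per-chart reconstruction errors. Interfaces and names below are assumptions for illustration, not the paper's actual code.

```python
# Minimal sketch: per-chart reconstruction errors weighted by chart
# probabilities p_a(x) that sum to one (a partition-of-unity-style weighting).
import torch

def chart_reconstruction_loss(x, chart_encoders, chart_decoders, chart_predictor):
    p = chart_predictor(x)                        # (batch, n_charts), rows sum to 1
    errors = []
    for enc, dec in zip(chart_encoders, chart_decoders):
        x_a = dec(enc(x))                         # reconstruction through chart a
        errors.append(((x - x_a) ** 2).flatten(1).sum(dim=1))
    errors = torch.stack(errors, dim=1)           # (batch, n_charts)
    return (p * errors).sum(dim=1).mean()         # charts active where p_a(x) > 0
```

A point in the overlap of two charts then contributes reconstruction terms for both of them, which is what drives the decoders to agree on transition zones.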
"{\"title\": \"Response to reviewer 1, part 1\", \"comment\": \"We thank the reviewer for their extremely thorough and thoughtful review.\\n\\nConcern 1. Response: a). Claim \\u2018triviality\\u2019 of Euclidean space. \\n\\nWhen we say that Euclidean space is trivial, we mean that it has the flat metric (\\u2018trivial\\u2019 metric here is a mathematical terminology referred as a flat metric. We\\u2019ve revised \\u2018trivial\\u2019 as \\u2018flat\\u2019 in our revision) . We agree that many models with Euclidean space do a very good job of reconstructing data which may have very complicated structure. In fact, this is not in conflict with our geometric explanation of encoding and decoding as a flat Euclidean domain can be used to parameterize smaller region of a complicated manifold. We are most interested in how the structure of the data relates to the structure of latent space, and how we can recover geometric information (which is uniquely determined from the charts and transition functions) by training a model.\\n\\nb). Terminology of \\u2018Topology\\u2019 and \\u2018Geometry\\u2019. \\n \\n Topology of the data is referred to global properties (such as holes in the manifold). We intentionally use geometry since the learning results of encoders and decoders provide a parameterization of the data manifold. This can further help us understand geometric information of manifolds including geodesics, curvature etc. Our revision has clarified this terminology. \\n\\nc). Initial Encoder is not analyzed. \\n \\nThe initial encoder serves as a dimension reduction step to find a low dimensional isometric embedding of the data. For example, given a data manifold as a torus embedded in 1000D space, the initial encoder reduces the dimension from 1000D to a lower ambient space (ideally 3D in this case), and the chart encoders map from 3D ambient space to 2D charts as the intrinsic dimension of the torus is 2. This helps us to save parameters and reduce computational costs. Another example is if a data point was an extremely high-resolution photograph, it would not be unreasonable to down sample it before passing it into a classification network. It is certainly possible that this down sampling will lose some geometric information. Since the chart parameterization can preserve the topology of the low dimensional manifold and loss function encourages exact reconstruction of the high dimensional manifold, it is not unreasonable to expect that the most important features are preserved.\\n\\n\\nConcern 2 Response. Questions on number of charts. \\n \\nWe agree that there are many possible chart parameterizations with different number of charts which are all equally valid for a given manifold. There is, however, a lower bound on the number of charts needed to cover some manifolds (for example, at least two charts are needed to cover a sphere). Our original scheme to choose the number of charts to use is to overestimate the number needed and then add a regularization which encourages some chart encoders to die off. We have included a new experiment in the appendix which illustrates this scheme.\", \"concern_3_response\": \"a). More parameters leads to better results?\\n \\n Our aim is manifold inspired models with latent spaces that are lower dimensional than those of traditional encoders and are closer to the intrinsic dimension of the data. The decoupled nature of the decoding operations mean that our models will tend to be larger in terms of number of parameters. 
We partially agree that more parameters will result in better reconstruction loss. However, the double torus example (used in the introduction and detailed in A10) shows that increasing the number of parameters in a VAE alone (without increasing the latent dimension) does not allow one to simultaneously produce good reconstruction and generation. A latent space which has too small of a dimension will not be able to cover a manifold, and one which is too large will generate points far from the data manifold. Thus the structure of latent space is more important than the number of parameters. This is one of the main objectives of this paper.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"1. It's somewhat subjective, but I feel that autoencoders are becoming less widely used, so the paper might have more impact if it had targeted models like ALI/BiGAN which do reconstruction but purely with adversarial objectives.\", \"response\": \"In general, we can view any traditional auto-encoder or variational auto-encoder as a single chart model without any transition conditions. Then by increasing the dimension of the latent space,,we could cover any data set with a sufficiently large latent space. If the dimension of the latent space is fixed, then there are examples of manifolds which cannot be captured by a single chart. Figure 1 shows an example of this: there is no map between a 2D Euclidean plane and the double torus. When trying to parameterize this data, the model with the 2D latent space fails to capture any of the \\u2018backside\\u2019 of the shape.\"}",
"{\"title\": \"Response to Reviewer 3, Part 2\", \"comment\": \"8. In Fig. 5 seems that few of the charts are more \\\"important\\\" than the others\\u2026.\\n\\nR3.8. Response: Because this example has non-constant curvature we do not expect that the U_i\\u2019s are of uniform sizes. Even though each chart is simply connected, their intersections may be disconnected. However, since the charts should agree in this transition zone this is not an issue. We have included a new set of visualization in the appendix which illustrate the transition zones in the appendix A7. \\n \\n9. As regards the MNIST, I do not think that it is a good example to support the chart learning\\u2026.\\n\\nR3.9 Response: We respectfully disagree. If the data were indeed 10 disconnected manifolds then p(x) should behave as an indicator. However, it is very likely that there is a continuous path between \\u201c1\\u201ds and \\u201c7s\\u201d or between \\u20186\\u2019s and \\u20188\\u2019s. Since we do not assume any ground truth labels the network must learn how to handle these transitions. If the network is properly trained then the each of the charts will always map onto the manifold. However, if a point from off the manifold is passed into the encoder after the model has been trained, the output will be on the manifold rather than being close to the original input.\\n\\n10. I think that is very important a well constructed experiment that shows the behaviour of the charts on overlapping domains ...\\n\\nR3.10 Response: Thank you for this comment. We have added a test showing the behavior in a transition zone in the appendix A7..\\n \\n11. Why in Fig. 7 some bars are missing? \\n\\nR3.11 The CAE III model begin with convolution layers, but VAE II does not, this is explained in the appendix A.2 and A.7. We have made the distinction clearer in the revision. For the sphere data, it is not clear how to use a convolution. The convolution works to extract features from a single training example, which would just be a single point in R^3 for the sphere. Analogously, a point from the MNIST manifold is an image which can be convolved on. \\nThe revision updates results for our simplest models on SVHN.\\n \\n12. In my opinion, from the first paragraph of Sec 4.1. is not clear how the function p(x) is defined\\n\\nR3.12: The exact definition of p(x) depends on the exact architecture and loss functions used. We've clarified this part in the revision. \\n \\n13. The regularization part is a bit unclear\\u2026 \\n\\nR3.13: The Lipchitz regularization acts as a bound of the spectral norm of the decoders. This promotes smoothness of each decoder and allows them to balance out (rather than having the entire manifold dominated by a single chart). We view this initialization as incorporating our domain knowledge which can help us improve the training stability. Bounded Lipschitz models do not mean they are approximately linear. We use the regularization for the encoders and decoders as they are models as inverses of each other. \\n\\n14. How is defined the term at the end of Eq. 5?\\n\\nR3.14: delta_ab is the Dirac delta, we\\u2019ve clarified this in the revision.\"}",
"{\"title\": \"Response to Reviewer 3, Part 1\", \"comment\": \"1. As regards the related work.\\n\\nR3.1. Response: Thank you for the references, we have included these in the updated version of the paper.\\n \\n2. I am not entirely convinced that the proposed model learns the charts of the manifold. Instead, I think it just utilizes several auto-encoders, and each of them specializes in some parts of the data manifold\\u2026.\\n\\nR3.2. Response: If several auto-encoders cover a different parts of a manifold and obey the chart transition conditions, then they are essentially forming an approximation of an atlas. However, the challenge is in dividing the data and learning the transitions. Our methodology is to use neural networks as universal approximators and then use the architecture, loss functions and regularization to mimic charts. For example, a point x which is in the overlap of two charts U_a and U_b will have both the terms ||x-x_a|| and ||x-x_b|| in the loss. Then we will have x_a = x_b when the model is minimized.\\n\\n3. The loss function of Eq. 2 essentially implies that only one chart is specialized for the sample x\\u2026..\\n\\nR3.3. Response: In eq (2), the gradient from the first term is only passed to one of the decoders, but the second term ensures that other decoders in the overlap are also updated. For example, if a point x is in the intersection of U_a and U_b then p_a and p_b will both be larger than the rest of the p_i. Then the weights on l_a and l_b will be largest and ||x-x_a|| and ||x-x_b|| will dominate the second term in the loss. We have written this loss function more clearly. \\n \\n4. The loss function of Eq. 3 is even more debatable\\u2026.\\n\\nR3.4. Response: We agree that this acts essentially as a soft assignment or a convex combination of each of the decoders. This idea is based on the partition of unity from topology and can be used instead of the chart prediction model using the same architecture. Here each of the p_i\\u2019s are only non-zero in the U_i region, this is illustrated in the new experiment in the appendix A7. This is different from ensemble networks which separates the image into different parts to be encoded by different encoders.\\n \\n5. Varying local dimension. \\n\\nR3.5. Response: In general working with manifolds of varying local dimension (or multiple manifolds of different dimensions) is a very hard problem which we do no look to solve in this paper. The problem of these \\u2018degenerate points\\u2019 are equally changing for standard models. We use manifold concepts as theoretical motivation to study the geometric structure of the latent space. Our numerical experiments validate the effectiveness of this interpretation.\\n \\n6. I think that the first global encoder E couples all the other encoders. Also, it is stated by the authors that this step should respect the manifold topology, which in general is not the case. So even if this helps for computational efficiency, it does not respect the theory.\\n\\nR3.6. Response: The initial encoder serves as a dimension reduction step to find a low dimensional isometric embedding of the data. For example, given a data manifold as a torus embedded in 1000D space, the initial encoder reduces the dimension from 1000D to a lower ambient space (ideally 3D in this case), and the chart encoders map from 3D ambient space to 2D charts as the intrinsic dimension of the torus is 2. 
In practice we observe that global encoder can save parameters and reduce computation costs for high dimensional data. Another example is if a data point was an extremely high-resolution photograph, it would not be unreasonable to down sample it before passing it into a classification network. It is certainly possible that this down sampling will lose some geometric information. Since the chart parameterization can preserve the topology of the low dimensional manifold and loss function encourages exact reconstruction of the high dimensional manifold, it is not unreasonable to expect that the most important features are preserved.\\n\\n7. The pretraining together with the regularization make me to believe that the model first separates the dataset into K clusters, and then learns an approximately linear model for each cluster\\u2026 \\n\\nR3.7. Response: We respectfully disagree. The pretraining works to ensure that each of the decoders is \\u2018on\\u2019 the manifold so that when training begins there are no decoders which are always inactive. Since the chart selection module is learned in conjunction with the rest of the model there is no prior segmentation of the data. During training the charts will move, change sizes, overlap or disappear. This is quite different from clustering the data first and training encoders independently on each cluster. Additionally, our numerical experiments show that the models are not nearly linear (see fig 1,4,5).\"}",
"{\"title\": \"Executive Summary of Revision\", \"comment\": \"We\\u2019d like to thank all reviewers for their thoughtful remarks and contributions. We have uploaded a revision which addresses comments raised in the review process.\\n\\nThere are several main changes which we have made in the revision. Major changes have been marked in magenta, while typos and grammatical errors have been corrected inline. \\n\\nFirstly, we have re-written section 4.1 to better describe the properties of our chart prediction module and how we deal with points which fall into the overlapping regions. We have also added an experiment in the appendix to illustrate the behaviour of the network in these transition zones. \\n\\nNext, we have expanded the discussion on our network regularization and provided an additional example (in the appendix) of our method to choose an appropriate number of charts. \\n\\nIn the experiments section, we have added the requested test for simple models on the MNIST and SVN test sets. Additionally, we have expanded the discussion of the metrics we use to validate our tests. The \\u2018faithfulness\\u2019 metric has also been re-named \\u2018unfaithfulness\\u2019 to reflect that a low score is desirable. \\n\\nLast but not least, we have added several experiments in the appendix to answer reviwers\\u2019 comments and clarify the proposed method. In A.7, we included an illustrative example to show the overlapped regions and transition functions. In A.8, we demonstrated the automatic removal of over-estimated charts. In A.9, we used a synthetic example on approximating geodesic on a data manifold based on our learning results. This indicates a great potential of the charts structure, i.e. it helps us understand geometry information of data manifold. In A.10, we added an experiment to show a single chart VAE can not have good generating results even with more complex network (with more parameters).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThe authors provide a model that is based on multiple Auto-Encoders, and it is claimed that each Auto-Encoder learns a local chart map of the data manifold. \\n\\nIn general the paper is ok written and tries to present the idea and the theoretical background in a nice way. However, there are few things that in my opinion should be improved (see comments). More importantly, at the first sight the proposed model seems to be solid, but I think that there are some details, which make the model to not behave as it should in theory.\", \"comments\": [\"1. As regards the related work, I think that some references should be included. For instance, several recent papers which discuss the topological structure of the latent space [1,2,3,etc]. Also, recently the geometry of the latent space is analyzed through Riemannian geometry and points out that the uncertainty of the generator is crucial for capturing correctly the data manifold structure [4]. Moreover, there is some work where multiple generators are used in order to model the data [5].\", \"2. I am not entirely convinced that the proposed model learns the charts of the manifold. Instead, I think it just utilizes several auto-encoders, and each of them specializes in some parts of the data manifold:\", \"First of all, the chart map is very well defined operator. A point in the intersection of two neighborhoods on the manifold U_a, U_b that overlap, has to be reconstructed \\\"exactly\\\" by the two corresponding charts. However, from the modeling steps and the experiments I cannot see why this is the case.\", \"As regards the technical details. The loss function of Eq. 2 essentially implies that only one chart is specialized for the sample x. However, such that to have chart maps, there should be samples on the intersections of neighborhoods on the manifold that are reconstructed by both charts. I do not see how the proposed model can tackle this issue.\", \"The loss function of Eq. 3 is even more debatable. The reason is that a mixture of auto-encoders is used to reconstruct the point x. However, this is not the definition of a chart. This is simply a way to use several auto-encoders to reconstruct the data, where the function p_i(x) (acts as soft assignment) chooses which of the auto-encoders should be used for the sample.\", \"The chart is defined as an invertible map. I can understand that in practice the decoder is considered as the inverse of the chart. However, we cannot guarantee that there are not cases where the decoder creates a surface with intersections or that \\\"degenerates\\\" some parts of the surface (instead of a 2-dimensional surface, it generates an 1-dimensional curve). In this case, the chart is not invertible on these parts of the surface. So I am not sure if we can directly consider an auto-encoder based on Neural Network as chart map.\", \"3) I think that the first global encoder E couples all the other encoders. Also, it is stated by the authors that this step should respect the manifold topology, which in general is not the case. 
So even if this helps for computational efficiency, it does not respect the theory.\", \"4) The pretraining together with the regularization make me to believe that the model first separates the dataset into K clusters, and then learns an approximately linear model for each cluster. I think that this is what the Eq. 2 implies, or a soft assignment (weighted) version if the Eq. 3 is used.\", \"5) In the experiments some of the previously mentioned issues appear:\", \"In Fig. 5 seems that few of the charts are more \\\"important\\\" than the others. However, I am more curious for what happens on the intersection of the neighborhoods on the manifold. For instance, from the figure it seems that some of the U_a on the surface are \\\"disconnected\\\". Does this mean that simply some of them (e.g. U_a and U_b) intersect and thats why the disconnected sets (e.g. of U_a) appear? If this is the case, then how the two chart maps behave on the intersection?\", \"As regards the MNIST, I do not think that it is a good example to support the chart learning. Most probably, there are 10 (disconnected) manifolds, and each of them should be modeled by a particular chart (or many charts per digit-manifold). In this case I think that the p(x) should be exactly 0 and 1, such that to chose one chart per digit. Also, in the current setting of the experiment, essentially all the data are considered to lie on the same data manifold. So what is the behaviour of the charts in the parts of the ambient space where there are no data?\", \"I think that is very important a well constructed experiment that shows the behaviour of the charts on overlapping domains (Sec A.3). Even an example in 2D ambient space with embedded 1-dimensional (disconnected) manifolds.\", \"Why in Fig. 7 some bars are missing? Also, from the appendix it seems that the CAE ||| is a very powerful model with 10 latent spaces of 25 dimensions each, and moreover, is the only one that uses convolutions. Since, from the text is not clear if the VAE II uses convolutions, and also, it has only one 25 dimensional latent space. In my opinion this is not a reasonable comparisson.\", \"Probably comparisons with other models that use multiple generators or even latent spaces that respect the topology of the data manifold could be included.\"], \"minor_comments\": \"1) In my opinion, from the first paragraph of Sec 4.1. is not clear how the function p(x) is defined.\\n\\n2) The regularization part is a bit unclear. Strong regularization on the decoders probably means that locally the auto-encoders will behave approximatelly as linear models? Especially, since the initialization is based on local PCAs (pre-training), which can potentially act as an inductive bias. Also, in the beggining of paragraph 3 in Sec 4.2. it is stated that the Lipschitz regulariation is used for the decoders, but next it is introduced for the encoders. This needs clarification.\\n\\n3) How is defined the term at the end of Eq. 5?\\n\\n\\nIn general, I like the problem that the paper aims to solve. However, I have the feeling that the proposed approach is quite debateable. Instead of chart learning, in my opinion, I think that the model just uses several auto-encoders and each of them is specialized at different subsets of the training data. These subsets are chosen at the pre-training phase, and then the function p(.) acts as an (soft) assignment function. Overall, my main question is what happens on the overlap of two neighborhoods on the data manifold? 
Also, what happens if the data lie on disconnected components?\", \"references\": \"[1] Diffusion Variational Autoencoders, Luis A. Perez Rey, et al., 2019.\\n[2] Hyperspherical Variational Auto-Encoders, Davidson, Tim R., et al., 2018.\\n[3] Hierarchical Representations with\\nPoincar\\u00e9 Variational Auto-Encoders, Emile Mathieu, et al., 2019.\\n[4] Latent Space Oddity: on the Curvature of Deep Generative Models, Georgios Arvanitidis, et al., 2018.\\n[5] Competitive Training of Mixtures of Independent Deep Generative Models, Francesco Locatello, 2019.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"## Summary of the Paper\\n\\nThis paper introduces a new architecture for autoencoders based on the\\nconcept of *charts* from (differential) topology. Instead of learning a\\nsingle latent representation, the paper proposes learning charts, which\\nserve as local latent representations. Experiments demonstrate that the\\nlocal representations perform favourably in terms of approximating the\\nunderlying manifold.\\n\\n## Summary of the Review\\n\\nThis is an interesting paper with an original idea. I appreciate the use\\nof concepts from differential topology in deep learning and agree with\\nthe paper that such a perspective is required to increase our\\nunderstanding of complicated manifold data sets. However, I find the\\nfollowing issues with the paper in its current form, which prevent me\", \"from_endorsing_it_for_acceptance\": \"1. I have doubts about the technical correctness of the proposed\\n architecture; specifically, the relevance of the *initial* latent\\n representation, which employs a Euclidean space, is not analysed.\\n\\n2. The role of the number of charts, which needs to be specified\\n before-hand, is not analysed in an ablation study.\\n\\n3. The experiments do not showcase the *conceptual* improvements of the\\n proposed technique.\\n\\nI shall briefly comment on each of these points before discussing other\\nimprovements.\\n\\nI want to point out that I really like the ideas presented in this\\npaper and I think it has the potential to make a strong contribution but\\nthe issues in their current form require substantial revisions and\\nadditions.\\n\\n# Concern 1: Technical correctness\\n\\nThe paper claims at multiple places that the geometry of Euclidean space\\nis 'trivial' or 'too simplistic' to meaningfully reflect the structure\\nof the data. This claim is double-edged, though: first, there are\\nmany methods that use autoencoders based on these spaces that exhibit\\nsufficient reconstruction capabilities. Second, the proposed\\narchitecture itself uses a Euclidean latent representation as its\\ninitial encoder. The paper states that 'Ideally, this step preserves the\\ntopology of the data [...]', but this is never analysed.\\n\\nI fully agree with the idea that charts are a suitable way to describe\\ncomplicated manifolds, but the paper needs more precision when terms\\nsuch as 'topology' and 'geometry' are being used. Likewise, I disagree\\nwith referring to Euclidean space as 'trivial'. Again, other methods\\ndemonstrate that the space captures high-level phenomena sufficiently\\nwell for reconstruction purposes. At the very least, the paper should\\nbe more precise here.\\n\\nMoreover, I would recommend experiments in which the dimensionality of\\nthe initial encoder is discussed.\\n\\n# Concern 2: Number of charts\\n\\nSelecting the number of charts appears to me as a critical component of\\nthe proposed method. While the appendix contains one experiment for\\nMNIST with different numbers of charts, this concept needs to be fleshed\\nout more. How do we know that we have a sufficient number of charts?\\nSince in differential topology, the choice of chart should not matter,\\nhow does it behave in these cases? 
Is there a way to detect that the\nnumber of charts must be increased?\n\nI could envision something like a simple 'step size' control procedure:\nif a quality measure indicates that there need to be more charts, double\nthe number of charts and re-run the training; if the number of charts\nis too big, halve it and re-run the training. A minimal sketch of such\na search is given below.\n\nI get the idea that increasing the number of charts will probably\ndecrease the reconstruction error, but this comes at the obvious expense\nof even more parameters. I thus recommend another set of experiments\nthat shows the influence of the number of charts, maybe even on the\nsynthetic data sets used in the paper.\n\n# Concern 3: Conceptual improvements\n\nWhile I enjoyed the didactic approach of the paper, which first\nintroduces simple test data sets to illustrate the concepts, my\nmain question is about the conceptual improvements that the charts\nprovide in the end.\n\nI see that the reconstruction error for MNIST goes down---but there are\nalso significantly more (!) parameters than in the comparison\narchitectures. The ideas of the sampling or interpolation experiments\ngo in the right direction, but in their present version, they are not\nentirely convincing. In fact, they even raised more questions for me:\n\n- Figure 6 depicts individual charts but their *covering* of the space\n is highly non-uniform. The digit '0' is covered more often than the\n digit '1', for example. How can this be compatible with the claim\n that the novel architecture learns a suitable set of charts? I could\n understand some overlaps, but there seems to be a clear difference\n between the charts generated in the synthetic examples---which do\n appear to cover everything in a uniform manner---and the charts for\n MNIST. This needs to be elucidated some more, in particular since the\n paper writes that the charts 'cover [...] in a balanced and regular\n way'.\n\n- The digit morphing example is not entirely convincing to me. Is\n this not something that I can do equally well with a VAE or generative\n models in general? I am *not* disputing the claims of the paper here,\n I am merely stating that *if* the new method is beneficial for this\n sort of application, a more in-depth experiment is required.\n\nThus, while I would like to give the paper the benefit of the doubt, it\ndoes not show just *why* it is relevant to have a chart-based embedding.\n\nSome suggestions for a set of experiments:\n\n- Do charts help in separating the input space? I would hypothesise\n that this is the case---it thus might be worthwhile to study\n low-dimensional embeddings obtained based on each chart and 'stitch'\n them together.\n\n- Do charts tell us something about the properties of a manifold? For\n example, are certain charts 'easier' to embed than others? This could\n be used to indicate different dimensions in a data set.\n\n## Experimental setup\n\nI have one major point of critique here, namely the way results are\npresented without any measures of tendency. Instead of showing a bar\nplot in Figure 7, I would suggest showing a table with standard\ndeviations along multiple repetitions of the experiment. It is not clear\nfrom looking at this to what extent these results can be replicated.
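Returning to the chart-count search from Concern 2: here is a minimal, hypothetical sketch of how such a doubling procedure could look. The helper `train_and_evaluate(n_charts=...)` is an assumption introduced only for illustration (it would train the chart autoencoder from scratch and return a validation reconstruction error); neither the helper nor the tolerance value comes from the paper.

```python
def search_num_charts(train_and_evaluate, max_charts=64, tol=1e-3):
    """Train at geometrically spaced chart counts (1, 2, 4, ...) and
    return the smallest count whose validation error is within `tol`
    of the best error observed."""
    history = {}
    n = 1
    while n <= max_charts:
        history[n] = train_and_evaluate(n_charts=n)  # full re-training per count
        n *= 2
    best = min(history.values())
    # Prefer the smallest adequate count, since every extra chart adds parameters.
    return min(n for n, err in history.items() if err <= best + tol)
```

Each doubling step also grows the model's parameter count, which ties directly into the next point.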
To some\\nextent, I find it not surprising that a better reconstruction error is\\nachieved if more parameters are present.\\n\\nThis makes some of the claims in the paper hard to assess.\\n\\n## Technical clarity\\n\\nThe papers is generally written well and has a good expository style.\", \"here_are_some_cases_where_i_find_that_clarity_can_be_improved\": \"- To add to what I wrote above: if charts are Euclidean as well, the\\n paper should elucidate why Euclidean charts do *not* suffer from being\\n too simplistic.\\n\\n- The discussion of homeomorphisms in the introduction is slightly\\n misleading; none of the functions learned later on is a homeomorphism\\n because of the latent space dimensions.\\n\\n- Homeomorphic mappings of manifolds into a Euclidean space are not\\n necessarily desirable---this is why the definition of a manifold uses\\n the concept of neighbourhoods. I think this should be rephrased in\\n a positive manner, as in: manifolds are complex, so we cannot expect\\n a *single* map to suffice...\\n\\n- The leading example of a torus embedding needs more details. Why is\\n the structure destroyed?\\n\\n- The introduction of 'topological features' on p. 2 is slightly abrupt.\\n It would be sufficient to explain by means of the figure that the\\n mapping obviously does not respect all properties.\\n\\n- p.2: paths become invariant to _what_ exactly?\\n\\n- p.2: what is the 'topological class'?\\n\\n- p.2: would LLE not be a good precursor to the method proposed in this\\n paper?\\n\\n- p.2: is this paper to be seen as an implementation of Chen et al. (2019)?\\n This should be made more clear.\\n\\n- p.3: the concept of intrinsic dimension slightly varies in literature.\\n I would propose mentioning the homeomorphism of every chart to some\\n $d$-dimensional space, and state that if this exists, one calls the\\n manifold $d$-dimensional.\\n\\n- p.3: the circle example could be explained in more detail for readers\\n unfamiliar with the concepts.\\n\\n- p.4: the chart prediction module requires a brief explanation at the\\n point when it is first introduced (1 sentence is sufficient). The\\n method plus architecture is presented but the details come very late;\\n I would prefer some intuition here\\n\\n- p.4: $N$ needs to be defined earlier\\n\\n- p.4: how is the dimension of latent spaces chosen? Please also refer\\n to my comments on the experiments above.\\n\\n- p.5: Section 4.1 again mixes 'topological' and 'geometrical' concepts;\\n suddenly, the concept of curvature crops up---this needs to be\\n explained better!\\n\\n- p.5: Distances can always be measured in connected subsets of\\n real-valued spaces; whether the set is open or closed does not change\\n the fact that a centre exists. Am I misunderstanding this?\\n\\n- p.5: I like the 'partition of unity' approach, but to me, this reads\\n like a convex combination of predictions. Am I misreading this? If\\n not, I would suggest to rephrase this.\\n\\n- p.5/6: the goals of the new method need to be stated more clearly; the\\n paper needs to explain better to what extent *reconstruction error* is\\n affected by charts (it does not seem to be, as I outlined above)---and\\n this again raises the question of which quality measure the new method\\n *can* preserve.\\n\\n- p.6: the definition of the Lipschitz constant could be more precise;\\n please specify the requirements $f$ has to satisfy\\n\\n- Eq. 
4 needs more details for me: it seems as if the weights appear\n twice as a kind of 'decay term' (in the second part, I see the sum\n but the product appears in both terms). This should be stated more\n clearly.\n\n- p.6: the pre-training needs more details; how crucial is this step?\n\n- p.6: what does the 'orientation' imply? It is not defined except in the\n appendix.\n\n- p.6: the jump from the illustrative examples to the non-synthetic ones\n is large; the uniform sampling of the latent space does not scale to\n higher dimensions, for example. The paper should comment on this if\n possible.\n\n- In general, I would recommend giving the employed models more\n 'speaking' names. I found it hard to keep track of all of them and had\n to refer to the appendix constantly.\n\n- For Figure 4, please show the full space, together with all charts.\n\n- p.7: please give some ideas (see above) for how to use the covering of\n the points in practice; I like that the object can be reconstructed\n with a proper set of charts, but the paper could make the necessity\n of the technique much more obvious by choosing stronger examples.\n\n- p.7: the object arguably *also* has a complex geometry, not only\n complex topology. This should be mentioned.\n\n- p.8: the discussion of MNIST is slightly incorrect; as outlined above,\n many digits appear to be generated by multiple charts, while some,\n such as `1`, do not appear on more than one chart.\n\n- The metrics in Section 5.3 should be introduced earlier, maybe at the\n expense of some exposition in the introduction or the simpler\n examples; it is not good style to have to refer to the appendix to\n understand a core experiment of a paper.\n\n- p.8: I do not understand the term 'wholly pyramid'.\n\n- p.11: the decoder should map to $x_i$, if I am not mistaken.\n\n- p.11: I would suggest a more consistent terminology to describe the\n models. The prediction function is replicated multiple times, for\n example, so why not introduce a shorthand notation for this?\n\n- p.12: '\\cup' and '\\cap' need to be switched: the *intersection* of\n domains needs to be empty, not their *union*.\n\n- p.13: to what extent are the 'faithfulness' and 'coverage' established\n metrics? It seems that they are developed for this paper, so I would\n explain them in the main text and also make clear why they are\n desirable metrics---else, the metrics could be criticised as being\n fine-tuned for the proposed method.\n\n For example, if *coverage* can measure the phenomenon of *mode\n collapse*, this needs to be demonstrated.\n\n## Minor comments\n\nSome typos:\n\n- low dimensional --> low-dimensional\n- eigen-functions --> eigenfunctions\n- considers manifold point --> considers a manifold point\n- paring subnetwork --> pairing subnetwork\n- paramterized --> parametrized\n- preformed --> performed [occurs multiple times]\n- chats --> charts\n- Lipshitz --> Lipschitz (in Figure 4)\n- evalutation --> evaluation\n- seciton --> section\"}",
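As an aside on the 'partition of unity' point raised in the review above (the p.5 item): the 'convex combination' reading can be made concrete in a few lines of numpy. This is a minimal, hypothetical sketch under assumed names and shapes, not the paper's actual blending mechanism.

```python
import numpy as np

def blend_charts(chart_outputs, chart_scores):
    """Blend per-chart decoder outputs into a single prediction.

    chart_outputs: array of shape (n_charts, dim), one decoded point per chart.
    chart_scores:  array of shape (n_charts,), unnormalised chart-membership scores.
    """
    w = np.exp(chart_scores - chart_scores.max())  # numerically stable softmax
    w /= w.sum()                                   # weights are >= 0 and sum to 1
    return w @ chart_outputs                       # a convex combination of decoders
```

If the chart weights are produced this way, the blended output is indeed a convex combination of the per-chart predictions, which is one standard discrete analogue of a partition of unity.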
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Overall I felt like this paper gives a nice mathematical exposition on the relationship between charts, manifolds, and autoencoders. I slightly lean for acceptance but am very borderline, especially as the results on \\\"real data\\\" are very weak.\"}"
]
}