forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews |
---|---|---|---|---|---|---|---|---|---|---|
BJx4rerFwB | wMAN: WEAKLY-SUPERVISED MOMENT ALIGNMENT NETWORK FOR TEXT-BASED VIDEO SEGMENT RETRIEVAL | [
"Reuben Tan",
"Huijuan Xu",
"Kate Saenko",
"Bryan A. Plummer"
] | Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations. To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN) which exploits a multi-level co-attention mechanism to learn richer multimodal representations. The aforementioned mechanism is comprised of a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG). Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information of their relative positions in the temporal sequence through iterative message-passing. Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics. | [
"vision",
"language",
"video moment retrieval"
] | Reject | https://openreview.net/pdf?id=BJx4rerFwB | https://openreview.net/forum?id=BJx4rerFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5CFK0e5CiS",
"rygs-0TroB",
"Syg5866BoB",
"Hkeyen6rsH",
"BJeFV9pBjr",
"HylzGuproS",
"rJxsIwXpcB",
"Skeukb_ncH",
"Byx0x0jDcB",
"ByxSBaCJqr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745180,
1573408259259,
1573408081669,
1573407719166,
1573407281483,
1573406730119,
1572841299514,
1572794591633,
1572482550332,
1571970365183
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2282/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2282/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2282/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2282/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2282/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2282/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2282/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2282/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2282/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method for aligning an input text with the frames in a video that correspond to what the text describes in a weakly supervised way. The main technical contribution of the paper is the use of co-attention at different abstraction levels.\\n\\nAmong the four reviewers, one reviewer advocates for the paper while the others find this paper to be a borderline reject paper. Reviewer3 who was initially positive about the paper, during the discussion period, expressed that he/she wants to downgrade his/her rating to weak reject after reading the other reviewers' comments and concerns. The main concern of the reviewers is that the contribution of the paper incremental, particularly since the idea of co-attention has been used in many different area in other context. The authors responded to this in the rebuttal that the proposed approach incorporate different components such as Positional Encodings and is different from prior work, and that they experimentally perform superior compared to other co-attention usages such as LCGN. Although the AC understands the authors response, the majority of the reviewers are still not fully convinced about the contribution and their opinion stay opposed to the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your review. We address your concerns below.\\n\\n- What is the processing speed of the method compared to the baseline?\\n\\nIn our measurements, the TGA method takes approximately 4.725s to process a single query. In contrast, the FBW module and our combined model (wMAN) take 4.329s and 8.102s respectively. As evident, the difference in processing time is not significant. However, if speed were really a huge concern, using the FBW module alone would provide a much faster processing time as well as improved results over the TGA method. For reference, the TGA model obtains an average recall accuracy of 49.8% while the FBW module achieves 57.0%.\\n\\n- Number of parameters comparison with baseline\\n\\nThe TGA model contains about 3M parameters while wMAN contains 18M parameters. However, the large performance gains are not directly attributed to the increase in parameters. To prove this, we increase the dimensions of feature representations as well as relevant fully-connected layers in TGA such that the total number of parameters becomes 19M. We evaluate this model on Charades-Sta and the results are provided in Table 5 in the appendix. As evident, even with more parameters than our model, it still does substantially worse than ours. To add on to this, our direct adaptation of the Language-Conditioned Graph Network (also provided in Table 5), which has 152M parameters, also yields results inferior to ours. Finally, we also decrease the number of parameters in the FBW module alone to 3M and its performance gain over the TGA model is still significant.\\n\\n- Assumption that sentences are only associated with its ground truth video\\n\\nThis assumption does add random noise to the training process. However, there is a higher probability of assigning a non-relevant sentence that does not correspond to its ground truth video than to the contrary. With that said, this assumption is also often used in tasks such as image-sentence retrieval or phrase grounding.\\n\\n- Determining the size of the sliding window\\n\\nWe definitely agree with this. In response to this, we simply adopted the same candidate proposals adopted by the baseline and prior work for fair comparisons. However, this is an interesting avenue for future work and we are actually exploring possible ways of replacing these manually-defined sliding window mechanism with an efficient subwindow search algorithm.\\n\\n- Can this model be supervised? If so, how does it compare to the supervised baselines?\\n\\nOur model should be easily generalizable to the strongly-supervised setting. Due to time constraints and other commitments, we do not have the time to adapt it to the strongly-supervised setting for this rebuttal, but this is an interesting future work direction.\\n\\nWe hope that we have addressed your concerns satisfactorily. Please let us know if you have any further concerns or questions.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review. We address your concerns below.\\n\\n- wMAN model the relation for all possible pairs of the word and the video frame. However, if the video is quite long, say 10 minutes, 30 minutes, or even few hours, will the method still be efficient and effective?\\n\\nComputing effective video representations for long videos efficiently is still an unsolved problem in computer vision. There is a lot of ongoing work in this area. With this said, based on the observed memory requirements of our proposed approach during inference, the efficiency and effectiveness of our method should be scalable to videos lasting a few minutes. As mentioned before, reasoning about videos lasting a few hours efficiently and effectively is still an unsolved research topic. However, with increased computational resources, there is no reason to believe that our method is not scalable to such videos. One possible solution is to reduce the sampling rate of video frame. Another option is to break the video into smaller parts and localize within each part individually. Finding a way to reason about long videos and natural language effectively from a low frame sampling rate in this task provides an interesting avenue for future work.\\n\\n- When building the relation between the word and the frame, is there any emphasis on verb, some particular word, or self-learned attention?\\n\\nThe motivation behind the frame-by-word interaction mechanism in our approach is that it encourages the model to learn the association between words and action sequences in videos. Words such as \\u2018hold\\u2019 and \\u2018sits\\u2019 definitely play a much more important role in localizing the relevant temporal segment in videos. For example, in Figure 3b, we observe that the top 3 weights assigned to each frame for \\u2018person\\u2019 and \\u2018chair\\u2019 generally occur in tandem with \\u2018sits\\u2019 and \\u2018down\\u2019. This demonstrates that our model learns the association between verbs and entities via self-learned attention. This is consistent with our observations in Figure 3a as well.\\n\\n- Followed by previous question, in the qualitative results, it seems the boundary parts of the predicted video segments are less accurate.\\n\\nOne possible reason is that we are using non-overlapping segments as proposals on Charades-Sta to facilitate fair comparison with prior work. Given that these proposals have static boundaries, it will cause the boundary parts of the candidate proposals to be less accurate.\\n\\n- Experimental results: I suggest the author to provide more ablation analysis to the experiment section.\\n\\nIt appears that contextual cues generally help to improve retrieval accuracy on harder settings such as higher IOU thresholds and Recall@1 accuracy. Using just the FBW module leads to better performance only on the lowest IOU threshold and Recall@5 and Recall@10 accuracies. We observe the same consistency in our ablation experiments on DiDeMo as well. We hypothesize that these cues help to make our model more discriminative in harder settings which is arguably more practical for real-world applications such as in video search engines. Finally, the overall performance of wMAN is better than that of the FBW module. 
If we average the scores, we obtain 57.0% and 58.2% for the FBW module and wMAN respectively.\\n\\n- Less technical comments \\n\\nWe will update the next version of the paper with the necessary clarifications and modifications.\\n\\nWe hope that we have addressed your concerns satisfactorily. Please let us know if you have any further concerns or questions.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your review. We address your concerns below.\\n\\n- The main contribution of the paper is incremental (specially respect to Mithun et al., 2019), I do not see a ground-breaking contribution. One of the main novelties with respect to previous text-to-clip models is the use of co-attention schemes at the level of words and frames.\\n\\nWe address this in the general response.\\n\\n- Actually, while the model makes an extensive use of frame-to-word encoding, it is not clear to me what is the role of the word-to-video representation in Eqs. 5 and 6.\\n\\nWe describe its purpose in the beginning paragraph of Section 3.2. It is concatenated with the word embedding to create a new visual-semantic representation, which is next used to update the visual representations iteratively during message-passing as shown in equation 7. The intuition is that the word-specific representations help to convey contextual visual information derived from other video frames. We will update the next version of the paper to make it clearer.\\n\\n- However, it is not clear why authors change the structure of the evaluation among the experiments.\\n\\nWe adopt the same evaluate metrics and practices as prior work to enable direct comparison. Scores for different IOU thresholds are used on the Charades-Sta dataset while scores for only IOU=0.5 are used on DiDeMo. With respect to the ablation experiments not being evaluated on the test set, we followed the standard protocol of finetuning hyperparameters and evaluating model components on the validation set. This is more realistic in the real world where we usually do not have access to the test set in practice. We do have the ablation results on the test set too but we left them out due to space constraints. However, we have added them to the Appendix in Tables 8 and 9.\\n\\n- It will be interesting to further analyze why the contextual cues hurt performance in some cases, maybe at least a qualitative analysis.\\n\\nIt appears that contextual cues generally help to improve retrieval accuracy on harder settings such as higher IOU thresholds and Recall@1 accuracies. Using just the FBW module leads to better performance only on the lowest IOU threshold and Recall@5 and Recall@10 accuracies. We observe the same consistency in our ablation experiments on DiDeMo as well. We hypothesize that these cues help to make our model more discriminative in harder settings which is arguably more practical for real-world applications such as in video search engines. Finally, the overall performance of wMAN is better than that of the FBW module. If we average the scores, we obtain 57.0% and 58.2% for the FBW module and wMAN respectively.\\n\\n- In some part of the papers, authors state that the proposed model does better than strongly-supervised state-of-the-art methods on some metrics\\n\\nIn Table 3, we show that we outperform the strongly-supervised methods by 10% on the Recall@1 metric. We have clarified this in the paper.\\n\\nWe hope that we have addressed your concerns satisfactorily. Please let us know if you have any further concerns or questions.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your review. We address your concerns below.\\n\\n1) This is addressed in the general response.\\n\\n2) We will update the next version of the paper with the necessary clarifications to the caption and modifications.\\n\\n3) We have updated the submission with an ablation study of how the number of message-passing steps affects the performance of our proposed approach. They are included in Section B of the appendix. In our experiments across both Charades-Sta and DiDeMo, we have observed that 3 steps work the best.\\n\\nWe hope that we have addressed your concerns satisfactorily. Please let us know if you have any further concerns or questions.\"}",
"{\"title\": \"General Response\", \"comment\": \"We sincerely thank all reviewers for their time and effort in reviewing our submission! Your reviews have provided us with interesting insights for future work. Reviewers agree that weakly-supervised text-based video moment localization is an important research direction (R3) and that our proposed approach leverages a new Word-Conditioned Visual Graph to aggregate contextual information from both words and video frames (R1 and R3). Our model also incorporates positional embeddings for multimodal learning (R1) which has not been used before in non-Transformers based approaches to the best of our knowledge. Besides outperforming the weakly-supervised baseline by a significant margin (R2 and R4), it also performs very comparably to strongly-supervised state-of-the-art methods, even outperforming them on some metrics on the DiDeMo dataset.\\n\\nThe submission has been updated to include more ablation experiment results, especially in the appendix section. To begin, we would like to review the contributions of our paper and benefits of our approach over prior work before addressing the specific questions individually in the sections below.\\n\\nWe agree that the general idea of co-attention has been proposed before but in a different context as mentioned. It has not been successfully applied to weakly-supervised text-based video retrieval, which has its own unique set of challenges compared to image VQA. Our model, though also based on co-attention, is very different from predecessors and experimentally works much better.\\n\\nOur hierarchical co-attention mechanism has some key differences from that of the Video Question Answering (VQA) model Hierarchical Question-Image Co-Attention for Visual Question Answering (HieCoAttVQA). One key difference is that we incorporate Positional Encodings (PEs), typically used in Transformers for language modeling, in our multimodal interaction mechanism. As shown in our ablation experiments, these PEs helps to enrich the capability of our model in modeling long-range dependencies between video segments as opposed to simply increasing the dimensions of the visual representations (shown in Tables 2 and 4). To the best of our knowledge, this is a novel use of PEs for multimodal learning in non-Transformers based approaches. \\n\\nOur co-attention mechanism also introduces a novel word-conditioned visual graph. During the message-passing process, our graph-based approach iteratively updates our visual representations with not only semantic information but with contextual information from other video frames as well, derived from word and word-specific video representations respectively. In contrast, the co-attention mechanism in the VQA paper simply alternates attending to the image and question representations separately. \\n\\nWe would like to reiterate that while co-attention has been proposed in other contexts (e.g. images), the exact method of accomplishing this is crucial for the task that we are addressing, especially in modeling long range dependencies in the much harder domain of videos. The most telling indication of this is the performance difference between our and the TGA (Weakly-Supervised Baseline)model, which also uses co-attention, that we are comparing to. Our model achieves 3x and 2x accuracies of the TGA model on DiDeMo and Charades-Sta on the hardest setting respectively. This demonstrates the importance of the different components in our approach. 
\\n\\nTo further emphasize the importance of the exact implementation of the co-attention mechanism, we provide results obtained from an adaptation of the Language-Conditioned Graph Network (LCGN) that is also designed to work on the image domain for VQA. Similarly, it employs a co-attention mechanism to reason about relationships between objects and words. The results are included in Table 5 in the appendix. On Charades-Sta, the LCGN model generally performs better than the TGA model. However, the obtained results are still vastly inferior to those achieved by wMAN. Finally, our approach serves as a good baseline of comparison for future work in this direction.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"Overview:\\nThe authors proposed a weakly-supervised method to localize video moments given text queries. The model builds multi-level relational graphs among pairs of word and video frame, and the graph is used to aggregate visual-semantic feature for each word and each frame. Then the attentive features are used to localize the sentence query in videos by calculating the similarity of words and frames. In summary, the proposed weakly-supervised Moment Alignment Network (wMAN) utilizes a multi-level co-attention mechanism to learn richer multimodal representations for language based video retrieval..\", \"pros\": \"1. Significant performance improvement on Didemo and Charades-STA datasets. The authors achieved very good performance on both dataset, even higher than some of the full-supervision methods, such as CTRL and MLVI.\", \"cons\": \"1. The overall novelty of the proposed methods is limited. Essentially, the key points of the model is hierarchical visual semantic co-attention.,which is proposed originally in [Hierarchical Question-Image Co-Attention\\nfor Visual Question Answering], although the original application is VQA in image domain. So in this way, the novelty is only marginal.\\n2. Paper writing can be improved. Figure 2 shows the overall structure of the model, however, the caption doesn't explain all the notations in the figure, such as WCVG, and the equations. Additionally, the reference is very far away from Figure 2, which makes the whole paper hard to read.\\n3. For evaluation part, one important ablation study is missing: the number of steps T for message passing. This eval is important, as it shows the necessity of using \\\"multi-level\\\" attention.\", \"minor_comments\": \"1. Make the caption of Figure 2 self-explainable, e.g. the meaning of LSE.\\n2. There is a \\\"word-conditioned\\\" visual graph network, why not the other way, \\\"frame-conditioned\\\" semantic graph net and iterate over it?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work presents a model for text based video clip (video moments or text-to-clip) retrieval. The goal is to identify a video segment within a longer video that is most relevant to an input sentence. Authors propose a new model based on a weakly-supervised training approach. This model does not require explicit temporal annotations to align text and video, but it only needs as an input the full video and sentence pairs.\", \"key_aspects_of_the_model_are\": \"i) A coattention step frame-by-word and word-by-frame that produces the basic embeddings of the model, which is enriched with positional information, and ii) A contextual step that aggregates contextual information from all the frames using graph propagation. Afterwards, they use a LogSumExp pooling strategy to score similarity among the input sentence and video frame.\\n\\nThe main contribution of the paper is incremental (specially respect to Mithun et al., 2019), I do not see a ground-breaking contribution. One of the main novelties with respect to previous text-to-clip models is the use of co-attention schemes at the level of words and frames. However, the idea of co-attention at different grain-levels have been proposed before. Actually, while the model makes an extensive use of frame-to-word encoding, it is not clear to me what is the role of the word-to-video representation in Eqs. 5 and 6. \\n\\nIn general, the paper is well written. The experimental evaluation is convincing. However, it is not clear why authors change the structure of the evaluation among the experiments. As an example, for the experiments in Charades-STA dataset, they include scores for different IOUs levels, but they do not repeat this for DiDeMo dataset. Similarly, for DiDeMo dataset, results in Table 3 are for the test set, while the ablation study in Table 4 is for the validation set. I will recommend to standardize the evaluations. \\n\\nAnother comment is that in several experiment best performance is obtained using just the FBW module, it will be interesting to further analyze why the contextual cues hurt performance in some cases, maybe at least a qualitative analysis. Also, in some part of the papers, authors state that the proposed model does better than strongly-supervised state-of-the-art methods on some metrics, looking all the reported tables, I do not think that this is the case. Authors show qualitative results about cases where the model perform well, it will be good to also analyze failure cases, actually, according to the final scores, there is still lot of cases that the model can't handle properly.\\n\\nI rate the paper as borderline, but there is not such a rating at ICLR 2020, so I will lean to weak reject.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposed a weakly-supervised wMAN model for moment localization in untrimmed videos. Only the video-level annotation is available for training, and the goal is retrieving the video segment described by the sentence. The proposed model explored to utilize better context information and captured the relation between video and sentence/word via graph neural networks. In particular, instead of modeling the context information between the sentence and each video frame, wMAN tried to learn the representation with multi-level and co-attention, which considers all possible pairs between the word and the frame. The proposed model was evaluated on two publicly-available dataset and achieved reasonable results.\", \"pros\": [\"Weakly-supervised method for video moment localization is a reasonable and important direction.\", \"wMAN explicitly utilized multi-level context information between the sentence and the video frame, and used the graph neural network and the message passing to model the representation. I think this is a reasonable direction.\", \"wMAN is evaluated with two publicly available datasets, and is compared with state-of-the-art methods and other \\\"oracle\\\" baselines. The performance is impressive and could be a better baseline for the future work.\"], \"cons\": [\"wMAN model the relation for all possible pairs of the word and the video frame. However, if the video is quite long, say 10 minutes, 30 minutes, or even few hours, will the method still be efficient and effective?\", \"When building the relation between the word and the frame, is there any emphasis on verb, some particular word, or self-learned attention? For some particular word, say \\\"people\\\" and \\\"cup\\\", won't it have strong connection with many frames? But for some of the words, say \\\"hold\\\" and \\\"sits\\\", could it play a more important role?\", \"Followed by previous question, in the qualitative results, it seems the boundary parts of the predicted video segments are less accurate. Is it because some of the words case these false positive results? What do you think the reason is?\", \"Experimental results: I suggest the author to provide more ablation analysis to the experiment section. For example, the full model of wMAN works better than FBW on R@1, but worse on R@5 and R@10. Is there a particular reason about this? PE seems to be important for wMAN, and the authors provides few sentences analysis about this, but I don't think I fully understand this part. Another problem is that there is only few qualitative results, and in both these two examples, predicted results cover the GT segments. Is this always the case for wMAN? Why? Some failure cases could also be very helpful.\", \"Less technical comments: The paper writing is fine to me, but I don't like the typesetting. I suggest to put the model figure more close to the methodology section and the qualitative results on page 8.\", \"Overall, I think the paper is marginal above the accept line.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"\", \"summary\": \"This paper proposes a method for aligning an input text with the frames in a video that correspond to what the text describes in a weakly supervised way. The authors propose a combination of a \\u201cFrame-By-Word\\u201d (FBW) representation and a Word-Conditioned Visual Graph (WCVG). The proposed method outperforms the weakly supervised baseline presented in the paper in experiments by a large margin. In addition, it quantitatively performs close to previous strongly supervised methods.\", \"pros\": [\"New Word-Conditioned Visual Graph representation\", \"Outperforms weakly supervised baseline\", \"Ablation study of the moving parts\", \"Interesting use of positional embeddings for multi-modal learning\", \"Weaknesses / comments:\", \"What is the processing speed of the method compared to the baseline?\", \"The proposed method makes multiple comparisons while computing the attention weights over all words and frames. Does this cause the method to be slower than the baseline? If so, how much slower is it?\", \"Answers to these questions can help readers to keep in mind the trade-off of the proposed method for achieving the accuracy presented in the paper.\", \"Number of parameters comparison with baseline:\", \"Did the authors make sure to have similar number of model parameters for the baselines and the proposed method? Maybe I missed it, but I couldn\\u2019t see a mention of this anywhere. It would be useful to state this so that readers are sure that it\\u2019s not the number of parameters that is helping the method.\", \"Assumption that sentences are only associated with its ground truth video:\", \"The authors mention that they have the same assumption as Mithun et al., 2019. Can this assumption be detrimental if the dataset does not follow it? Say there are sentences in the dataset that could describe segments in multiple videos. Could this assumption lead to suboptimal representation learning / relationship learning for words / video frames?\", \"Determining the size of the sliding window:\", \"From reading the paper, it looks like the sliding window used for computing the word / frame relationships has to be manually defined. This seems a bit suboptimal for the generalizability of this method. Do the authors have any comments on this?\", \"Can this model be supervised? If so, how does it compare to the supervised baselines?\", \"The authors point out that their weakly supervised method performs close to the strongly supervised previously proposed. This is a nice finding, however, have the authors try to answer the question of what would happen if the proposed model is supervised? Will the proposed model outperform the strongly supervised baselines? Or at least perform the same?\"], \"conclusion\": \"In conclusion, the proposed method makes sense and it has been shown to empirically outperforms a previous weakly supervised baseline. The authors also provide an ablation study of the moving parts to show that the entire pipeline is important to achieve the highest performance in the hardest setting. It would be nice if the authors successfully answer / address the questions / concerns mentioned above in the rebuttal.\"}"
]
} |
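To make the frame-by-word (FBW) interaction and positional encodings described in the wMAN record above concrete, here is a minimal sketch, assuming hypothetical feature shapes and a standard Transformer-style sinusoidal encoding; it is not the authors' implementation (their full model adds a word-conditioned visual graph with iterative message-passing on top). The LogSumExp pooling corresponds to the LSE that Review #4 asks about: it acts as a smooth maximum, so the video-sentence score is dominated by the best-matching frame-word pairs.

```python
# Hypothetical sketch of frame-by-word co-attention scoring with positional
# encodings; shapes, names, and pooling choices are illustrative assumptions.
import torch

def positional_encoding(num_frames: int, dim: int) -> torch.Tensor:
    """Transformer-style sinusoidal encodings over frame indices (dim must be even)."""
    pos = torch.arange(num_frames, dtype=torch.float32).unsqueeze(1)  # (T, 1)
    idx = torch.arange(0, dim, 2, dtype=torch.float32)                # (D/2,)
    angle = pos / torch.pow(10000.0, idx / dim)                       # (T, D/2)
    pe = torch.zeros(num_frames, dim)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe

def frame_by_word_score(frames: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
    """frames: (T, D) visual features; words: (N, D) word embeddings.
    Returns a scalar video-sentence similarity via LogSumExp (LSE) pooling."""
    frames = frames + positional_encoding(frames.size(0), frames.size(1))
    sim = frames @ words.t()                   # (T, N) frame-word similarities
    per_frame = torch.logsumexp(sim, dim=1)    # smooth max over words per frame
    return torch.logsumexp(per_frame, dim=0)   # smooth max over frames

# Example: a 12-frame segment scored against a 6-word query in a 64-d joint space.
score = frame_by_word_score(torch.randn(12, 64), torch.randn(6, 64))
```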
B1l4SgHKDH | Residual Energy-Based Models for Text Generation | [
"Yuntian Deng",
"Anton Bakhtin",
"Myle Ott",
"Arthur Szlam",
"Marc'Aurelio Ranzato"
] | Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation. The dominant parametric approach is based on locally normalized models which predict one word at a time. While these work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process. In this work, we investigate un-normalized energy-based models (EBMs) which operate not at the token but at the sequence level. In order to make training tractable, we first work in the residual of a pretrained locally normalized language model and second we train using noise contrastive estimation. Furthermore, since the EBM works at the sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa. Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity compared to locally normalized baselines. Moreover, generation via importance sampling is very efficient and of higher quality than the baseline models according to human evaluation. | [
"energy-based models",
"text generation"
] | Accept (Poster) | https://openreview.net/pdf?id=B1l4SgHKDH | https://openreview.net/forum?id=B1l4SgHKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Nieyh8anDiR",
"41j7UJu8aC0",
"vELKwAGC4o",
"d8jezQgru7",
"ZrhQoS-k2",
"rylLgGNojS",
"BylUcWNiiS",
"rJe1DgVjjr",
"HJggab7ccS",
"Hyl9u-SX5H",
"Syg4BwvptB"
],
"note_type": [
"comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1589528871339,
1588286257652,
1581743353911,
1577997790438,
1576798745150,
1573761518408,
1573761421991,
1573761111455,
1572643255871,
1572192625702,
1571809084390
],
"note_signatures": [
[
"~Ning_Miao1"
],
[
"~Jianwen_Xie1"
],
[
"ICLR.cc/2020/Conference/Paper2281/Authors"
],
[
"~Jianwen_Xie1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2281/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2281/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2281/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2281/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2281/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2281/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Hope to see comparison with some existing energy-based text generation methods, especially MCMC-based methods.\", \"comment\": \"Dear Authors,\\n\\nCongratulations on the accepted paper!\\n\\nIn this paper, you mentioned MCMC methods for energy-based text generation. Actually, we proposed such a method called CGMH (<CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling>) in last year's AAAI. \\n\\nWhile your proposed method generates samples by importance sampling, CGMH performs lexical-level jumps between sentences to generate samples from an unnormalized distribution (such as P_{\\\\theta} in your paper), in the spirit of Metropolis-Hastings sampling. \\n\\nCGMH is also a highly efficient energy-based text generation model, which achieves remarkable results on several constrained-text-generation tasks such as keyword-to-sentence generation and unsupervised paraphrase. Hope we will have the opportunity to discuss and compare these two models.\\n\\nThank you.\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thank you so much! Wish you all a successful and healthy 2020!\"}",
"{\"title\": \"References Updated\", \"comment\": \"Thanks for your comments! Compared to images, a challgenge in text modeling is that the inputs are discrete, such that we cannot directly apply Langevin dynamics (our initial attempts at working in a relaxed space were not very successful). That being said, these prior works on EBMs for images/videos are indeed relevant, and we've updated our references accordingly.\"}",
"{\"title\": \"Missing related reference about un-normalized energy-based models parameterized by neural nets for image/video/3D shape generation\", \"comment\": \"Dear Authors,\\n\\nCongratulations on your nice accepted paper about text generation by un-normalized energy-based models. \\n\\nI would like to point out some papers that are highly related to your current one, and hope you can cite them in your final version. Similar to your current paper, all of them are about un-normalized energy-based models parameterized by modern neural nets for image/video/3D shape generation. The learning is based on MLE. The sampling is based on Langevin dynamics. \\n\\nMore specifically, \\n\\nThe first paper that proposes an energy-based model parameterized by modern deep neural network and learned it by Langevin based MLE is in (Xie. ICML 2016) [1]. The model is called generative ConvNet, because it can be derived from the discriminative ConvNet. (Xie. ICML 2016) [1] originally studied such an EBM model on image generation theoretically and practically in 2016. \\n\\n(Xie. CVPR 2017) [2] (Xie. PAMI 2019) [3] proposed to use Spatial-Temporal ConvNet as the energy function for video generation. The model is called Spatial-Temporal generative ConvNet. \\n\\n(Xie. CVPR 2018) [4] proposed to use volumetric 3D ConvNet as the energy function for 3D shape pattern generation. It is called 3D descriptor Net. \\n\\n(Gao. CVPR 2018) [5] proposed multi-grid MCMC to learn EBM with ConvNet as energy function for image generation. \\n\\n(Nijkamp. NeurIPS 2019) [6] proposed short-run MCMC to learn EBM with ConvNet as energy function for image generation.\", \"thank_you\": \")\\n\\nReference\\n[1] A Theory of Generative ConvNet. \\nJianwen Xie *, Yang Lu *, Song-Chun Zhu, Ying Nian Wu (ICML 2016)\\n\\n[2] Synthesizing Dynamic Pattern by Spatial-Temporal Generative ConvNet\\nJianwen Xie, Song-Chun Zhu, Ying Nian Wu (CVPR 2017)\\n\\n[3] Learning Energy-based Spatial-Temporal Generative ConvNet for Dynamic Patterns\\nJianwen Xie, Song-Chun Zhu, Ying Nian Wu\\nIEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2019\\n\\n[4] Learning Descriptor Networks for 3D Shape Synthesis and Analysis\\nJianwen Xie *, Zilong Zheng *, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, Ying Nian Wu (CVPR) 2018 \\n\\n[5] Learning generative ConvNets via multigrid modeling and sampling. \\nR Gao*, Y Lu*, J Zhou, Song-Chun Zhu, Ying Nian Wu (CVPR 2018). \\n\\n[6] On learning non-convergent non-persistent short-run MCMC toward energy-based model. \\nE Nijkamp, M Hill, Song-Chun Zhu, Ying Nian Wu (NeurIPS 2019)\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a Residual Energy-based Model for text generation.\\n\\nAfter rebuttal and discussion, the reviewers all converged on a vote to accept, citing novelty and interestingness of the approach.\\n\\nAuthors are encouraged to revise to address reviewer comments.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you so much for your helpful comments!\\n\\n- Missing assumption on the existence of the solution:\\nThe reviewer is correct. The revised version of the paper now makes this assumption explicit in Theorem 1.\\n\\n- Partition function needs extra parameter:\\nAs proven in Ma & Collins and reported in the paper, the assumption is that the amount of training data goes to infinity and that the model has sufficient capacity. There is no need for additional parameters under this assumption since a powerful energy function would learn this normalizer.\\nIn practice, we observed that the score produced by the transformer energy function after training with the NCE objective is well calibrated as we vary the conditioning prefix. No additional bias parameter is required.\\n\\n- Alternating between training the residual energy function and the generator: \\nThis is possible. In this work, we focused on improving the original generator by only training the residual energy function which is the simplest setting. However, iterative training like in GANs might further improve the overall model. \\nSince the transformer models we use in this work are big and slow to train, and training sequence GANs requires policy gradients and heavy tuning to control gradient variance, this direction requires significant engineering efforts. Alternatively, a knowledge-distillation like procedure where we use the generated samples from the joint model to fine-tune the generator might also work. We leave this as future work.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for the useful feedback, and especially for the pointers to prior work. This paper is improved due to your care!\\n\\n- Missing references and unclear contribution:\\nWe thank the reviewer for pointing this out. The revised version of the paper has now a whole new paragraph discussing the relation to prior works on energy-based models for sequence generation [1, 2, 3, 4, 5, 6]. In particular, the residual modeling form and the training algorithm is the same as in [5], of course with different choice of generator (transformer in our case VS LSTM in [5]) and energy function (BERT in our case VS CNN-LSTM-based model in [5]). Therefore, the modeling form and loss function should not be considered our contribution.\", \"our_theoretical_contributions_are\": \"a) new lower and upper bounds for the log-probability of the joint model which allows us to show that these models have better perplexity compared to autoregressive approaches (since otherwise the partition function estimated via importance sampling would lead to bias favoring the random field language model), b) the importance weighting sampling scheme used at generation time, and c) the setting which is focused on conditional generation as opposed to rescoring for speech recognition.\\n\\nIn particular, (a) is important because it allows comparing the EBMs (with bi-directional models) against auto-regressive models in the standard metric by which these methods are judged; and we do indeed show that the residual EBMs get good results even compared against large SOTA LMs. We also show this with human evaluations. In our opinion, improving upon these modern language models is an exciting accomplishment. \\n\\nPlease, let us know if our revised discussion of prior work needs further clarifications. Thank you.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you so much for your helpful comments!\\n\\n- Comparison to SeqGAN:\\nCompared to SeqGAN, the difference is that our goal is to use the \\\"discriminator\\\" (energy network) to improve the generator at test time whereas SeqGAN would throw away the discriminator after training and use the improved generator. SeqGAN requires policy gradients, which is usually unstable, and a too-powerful discriminator would make the generator gradients vanish (Arjovsky et al 2017). GANs \\\"have proven difficult to train in the domain of natural language\\\", \\\"most models yield worse results than a simple Language Model\\\", \\\"can be extremely sensitive to the random initialization and small deviations from the best hyperparameter choice\\\", and \\\"extensive evaluation has shown that existing language GANs do not improve over maximum likelihood-trained models\\\" (d\\u2019Autume et al 2019, Semeniuta et al 2018, Caccia et al 2018). \\nTo the best of our knowledge, there's no existing work that used a discriminator as powerful as BERT to successfully train language GANs, and in practice we observed that BERT is able to distinguish real text from LM samples over 95% of the time, hence language GAN learning with such a powerful discriminator seems implausible, not to mention that due to our large datasets and models, it is hard to do hyper-parameter tuning to stabilize policy gradients based training.\\n\\n- PPL: \\nPerplexity is the standard metric for language modeling. It is true that other metrics could be considered, like diversity. We don't think that our approach would improve diversity compared to the original language model (our generator). \\nThe main reason people worry about generation diversity in GANs is because of the mode collapsing problem (e.g., a generator always generating \\\"I don't know\\\" might fool the discriminator). However, unlike GANs which only use the generator at test time, we use the energy function to adjust the original language model to better approximate data distribution, so we don't see mode collapsing problems in our approach. One empirical evidence is that we found the adjusted per-step probabilities are largely similar to the original language model probabilities (Appendix A.1), and empirically language models do not have as severe mode collapsing problems as GANs. \\nOne could improve diversity by sampling hypotheses sequentially from the joint model adding an additional constraint on diversity with respect to the previously drawn samples, or use temperatured sampling. We believe this is an interesting avenue of future research.\\n\\n- Qualitative analysis: \\nThank you for the suggestion. We have added some examples to the supplementary material (A.5).\\n\\n- Conclusions are expected: \\nResidual EBMs provide a very natural way of leveraging BERT for language modeling, and we believe that providing a simple working recipe to use unnormalized sequence-level generative modeling to improve very large state-of-the-art language models is an important contribution.\", \"references\": \"\", \"arjovsky_et_al_2017\": \"Towards Principled Methods for Training Generative Adversarial Networks\\nd\\u2019Autume et al 2019: Training language GANs from Scratch\", \"caccia_et_al_2018\": \"Language GANs falling short\", \"semeniuta_et_al_2018\": \"On accurate evaluation of GANs for language generation\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work is an interesting extension of Gutmann and Hyvarinen (2010), where the parametric model is the combination of a noise model (language model) and an energy function (residual energy), so the difference of parametric model and the noise model cancels out the noise model. Therefore optimizing (3) under some conditions converges to a set of parameters of the parametric model (P\\\\theta(x) here) that best describes the data.\\n\\nOne important assumption of Gutmann and Hyvarinen (2010) is that there exists a set of optimum parameters for the parametric model such that the probability of data and the parametric models match for these optimum parameters. This should be mentioned in Theorem-1. \\n\\nDoes Theorem-1 need extra parameters to act as a normalization constant in order for the theorem to hold at the optimum?\\nlog P_lm(x) - E(x) + const = log p_data\\n\\nTo sample from the model, the authors first sample from the language model and re-sample it with respect to the energy values of the residual model.\\n\\n\\nTo compute the perplexity, they have given an upperbound and lowerboud for the partition function based on number samples in Theorem 2, but I haven't checked the correction of the bounds. They also factorize the joint model in auto-regressive factorization to compute the perplexity by approximate marginalizing. \\n\\n\\nAs mentioned in Section 5, this approach heavily depends on a strong pretrained language model. \\n\\nHave you considered improving the language model during training?\\n\\nThe described idea is simple and effective and I really liked it.\\n\\n\\n--- Based on other reviews and the authors' response (especially review #3), I reduced my rating to 'Weak accept'.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors make good points, starting from the exposure bias and label bias suffered by the mainstream neural auto-regressive models.\\nResidual EBMs are defined and trained using NCE. Experiments on two large language modeling datasets show that residual EBMs yield lower perplexity and generation via importance sampling is of higher quality, compared to locally normalized baselines.\\n\\nIn generally, the paper is well motivated and interesting. But I have some concerns.\\n\\n1. Missing important relevant references.\\n\\nEBMs (a.k.a. un-normalized models, random fields) have been successfully developed in language modeling in recent years. A large body of this paper has been studied in [5,6], including the model and the NCE estimation method. The model Eq.(2) is exactly the model in [5], defining the model in the form of exponential tilting of a reference distribution.\\nConnecting and comparing to these previous works are needed.\\n\\n[1] R. Rosenfeld, S. F. Chen, and X. Zhu, \\u201cWhole-sentence exponential language models: a vehicle for linguistic-statistical integration,\\u201d Computer Speech & Language, 2001.\\n[2] B. Wang, Z. Ou, and Z. Tan, \\u201cTrans-dimensional random fields for language modeling,\\u201d ACL, 2015.\\n[3] B. Wang, Z. Ou, and Z. Tan, \\u201cLearning transdimensional random fields with applications to language modeling,\\u201d IEEE transactions on pattern analysis and machine intelligence, 2018.\\n[4] B. Wang and Z. Ou, \\u201cLanguage modeling with neural trans-dimensional random fields,\\u201d IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2017.\\n[5] B. Wang and Z. Ou, \\u201cLearning neural trans-dimensional random field language models with noise-contrastive estimation,\\u201d IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.\\n[6] B. Wang and Z. Ou, \\u201cImproved training of neural trans-dimensional random field language models with dynamic noise-contrastive estimation,\\u201d IEEE Spoken Language Technology Workshop (SLT), 2018.\\n\\n2. I am a little bit concerned that the theoretical contribution seems weak. \\nThough Eq. (4) and (5) seem to be novel, I am not sure whether such a contribution is substantial enough to motivate acceptance.\\n\\nI'm happy to adjust the score if the paper can be better placed in the literature and the authors take efforts to improve the paper.\\n\\n--------update after reading the response-----------\\nBeing well-placed in the literature and properly claiming contribution with respect to prior work is one of the key questions in reviewing a paper. The first version of the paper clearly lacks in this respect. That's the main concern when I gave a 1.\\n\\nI appreciate the authors' response. The updated paper has been improved to address my main concern, although the added discussions presented in the updated paper is not as clear as the authors' clarifications in the response. I suggest to polish the main text incorporating these clarifications.\\n\\nGenerally, it is nice to see the successful application of energy-based/random-field-based models in text generation, besides in speech recognition. 
I update the score to 6 (Weak Accept).\\n\\nIt would have been better that the following can be further clarified.\\n\\n\\\"the partition function estimated via importance sampling would lead to bias favoring the random field language model\\\" --- this comment is not clear to me. \\n\\nBoth Eq.4 and Eq.5 give estimates for perplexity. It would be better to clarify different uses of the two equations. If the perplexities are estimated using Eq.4 (as in Table 1), then what is the purpose of developing Eq.5?\\n\\nHow to calculate the lower and upper bounds of the step-wise perplexity gain at each position in Figure 1?\\n\\nUnder Figure 1, \\\"At each position the lower and upper bounds (see Eq. 4) are estimated using 20,000 samples.\\\" But in the main text, it is said that \\\"We therefore break down perplexity per position in the generated sequences as in Eq. 5\\\" at page 8. It is confusing.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"\", \"contributions\": \"The main contribution of this paper lies in the proposed Residual Energy-based Model (EBM) for text generation. Traditional normalized models operate at the token level with MLE training, while the proposed EBM operates at the sentence level. Therefore, BERT, or RoBERTa, can be leveraged for EBM design. The residual energy function is trained via conditional NCE, which reduces to training a binary classifier to discriminate between real text and text generated by an auto-regressive language model. After model training, text can be generated by top-k joint sampling. Experiments are conducted on two large language modeling datasets. The proposed model achieves lower perplexity, and preferred by human evaluation.\", \"strengths\": \"(1) Writing & Clarity: This paper is well written, easy to follow, and clearly presented. I enjoyed reading this paper. \\n\\n(2) Novelty: The proposed model contains some novelty inside. It is framed in a residual EBM framework, though by the end, the residual energy function reduces to training a binary classifier to discriminate real and fake text. Though simple, this idea is wrapped up in a nice framework. It is also interesting to observe that this sequence-level EBM regularization can be considered as a way to fine-tune BERT for the text generation task.\\n\\n(3) Experiments: Generally, the experiments are comprehensive. Detailed analysis, and human evaluation is also provided.\", \"weaknesses\": \"(1) Clarity: I have some concerns regarding the selection of baselines, with details shown below.\\n\\nThis paper is basically about using BERT as a binary classifier, which serves as a residual energy function to regularize a pre-trained language model and provides sequence-level supervision. The experiments are comprehensive, but on the other hand, it is also quite expected that the proposed model should work better than an MLE baseline, since sequence-level supervision is provided. \\n\\nI think if the authors want to make a stronger paper, they should also compare with other possible ways to inject sequence-level supervision. For example, a simple solution is to use GAN, like in a SeqGAN setup. And the discriminator in the GAN will be the same BERT-based binary classifier. In this GAN setup, sequence-level supervision is also provided. \\n\\nThen the difference is that in the GAN setup, the BERT-based binary classifier is a discriminator, but in this paper's setup, it is a residual energy function. It would be interesting to discuss and conduct experiments to see which way is better. \\n\\n(2) Experiments: I have some concerns regarding the experimental setup. \\n\\na) One of the main results is Table 1, which reports all the PPL numbers. However, reporting PPL results is less interesting, because we also care about the diversity of generated samples. Lower PPL does not necessarily mean higher-quality text. Though Figure 2 provides some analysis on the diversity, a more comprehensive evaluation on this will be appreciated. \\n\\nb) It will be good if the authors can also provide some generated samples for qualitative analysis. \\n\\nOverall, I think this paper is well executed. 
The paper is well written, and experiments are carefully conducted. However, on the other hand, I also think the conclusion in this paper is expected, it only shows that the proposed model is better than an MLE baseline.\"}"
]
} |
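The generation scheme the residual-EBM record above describes, sampling candidates from the base language model and reranking by the residual energy, reduces to a short importance-sampling step, since the joint model is proportional to P_lm(x) exp(-E(x)). The sketch below uses hypothetical stand-ins: the candidate strings and energy values are dummies, whereas the paper's actual energy function is a BERT/RoBERTa-style scorer.

```python
# Minimal sketch of importance-sampling generation for a residual EBM;
# candidate strings and energy values are dummy stand-ins, not model outputs.
import torch

def resample(candidates: list, energies: torch.Tensor) -> str:
    """Pick one LM sample with probability proportional to exp(-E(x))."""
    weights = torch.softmax(-energies, dim=0)  # normalized importance weights
    idx = torch.multinomial(weights, num_samples=1).item()
    return candidates[idx]

candidates = ["sample a", "sample b", "sample c"]  # drawn i.i.d. from the base LM
energies = torch.tensor([1.3, 0.2, 2.1])           # E(x) from the residual energy model
chosen = resample(candidates, energies)            # lower energy => more likely kept
```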
BylQSxHFwr | AtomNAS: Fine-Grained End-to-End Neural Architecture Search | [
"Jieru Mei",
"Yingwei Li",
"Xiaochen Lian",
"Xiaojie Jin",
"Linjie Yang",
"Alan Yuille",
"Jianchao Yang"
] | Search space design is very critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms. This search space allows a mix of operations by composing different types of atomic blocks, while the search space in previous methods only allows homogeneous operations. Based on this search space, we propose a resource-aware architecture search framework which automatically assigns the computational resources (e.g., output channel numbers) for each operation by jointly considering the performance and the computational cost. In addition, to accelerate the search process, we propose a dynamic network shrinkage technique which prunes the atomic blocks with negligible influence on outputs on the fly. Instead of a search-and-retrain two-stage paradigm, our method simultaneously searches and trains the target architecture. Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with a small searching cost. We open our entire codebase at: https://github.com/meijieru/AtomNAS. | [
"Neural Architecture Search",
"Image Classification"
] | Accept (Poster) | https://openreview.net/pdf?id=BylQSxHFwr | https://openreview.net/forum?id=BylQSxHFwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_LwcM0laOv",
"BJeTrhF2jB",
"BkxRfHg3jS",
"SyeaqNe3jr",
"S1x9Czlnjr",
"B1xOjiLpKB",
"S1xUnP03KS",
"SketHivcFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745118,
1573850181127,
1573811477840,
1573811348675,
1573810897529,
1571806112453,
1571772333707,
1571613505415
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2280/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2280/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2280/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2280/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2280/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2280/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2280/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Reviewer #1 noted that he wishes to change his review to weak accept post rebuttal, but did not change his score in the system. Presuming his score is weak accept, then all reviewers are unanimous for acceptance. I have reviewed the paper and find the results appear to be clear, but the magnitude of the improvement is modest. I concur with the weak accept recommendation.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply To Common Concerns\", \"comment\": \"We appreciate the invaluable comments from the reviewers. Below is our response to the common concerns and questions from all reviewers.\\n\\n- Code Release\\n\\nWe have released the whole codebase including search, which could be accessed with the following links:\", \"https\": \"//anonymous.4open.science/r/ced78872-1992-43b9-ad69-2d611a14616d/\\n\\n- Transfer learning on COCO Object Detection and Instance Segmentation\\n\\nTo address the concern that the experiment in the paper is not enough, we assess the performance of AtomNAS models as feature extractors for object detection and instance segmentation on COCO dataset. For more details, please check the appendix A at the end of our revised paper.\\n\\nWe first pretrain AtomNAS models (without Swish activation function and Squeeze-and-Excitation (SE) module) on ImageNet, use them as drop-in replacements for the backbone in the Mask-RCNN model, by building the detection head on the last feature map, and finetune the model on COCO dataset. \\n\\nWe use the open-source code MMDetection (https://github.com/open-mmlab/mmdetection ). All the models are trained on COCO train2017 with batch size 16 and evaluated on COCO val2017. Following the schedule used in the open-source implementation of TPU-trained Mask-RCNN (https://github.com/tensorflow/tpu/tree/master/models/official/mask_rcnn ), the learning rate starts at 0.02 and decreases by a scale of 10 at 15-th and 20th epoch respectively. The models are trained for 23 epochs in total.\\n\\nThe results are shown below. The detection results of baseline models are from [1]. We can see that all three AtomNAS models outperform the baselines on both object detection task. The results demonstrate that our models have better transferability than the baselines.\\n\\n| Model | FLOPs | Cls (%) | detect-mAP (%) | seg-mAP (%) |\\n| ------------------------- | -------- | ----------- | ---------------------- | ------------------ |\\n| MobileNetV2 | 301M | 73.6 | 30.5 | - |\\n| Proxyless (mobile) | 320M | 74.6 | 32.9 | - |\\n| SinglePath+ | - | 75.6 | 33.0 | - |\\n| AtomNAS-A | 258M | 74.6 | 32.7 | 30.1 |\\n| AtomNAS-B | 326M | 75.5 | 33.6 | 30.8 |\\n| AtomNAS-C | 360M | 75.9 | 34.1 | 31.4 |\\n| ------------------------- | -------- | ----------- | ---------------------- | ------------------ |\\n\\n\\n=== References ===\\n[1] Stamoulis, et al., Single-Path Mobile AutoML: Efficient ConvNet Design and NAS Hyperparameter Optimization. In CoRR, 2019.\"}",
"{\"title\": \"Authors' Reply to Review #1\", \"comment\": \"We thank the reviewer for the detailed comments. Below we provide responses to each concern.\", \"q\": \"The biggest problem of this paper is that experiment is not enough. It would be more convincing if experiments on other popular datasets (CIFAR10/100 etc.) or tasks (object detection, semantic segmentation, etc.) are implemented.\\n\\nSee 'Reply To Common Concerns' above.\"}",
"{\"title\": \"Authors' Reply to Review #3\", \"comment\": \"We thank the reviewer for appreciating our method. We respond to each concern below.\", \"q\": \"The authors use only one dataset: ImageNet. I would like to see results on some other datasets or tasks. For instance, the authors may apply AtomNAS to other image classification datasets or finetuning the pre-trained models on object detection or semantic segmentation tasks.\\n\\nSee 'Reply To Common Concerns' above.\"}",
"{\"title\": \"Authors' Reply to Review #2\", \"comment\": \"We thank the reviewer for the detailed comments. Below we provide responses to each concern.\\n\\n3. How do you justify the generality of the channel-wise search space? For example, is it possible to also search for input/output channel size (column f in Figure 3) and #layers per stage (column n in Figure 3)? Adding some discussions for this would be very helpful.\\n\\nOur formulation (Equation (1)-(3)) is not only for MobileNetV2 block, but also compatible with other structures like conv-conv, conv-maxpool-conv and conv-bn-relu-conv, which are the building blocks of many popular architectures (e.g., VGG Net, Residual block). We use the MobileNetV2 blocks in the experiments as its wide application in state-of-the-art NAS methods thus easing the comparison between our method and them.\\n\\nWe don\\u2019t introduce additional constraints. Most NAS methods (e.g., [1,2,3,4]) specify the skeleton of the supernet where the input/output channel is manually determined. It\\u2019s an interesting research topic in the NAS community. As for \\u201c#layers per stage\\u201d, it is actually allowed to change during the search process: when all atomic blocks within a layer are removed, the layer is essentially a skip connection. Although in practice, this never happens.\\n\\nAt last, we want to emphasize that our method has an exponentially larger and more flexible search space than previous methods. The biggest contribution of our method is that we could use mixed operations instead of selecting one operation from a few options as did in previous methods. In this way, the search space is much more flexible and bigger than the previous ones. For your reference, the total number of possible structures within the experiment is around $10e162$, compared with $10e21$ for FBnet. It\\u2019s straightforward to extend our method into using mixed convolution types, mixed activation functions, and so on.\\n\\n4. The title seems too broad. I recommend the authors including \\u201cchannel-wise\\u201d in the title.\\n\\nThanks for the advice. We will figure out a more proper title. \\n\\n5. Please provide some justifications on how to set $\\\\lambda$ and $c_i$ in Equation (5).\\n \\nAs the \\u03bb increases, the FLOPs of the final model decrease. We set $\\\\lambda$ in a heuristic way, such that the resulting models have similar FLOPs as previous state-of-the-art networks under 400M: MixNet-S [1], MixNet-M [1] and SinglePath [4].\\n\\nc_i\\u2019s are computed by equation 6, where we first calculate the FLOPs of every atomic block in the model and then normalize them globally to get $c_i$.\\n\\n6. When you say \\u201cexpands the input channel number from C to $3\\\\times 6$C\\u201d, what does \\u201c$3\\\\times 6$C\\u201d mean? Is it 18C? What\\u2019s the reason for choosing this specific value?\\n\\nIt means 18C. We choose this value as 6 is widely used as the maximum expansion ratio [1, 2, 3].\\n\\n7. Could you show the accuracy and complexity (either FLOPS or params) of the supernet during the training? This information would be helpful to interpret and justify your algorithm 1.\\n\\nThe top-1 accuracy of the supernet is 78.39. As mentioned in Section 3.3, the FLOPS of supernet is 1521M with 11M parameters.\\n\\n8. The network architecture in Figure 5 is vague. Could you provide the network source code or frozen graph for this specific model?\\n\\nWe have released the code (including search). 
See 'Reply To Common Concerns' above.\\n\\n===References===\\n[1] M. Tan, et al., Mnasnet: Platform-aware neural architecture search for mobile. In CVPR, 2019.\\n[2] H. Cai, et al., Proxylessnas: Direct neural architecture search on target task\\nand hardware. In ICLR, 2019.\\n[3] B. Wu, et al., Fbnet: Hardware-aware efficient convnet design via\\ndifferentiable neural architecture search. In CVPR, 2019.\\n[4] Stamoulis, et al., Single-path NAS: designing hardware-efficient convnets in less than 4hours. In CoRR, 2019.\"}",
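Since the reply above centers on the resource-aware penalty of Equation (5) and the normalized costs c_i of Equation (6), a minimal sketch may help. It assumes each atomic block's importance is carried by a learnable scale factor (e.g., a BN scaling coefficient, as in channel-pruning work); this is our reading rather than the paper's exact implementation.

```python
import torch

def resource_aware_penalty(gammas, block_flops, lam):
    """FLOPs-weighted L1 penalty on per-atomic-block scale factors.

    gammas:      1-D tensor, one learnable scale factor per atomic block.
    block_flops: 1-D tensor, FLOPs of each atomic block.
    lam:         trade-off weight; larger lam drives the searched model
                 towards lower FLOPs (blocks whose scale shrinks to near
                 zero are pruned on the fly by dynamic network shrinkage).
    """
    c = block_flops / block_flops.sum()      # globally normalized costs c_i
    return lam * (c * gammas.abs()).sum()    # lambda * sum_i c_i * |gamma_i|

# total_loss = task_loss + resource_aware_penalty(gammas, flops, lam=1e-4)
```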
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This basic idea of this paper is to decompose the common building blocks of large network into atomic blocks, which equips NAS with more fine-grained search space. What's more, the authors propose a resource-aware search to reduce the computation and dynamically shrinkage the model to accelerate the learning. Retraining the final network is no longer needed. They achieve state of art on ImageNet under several complexity constraints.\", \"pros\": \"\", \"novel_idea\": \"the insight of this paper is that \\\"larger network building blocks can be represented by an ensemble of atomic blocks\\\". With this in hand, it can search the exact channel number through channel selection (i.e. atomic block selection, according to my understanding).\", \"efficiency\": \"Resource-aware selection and dynamical shrinkage of the model also make it more efficient in inference and training.\", \"cons\": \"It would be better if the author could provide some comparison on GPU time. Since FLOPs is only an indirect metric for speed evaluation. \\n\\nThe biggest problem of this paper is that experiment is not enough. It would be more convincing if experiments on other popular datasets (CIFAR10/100 etc.) or tasks (object detection, semantic segmentation, etc.) are implemented.\", \"conclusion\": \"This is an interesting paper with novel idea and efficient implementation. However, more experiments are needed to validate the utility of the proposed method.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"[Summary] This paper proposes a channel-wise neural architecture search (NAS) approach. The NAS search algorithm is similar to previous one-shot NAS, but the search space is channel-wise: each channel has it\\u2019s own kernel size, which is quite novel and interesting. Results are strong in terms of FLOPS and parameters.\\n\\n[High-level comments]:\\n\\n1. I like the novel idea of channel-wise search space, which provides great flexibility for NAS. Although some recent works (e.g., MixNet) have tried to partition channels into groups, this paper goes further and searches for different kernel size for each single channel. With this channel-wise search space, it naturally enables a combination of per-channel kernel size selection and per-channel pruning, leading to strong results in terms of FLOPS and parameters, as shown in Figure 4 and Table 1.\\n\\n2. In general, channel-wise NAS is difficult as different channels are often coupled in various ways. However, the authors observe that recent NAS (such as MnasNet/MixNet/SCARLET-A/EfficientNet) are mostly based on a common MB pattern. By targeting to this specific pattern and applying some additional constraints (e.g. fixed input/output channel size and fixed number of layers per stage as shown in Figure 3), the authors successfully make the channel-wise NAS work well. I appreciate the authors efforts, but I am also a little concerned that the proposed approach might be limited to this specific small search space. \\n\\n[Questions and suggestions to authors]:\\n\\n3. How do you justify the generality of the channel-wise search space? For example, is it possible to also search for input/output channel size (column f in Figure 3) and #layers per stage (column n in Figure 3)? Adding some discussions for this would be very helpful.\\n\\n4. The title seems too broad. I recommend the authors including \\u201cchannel-wise\\u201d in the title.\\n\\n5. Please provide some justifications on how to set \\u03bb and c_i in Equation (5).\\n\\n6. When you say \\u201cexpands the input channel number from C to 3 \\u00d7 6C\\u201d, what does \\u201c3x6C\\u201d mean? Is it 18C? What\\u2019s the reason for choosing this specific value?\\n\\n7. Could you show the accuracy and complexity (either FLOPS or params) of the supernet during the training? This information would be helpful to interpret and justify your algorithm 1.\\n\\n8. The network architecture in Figure 5 is vague. Could you provide the network source code or frozen graph for this specific model?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The authors propose AtomNAS, a neural architecture search (NAS) algorithm, with a new fine-grained search space and the dynamic network shrinkage method. The searched models achieve a new state-of-the-art result on the ImageNet classification task for mobile setting with restricted FLOPs.\", \"The proposed method is novel and technically sound. In addition, the experimental results on ImageNet are impressive. However, the experiment section is not solid enough.\", \"The authors do not include the searching cost and inference latency in Table 1. Different NAS papers have different objectives and searching cost. For instance, ProxylessNAS (Cai et al., 2019) and DenseNAS (Fang et al., 2019) focus on searching cost. They require only 200 and 92 GPU hours (with TITAN XP). However, the proposed AtomNAS takes 32 * 25.5 = 816 GPU hours (with V100). The authors only point out that DenseNAS uses more parameters. It would be better if the authors can make the comparison more transparent.\", \"I wonder when given the same searching budgets as ProxylessNAS and DenseNAS, how well AtomNAS can perform.\", \"The authors use only one dataset: ImageNet. I would like to see results on some other datasets or tasks. For instance, the authors may apply AtomNAS to other image classification datasets or finetuning the pre-trained models on object detection or semantic segmentation tasks.\", \"In general, the paper is well written and easy to follow. I would encourage the authors to add legends to Figure 5 and Figure 6. While the meaning of each color is explained in the caption, it is not straight forward.\", \"In short, the proposed method is interesting and the results on ImageNet are impressive, I weakly accept this paper and hope that the authors can make the experiment section more solid in a revised version.\"]}"
]
} |
S1gmrxHFvB | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | [
"Dan Hendrycks*",
"Norman Mu*",
"Ekin Dogus Cubuk",
"Barret Zoph",
"Justin Gilmer",
"Balaji Lakshminarayanan"
] | Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half. | [
"robustness",
"uncertainty"
] | Accept (Poster) | https://openreview.net/pdf?id=S1gmrxHFvB | https://openreview.net/forum?id=S1gmrxHFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Y-wmMPBIB",
"HkxqnYVhsH",
"S1x-rOioor",
"HygIjPjiiH",
"r1lzMvXior",
"BkgkPVejjS",
"H1eh3Az5ir",
"HJlCqR9FoS",
"ryeT2_OKsB",
"S1eRGbrFoB",
"B1gcvuztjr",
"SkgAtOSPsr",
"rJgjmOrvor",
"Skg8FuaQoB",
"SJlJ-Yvmor",
"HygBn2yTYS",
"BkloWXh9KB",
"SJlwxsHKKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745088,
1573829041567,
1573791800711,
1573791646405,
1573758730165,
1573745750907,
1573691060494,
1573658262102,
1573648564827,
1573634326379,
1573623905940,
1573505157724,
1573505059486,
1573275774333,
1573251319055,
1571777709238,
1571631875445,
1571539694522
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2278/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper tackles the problem of learning under data shift, i.e. when the training and testing distributions are different. The authors propose an approach to improve robustness and uncertainty of image classifiers in this situation. The technique uses synthetic samples created by mixing multiple augmented images, in addition to a Jensen-Shannon Divergence consistency loss. Its evaluation is entirely based on experimental evidence.\\n\\nThe method is simple, easy to implement, and effective. Though this is a purely empirical paper, the experiments are extensive and convincing. \\n\\nIn the end, the reviewers didn't show any objections against this paper. I therefore recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your rebuttal and the insightful comments to my questions. I really appreciate that.\\nI think the rebuttal has addressed some of my main concerns and I will adjust my score accordingly. \\n\\nAlso, I wonder if it would be a good idea to be a bit specific on \\\"Jensen-Shannon consistency loss could potentially be useful for NLP\\\" and include that in the conclusion/future work section of your paper?\"}",
"{\"title\": \"Double Check\", \"comment\": \"Reviewer 3,\\n\\nThank you for your responsiveness throughout this week and helping us improve our paper.\\n\\nWe have added a new Appendix B to give more analysis into how AugMix works.\\nAfter tomorrow OpenReview will not allow us to answer further questions or address other concerns.\\nDo you have any further experiments you should like us to run for you or have any other questions?\"}",
"{\"title\": \"Double Check\", \"comment\": \"Reviewer 1,\\n\\nThank you for your responsiveness throughout this week and helping us improve our paper.\\n\\nAfter tomorrow OpenReview will not allow us to answer further questions or address other concerns.\\nDo you have any further experiments you should like us to run for you or have any other questions?\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for your responsiveness and your reply.\\n\\nWe agree that pointing out limitations is important. One potential limitation of most proposed data processing techniques for natural images, including ours, is that there is not an obvious extension of the entire method to natural language processing. However, at the very least our proposed Jensen-Shannon consistency loss could potentially be useful for NLP.\\nGuo et al. [1] points out a potential issue with manifold intrusion. A possible problem with using data processing techniques including AugMix is that practitioners may include numerous additional augmentations, some of which could potentially change the class and intrude the manifold. We analyze an instance of manifold intrusion [1] in the newly updated Appendix C (Figure 11), as this is an important caveat to mention to the readers. However, we think AugMix in its current form does not have the issue of manifold intrusion since images of different labels are not mixed and the set of augmentations we chose does not change the class like in Figure 11. On all the problems we have tried so far (CIFAR-10, CIFAR-100, ImageNet), AugMix consistently improves performance.\\n\\nWe thank you for your responsiveness. Do you have any remaining concerns?\\n\\n[1] Hongyu Guo, Yongyi Mao, Richong Zhang. MixUp as Locally Linear Out-of-Manifold Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 2019.\"}",
"{\"title\": \"Thanks, but with further question\", \"comment\": \"Thank you for your clarification. That helps.\\nAlso, I found the newly added Appendex B was interesting and useful.\\n\\nHere is one more question if you don't mind.\\n\\nSince there is no guarantee (at least I did not see that) that the proposed method will work. I wonder when you would expect your approach to fail, namely degrading the performance of the baseline model? As an example, the Mixup could suffer from the manifold intrusion problem.\"}",
"{\"title\": \"New Appendix B Added to Further Explain Why AugMix works\", \"comment\": \"Thank you for your responsiveness and your reply. We have added Appendix B which hopes to offer some additional insight into the mechanisms behind AugMix\\u2019s performance.\\n\\n1. \\u201cCalibration of mixup\\u201d: \\nSorry for the confusion, we realize now that our previous comment above \\u201cMixup alone substantially harms calibration\\u201d may have sounded broader than we intended (note this comment was only mentioned in the response above, and not in the paper, so no changes are required in the paper). \\nThere are some differences between our setup and that of Thulasidasan et al. [1]. They are as follows.\\n* Calibration on i.i.d. data vs calibration under shift: First, we consider calibration under unforeseen corruptions, while their analysis is performed on clean data alone. Ovadia et al. 2019 also show that calibration in i.i.d. setting does not always translate to calibration under shift.\\n* Tuning: We also note that [1] tunes the alpha coefficient and in Figure 2h of [1] ( https://arxiv.org/pdf/1905.11001.pdf#page=5&zoom=100,0,89 ) we can see that the model becomes increasingly miscalibrated as alpha approaches the value recommended in [2]. An additional difference is that [1] uses a non-standard weight decay coefficient with Mixup. We use 1e-4, following [2] and [3], but [1] uses 5e-4 weight decay; we have found that 5e-4 weight decay noticeably increases error with Mixup, and [4] also notes that stronger weight decay can influence calibration. We have added in our paper that we use a standard weight decay coefficient as in [2, 3] per your advice.\\n\\n2. \\u201cI wonder why \\\"diverse\\\" here is a good thing?\\u201d\\nWe note that previous works [5,6] have shown that if image modifications are not sufficiently diverse then the network will memorize and overfit to the specific and narrow distribution of modifications seen during training. To attain generalization, it is important to increase the variance of the augmentation distribution which we achieve through much stochasticity.\\n\\n\\u201cwhy the proposed method works.\\u201d: We have added a new Appendix B to give more analysis into how AugMix works. In addition to diversity of augmentations and the explanation in Appendix B, AugMix also enforces consistency between augmentations of the same image, which can be thought of as a way to encourage invariance in classifier predictions with respect to augmentations that preserve semantics. Our ablation experiments in Table 4 show the relative contributions of these two ingredients. We hope this explanation and the newly added Appendix B and Figure 9 shed light on how AugMix provides robustness.\\n\\nWe hope we were able to address your valid concerns and we thank you for your helpful suggestions. Do you have any remaining concerns?\\n\\n[1] On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak. NeurIPS 2019.\\n\\n[2] Zhang, Hongyi, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. \\\"mixup: Beyond empirical risk minimization.\\\" ICLR (2018).\\n\\n[3] Hongyu Guo, Yongyi Mao, Richong Zhang. MixUp as Locally Linear Out-of-Manifold Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 2019.\\n\\n[4] On Calibration of Modern Neural Networks. Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger. 
ICML 2017.\\n\\n[5] Examining the Impact of Blur on Recognition by Convolutional Networks. Igor Vasiljevic, Ayan Chakrabarti, Gregory Shakhnarovich.\\n\\n[6] Generalisation in humans and deep neural networks. Robert Geirhos et al. NeurIPS 2018.\"}",
"{\"title\": \"thank you for your rebuttal and further questions\", \"comment\": \"Thank you for your rebuttal. I really appreciate the additional experimental results and analysis, which I found very helpful. Below please find my further comments after reading your rebuttal and the other reviews.\\n\\n1. Your observation of \\u201cMixup alone substantially harms calibration\\u201d could be further explained or discussed. As shown in Thulasidasan\\u2019s NeurIPS19 paper \\\"On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks\\\", Mixup does significantly improve the model calibration and predictive uncertainty. \\n\\n2. The paper could be further improved if the authors could provide more insight or explanation on why the proposed method works. In your rebuttal, you did mention that \\u201caugmentations produced by AugMix are more \\\"diverse\\\"\\u201d may be one of the reasons for the improved performance of the proposed method. I wonder why \\\"diverse\\\" here is a good thing?\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for running the additional experiments. My concerns have been addressed.\"}",
"{\"title\": \"\\\"Data Shift\\\" Removed, Addressing Concern\", \"comment\": \"Reviewer 3, thank you for your reply.\\n\\nWe followed previous works in characterizing corruption and perturbation robustness as robustness to data shift, but we have heeded your valid concern and have now more clearly specified that we are considering robustness and uncertainty on corrupted distributions that are unseen until test time.\\nSpecifically, we have changed the title by removing the phrase \\\"data shift\\\" altogether from the title. This change is reflected in the PDF, but not yet on the OpenReview website due to OpenReview's restrictions. Should this paper be accepted, we should be able to edit the title and abstract on OpenReview as well. In the text of the PDF, we now more heavily emphasize that we are considering unseen corruption and perturbations. The title, introduction, abstract, and conclusion have been modified to conform to your recommendation.\\n\\nThank you for your responsiveness.\"}",
"{\"title\": \"My concerns are addressed\", \"comment\": \"Thank you for the clarification and adding hyperparameter-tunning results. Most of my concerns are resolved. But I suggest to clearly address the assumption of your paper, the robust and calibrated solution for unseen corruptions.\"}",
"{\"title\": \"New Version of the Paper\", \"comment\": \"The paper has been updated and includes hyperparameter sensitivity analysis and DenseNet results. Hyperparameter results are in Appendix A, and the results indicate that, as hyperparameters vary, AugMix performance is relatively stable. The new DenseNet results show that AugMix reliably improves corruption robustness, perturbation stability, and uncertainty estimation.\"}",
"{\"title\": \"Reviewer #2 Reply\", \"comment\": \"Thank you for your detailed review. We are glad you liked the method and hope that you will champion our paper.\\n\\n1. We trained the AllConvNet and Wide ResNet for 100 epochs since we used a cosine learning rate schedule and not a waterfall learning rate schedule; the latter schedule usually requires 200 epochs, but the former requires fewer epochs for these architectures. On CIFAR-10-C, we observed that training a Wide ResNet for 200 epochs instead of 100 decreased the AugMix error rate by ~0.5%, so AugMix can provide some amount of additional training robustness when trained for longer. However, training with Cutmix for 200 epochs on CIFAR-10-C increases the error rate by 2%. You are correct to note that our AutoAugment run was trained for 90 epochs on ImageNet, and we have updated the results with a 180 epoch run. 180 epochs gives similar performance to 270 epochs, according to a recent correspondence with the authors of AutoAugment. The clean accuracy of AutoAugment is nearly that of the accuracy in the original paper, even though we disable a few operations so as to maintain separation from ImageNet-C test corruptions.\\n\\n2. We have added new hyperparameter analysis experiments in Appendix A. This section includes an experiment to analyze the Jensen-Shannon loss. One possible explanation for the performance gain realized by JS(Porig;Paugmix1;Paugmix2) over JS(Porig;Paugmix1) is that the former reduces the variance of the estimate of the true mixture distribution.\\n\\n3. It is difficult to directly compare Patch Uniform to numbers from the Patch Gaussian paper since we follow convention and evaluate on 224x224 images not 299x299 images, while the Patch Gaussian paper evaluates on 299x299 images. In a footnote on page 4, the authors note that they will update their paper with results on 224x224 images, after which we will be able to directly compare against a well-tuned tuned form of Patch Gaussian.\\n\\n4. We note in Table 2 that AugMix also improves clean accuracy on ImageNet, though we are primarily interested in improving robustness. While it is plausible that this field may encounter a (possibly small) tradeoff between robustness and accuracy, our simultaneous improvements in both directions show that we are not at that point yet for corruption and perturbation robustness. As we limited ourselves to the set of augmentation operators used in AutoAugment, expanding the pool of label-preserving data augmentations would be a straightforward extension of AugMix that would likely yield additional improvement. Our experiments on AugMix+SIN show that AugMix may be combined in alongside other robustness methods without additional tuning.\\n\\n5. Your suggestion is incorporated in the new version of the paper. Thank you.\"}",
"{\"title\": \"Clarifying the Problem Setup and a Comparison to AutoAugment\", \"comment\": \"Thank you for your detailed response.\\n\\n1. \\u201cThe claim about the improvement of uncertainty also is not supported well by the experiments\\u201d\\nWe should like to point to the middle of Figure 7 showing calibration on ImageNet-C. This is a challenging problem as pointed out by the paper you mentioned [1]. AugMix significantly improves the calibration of the baseline method. Furthermore, combining AugMix with ensembles (the best performing method in Ovadia et al. [1]) significantly improves performance and achieves much better calibration under distributional skew as demonstrated by the near-horizontal calibration error line. To the best of our knowledge, AugMix + ensembles achieves state-of-the-art performance on calibration under distribution skew on ImageNet-C. If ensembles are too expensive, then a single-model with AugMix provides superior ImageNet-C calibration over ensembles. In addition, Figure 6 (right) and Table 5 also show that AugMix significantly improves calibration. We hope this evidence substantiates our claim that AugMix improves uncertainty estimates.\\n\\n\\u201cData shift\\u201d is sometimes used interchangeably with \\u201cdistributional skew\\u201d and \\u201cdistribution shift.\\u201d For instance, the paper you mentioned by Ovadia et al. [1] evaluate \\u201ccalibration under dataset shift\\u201d on images using ImageNet-C and CIFAR-10-C, and we do too.\\n\\nThat said, \\u201cdata shift\\u201d is not often used interchangeably with \\u201cdomain adaptation.\\u201d Our paper does not contain experiments with MNIST classifiers transferring to SVHN since that is in the realm of domain adaptation, a setup which assumes knowledge of the structure of the data shift or access to a fine-tuning set. We consider the problem of robustness to unseen corruptions, where we assume no knowledge of the data shift a priori.\\n\\n2. Comparing AugMix to AutoAugment and Mixup\\nWe believe that the simplicity of our method is a feature. AugMix is not a direct combination of AutoAugment and Mixup. AutoAugment requires training several thousand models to find an augmentation policy, whereas AugMix requires training only one. Hence, its computational cost is in league with that of traditional data augmentation techniques, but even so AugMix can outperform AutoAugment. While we use convex combinations of augmentations of one image, this does not make it an extension of Mixup. In Mixup, examples are from different classes are mixed, while we do nothing of the sort. While AugMix's name may suggest that it is a combination of AutoAugment and Mixup, the proposed method does not mix different training images and obviates the need for training several thousand models.\\n\\nWe believe AugMix works better because (i) augmentations produced by AugMix are more \\\"diverse\\\" as the base operations are randomly sampled and randomly mixed in every minibatch and (ii) consistency between augmentations is enforced with our Jensen-Shannon divergence loss. Our ablation experiments in Table 4 show the relative contributions of these ingredients.\\n\\n3. Further ablations\\nThanks to your reasonable request, we are running numerous additional ablations. We aim to share these results soon, which are so far indicating that AugMix is stable across different hyperparameter choices.\\n\\n4. Competing with other techniques\\nThere are few techniques in the nascent area of data shift. 
However, we compared to all existing techniques proposed to tackle data shift (SIN, MaxBlurPool, etc.). In addition, we also compared to numerous other techniques (Cutmix, adversarial training, etc.) in order to provide an extensive comparison.\\n\\nWe hope we were able to address your concerns and we thank you for your helpful suggestions. Do you have any remaining concerns?\\n\\n[1] Ovadia, Yaniv, et al. \\\"Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift.\\\" NeurIPS 2019.\"}",
"{\"title\": \"AugMix Improves DenseNet Robustness\", \"comment\": \"Thank you for your careful analysis of our paper.\\n\\n1. In choosing our base augmentation operators we opted to reuse the operators in AutoAugment for the sake of simplicity, while taking out the ones which appeared in the ImageNet-C test set. As for the scalar hyperparameters, we have launched several ImageNet sweeps perturbing depth, width, count, and beta/dirichlet coefficients. We aim to share these results with you soon, and these experiments are so far indicating that, across different hyperparameter choices, AugMix is quite stable.\\n\\n2 and 4. \\nThanks for pointing out the paper by Guo et al., 2019 [1]. We have cited it in the revision and added a discussion in the related work.\\n\\nWe would like to clarify that our method does not mix images of multiple classes together, but rather the three augmentation chains are created from a single image and mixed back into one single image. We should like to note that the AugMix pseudocode and Figure 4 illustrate that the label remains constant through the augmentation process. Since the base transformations are label preserving, and we only mix different augmentations of the same image, we do not believe the manifold intrusion phenomenon presents a significant issue to our method unlike mixup, as most of the operations are fairly structure-preserving. One might expect a performance drop coincident with manifold intrusion, but AugMix increases performance on both clean and corrupted inputs. In view of your concern, we looked at several example images from AugMix and did not observe class collisions. However, we agree that the concept of manifold intrusion from Guo et al., 2019 for mixup is a real concern.\\n\\n3. Additional discussion\\n\\nOne possible explanation for the performance drop of \\u201cAugMix on top of Mixup\\u201d is that mixup is not a label preserving transformation, and applying augmix on top of mixup could suffer from manifold intrusion, thereby causing error rate to increase. We have added a comment linking to the relevant explanation from Guo et al., 2019 in the paper. Mixup alone substantially harms calibration as well, which means combining it with AugMix would make uncertainty estimates worse too. However, In Table 2 we show that combining AugMix with another label preserving augmentation such as SIN, outperforms both AugMix and SIN individually. Hence AugMix can combine with well with other techniques.\\n\\nFuture work on extending AugMix by including ideas from n-fold Mixup (Guo et al. 2019) to avoid manifold intrusion could yield further benefits.\\n\\n5. Additional experiments\\n\\u201cDoes the method work for other network architectures such as DenseNet?\\u201d \\nTo test robustness across network architectures, we report results with AllConvNet, Wide ResNet, and ResNeXt in Tables 1, Table 5, Table 6, and we observe that AugMix significantly improves performance across different architectures.\\nOn DenseNet, we observe that the error rate greatly decreases from 30.7% (baseline) to 12.7% (AugMix). We have added DenseNet results (Table 1, 5, 6) in the updated draft, thanks to your suggestion. \\n\\nWe hope we were able to address your valid concerns and we thank you for your helpful suggestions. Do you have any remaining concerns?\\n\\n[1] Hongyu Guo, Yongyi Mao, Richong Zhang. MixUp as Locally Linear Out-of-Manifold Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 2019.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a novel method called augMix, which creates synthetic samples by mixing multiple augmented images. Coupled with a Jensen-Shannon Divergence consistency loss, the proposed method has been experimentally, using CIFAR10, CIFAR100, and ImageNet, shown to be able to improve over some augmentation methods in terms of robustness and uncertainty.\\n \\nThe paper is very well written and easy to follow. The idea of the approach is simple, and should be easy to be implemented. The evaluation of the proposed method is currently based on experimental evidence. Nevertheless, the empirical studies in its current form could be further improved. Please see my detailed comments below.\\n\\n1. The proposed approach relies on a chain of augmented methods. In this sense, experimental studies on the sensitivity for how the augmentation methods in the chain (e.g., augmentation operations) and their chain structure (e.g., length of the chain) impact the performance of the augMix should be provided. This is in particular relevant because the authors did mention that \\u201cadding even more mixing is not necessarily beneficial\\u201d on page 8.\\n\\n2. Since the proposed method mixes multiple augmented images, a more appropriate comparison baseline would be a method involving creating synthetic data with multiple images. For example, the n-fold Mixup method as discussed in Guo AAAI2019 (Mixup as Locally Linear Out-Of-Manifold Regularization).\\n\\n3. Some experimental results/observations deserve further discussions. For example, on page 8, the authors mention that \\u201capplying augMix on top of Mixup increases the error rate to 13.3%\\u201d. I wonder if the authors could provide any insights or hypothesis on why the proposed model behaviors in this way? \\n\\n4. Would that be any manifold intrusion issue as discussed in Guo\\u2019s AAAI2019 paper? That is, would it be possible to generate images that are very close to some real images but with different labels? For example, by looking at the bottom-center image in the Appendix B, the synthetic image created seems to be close to some birds with other categories. \\n\\n5. Does the method work for other network architectures such as DenseNet?\\n\\n\\n*********new comment**********\\nDuring the rebuttal period, the paper has been improved with additional experimental results, analyses, and observations. I therefore have adjusted my evaluation score accordingly.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper discusses a new data augmentation method which improves the accuracy of the network for several specific shifted domain scenarios. The main goal of the paper is to increase the robustness of the deep model trained on the augmented data to generalize well beyond the data corruption like the rotation, translation, noise,.... For each input, they apply $k$ different operation of image shift and make the weighted combination of them. The weight vector is generated randomly from Dirichlet distribution with the parameter $\\\\alpha$. The weighted combined images would be added to the original image in convex combination. The convex weights are generated from distribution Beta with parameter $\\\\beta$. Later they train the network with adding the Jensen-Shannon divergence for the posterior distributions of augmented images as the consistency regularizer. They show this data augmentation will increase the accuracy of the model for shifted and non-shifted domains and also it leads to more calibrated model for domain shift problem.\", \"pros\": \"the paper is well-written with clear implementation details. The level of experiments are wide and cover different aspects. The experiments shows the significant improvement compared to several baselines. The authors conducted the experiments for a wide range of model-datasets to show the validity of their ideas.\", \"cons\": \"1- The title of this work is a strong claim that is not supported in the paper. In this paper, it is mentioned that AugMix is a data augmentation method that generates data to add to the training set and after training with data augmentation, the method would be more robust to other distortions that can be added to the datasets. Generally, the definition of domain shift is wider than just adding perturbation to the dataset. To support the claim, the paper should also report the results for similar tasks datasets such as CIFAT10-STL10- or MINIST-SVHN for different models and with different domain adaptation methods. The claim about the improvement of uncertainty also is not supported well by the experiments. The method should be tested for many model-datasets specifically, to support improving the uncertainty under the domain shift idea like the paper [1]. \\n\\n2- The novelty of the work is limited. The generating method of distorted images is the combination of previously proposed methods like [2] and [3]. The motivation of why the proposed method is working well is not clear. How this objective function can improve the robustness to the image perturbation but it does not lose the accuracy is not discussed. It would be better if the proposed method were supported by theory and also the intuition and explained why it should get better results than previous data augmentation methods such as AutoAugment [3].\\n\\n3- Fine-tuning the parameters like $k$, $\\\\alpha$ and $\\\\beta$ is not discussed at all.\\n\\n4- To show the robustness of the proposed method to domain shift, the paper compares the proposed method to other data augmentation methods that are not designed for domain shift which seems unfair.\", \"references\": \"[1] Ovadia, Yaniv, et al. \\\"Can You Trust Your Model's Uncertainty? 
Evaluating Predictive Uncertainty Under Dataset Shift.\\\" arXiv preprint arXiv:1906.02530 (2019).\\n[2] Zhang, Hongyi, et al. \\\"mixup: Beyond empirical risk minimization.\\\" arXiv preprint arXiv:1710.09412 (2017).\\n[3] Cubuk, Ekin D., et al. \\\"Autoaugment: Learning augmentation policies from data.\\\" arXiv preprint arXiv:1805.09501 (2018).\"}",
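To make Reviewer #3's summary of the procedure concrete, here is a minimal NumPy sketch of the mixing step: $k$ randomly sampled augmentation chains combined with Dirichlet weights, then a Beta-sampled convex combination with the original image. `augment_ops` stands in for the paper's label-preserving operation set, and the chain-depth sampling is illustrative rather than the paper's exact schedule.

```python
import numpy as np

def augmix(image, augment_ops, k=3, max_depth=3, alpha=1.0):
    w = np.random.dirichlet([alpha] * k)   # weights over the k chains
    m = np.random.beta(alpha, alpha)       # convex weight with the original
    mixed = np.zeros_like(image, dtype=np.float32)
    for i in range(k):
        x = image.astype(np.float32)
        for _ in range(np.random.randint(1, max_depth + 1)):
            op = augment_ops[np.random.randint(len(augment_ops))]
            x = op(x)                      # each op is label-preserving
        mixed += w[i] * x
    return m * image + (1.0 - m) * mixed   # mix back with the original image
```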
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a method called AugMix, which is intended to improve model robustness to data distribution shift. AugMix appears fairly simple to implement. Several new images are created by augmenting an original image through chains of sequentially applied transformations (the \\\"Aug\\\" part of AugMix), then the augmented images are combined together, along with the original image, via a weighted sum (the \\\"Mix\\\" part of AugMix). Additionally, a Jensen-Shannon Divergence consistency loss is applied during training to encourage the model to make similar predictions for all augmented variations of a single image. This technique is shown to achieve state-of-the-art performance on standard robustness benchmarks without loss of clean test accuracy, and is also shown to improve calibration of model confidence estimates.\\n\\nOverall, I would tend to vote for accepting this paper. The method is simple yet effective, the paper is very well written and easy to follow, and experiments are extensive and, for the most part, convincing.\", \"questions\": \"1) The one main concern I have with the training procedure is the amount of time the models were trained for. It is known that model trained with aggressive data augmentation schemes often require much longer training than normal in order to fully benefit from the stronger augmentations. For example, AutoAugment trains ImageNet models for 270 epochs [1], while CutMix trains for 300 epochs [2]. However, the ImageNet experiments in this paper claim to follow the procedure outlined in [3], which only trains for 90 epochs. This is reflected in the clean test accuracy, where AutoAugment only appears to provide a 0.3% gain over standard training, while we might expect 1.3% improvement (according to [2]). The AllConvNet and WideResNet in CIFAR-10 and CIFAR-100 experiments were also trained for only 100 epochs each, where 200 is more conventional. Again this shows in the reported numbers: on WideResNet for CIFAR-10, Mixup only has a 0.3% gain were as we might expect 1% improvement instead [4], and AutoAugment has 0.4% improvement, were as we might expect 1.3% gain if trained longer [1]. My question then is, how much does training time affect results? Do AugMix, and other techniques such as Mixup, CutMix, and AutoAugment, achieve better robustness when models are trained for longer, or do they become more brittle as training time is extended? \\n\\n2) For the Jensen-Shannon divergence consistency, how much worse does it perform when using JS(Porig;Paugmix1) versus JS(Porig;Paugmix1;Paugmix2)? What might cause this behaviour?\\n\\n3) Patch Gaussian is changed to Patch Uniform to avoid overlap with corruptions in ImageNet-C. How does Patch Uniform compare to Patch Gaussian in terms of performance for non-Gaussian noise corruptions?\\n\\n4) How does AugMix perform as an augmentation technique in terms of clean test accuracy compared to other SOTA techniques? Is there a trade-off between clean test accuracy and robustness, or does AugMix improve performance in both domains? 
Can AugMix be combined with other augmentation techniques or does this destroy robustness properties?\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": \"5) It would be nice if the best result in each column could be bolded in Tables 2-4.\", \"references\": \"[1] Cubuk, Ekin D., Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. \\\"Autoaugment: Learning augmentation policies from data.\\\" CVPR (2019).\\n\\n[2] Yun, Sangdoo, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. \\\"Cutmix: Regularization strategy to train strong classifiers with localizable features.\\\" ICCV (2019).\\n\\n[3] Goyal, Priya, Piotr Doll\\u00e1r, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. \\\"Accurate, large minibatch sgd: Training imagenet in 1 hour.\\\" arXiv preprint arXiv:1706.02677 (2017).\\n\\n[4] Zhang, Hongyi, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. \\\"mixup: Beyond empirical risk minimization.\\\" ICLR (2018).\"}"
]
} |
BygMreSYPB | Learning Latent Dynamics for Partially-Observed Chaotic Systems | [
"Said ouala",
"Duong Nguyen",
"Lucas Drumetz",
"Bertrand Chapron",
"Ananda Pascual",
"Fabrice Collard",
"Lucile Gaultier",
"Ronan Fablet"
] | This paper addresses the data-driven identification of latent representations of partially-observed dynamical systems, i.e. dynamical systems in which some components are never observed, with an emphasis on forecasting applications and long-term asymptotic patterns. Whereas state-of-the-art data-driven approaches rely on delay embeddings and linear decompositions of the underlying operators, we introduce a framework based on the data-driven identification of an augmented state-space model using a neural-network-based representation. For a given training dataset, it amounts to jointly reconstructing the latent states and learning an ODE (Ordinary Differential Equation) representation in this space. Through numerical experiments, we demonstrate the relevance of the proposed framework w.r.t. state-of-the-art approaches in terms of short-term forecasting errors and long-term behaviour. We further discuss how the proposed framework relates to Koopman operator theory and Takens' embedding theorem. | [
"Dynamical systems",
"Neural networks",
"Embedding",
"Partially observed systems",
"Forecasting",
"chaos"
] | Reject | https://openreview.net/pdf?id=BygMreSYPB | https://openreview.net/forum?id=BygMreSYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3tHB8yvUa0",
"SJxNsrX2or",
"SJgTW8cYjr",
"H1g0MS5YsH",
"rylaafctjB",
"Bkx9VkctjS",
"S1gg5RtYjr",
"rJegRjtFsr",
"HkxEhStFor",
"H1lvkgDO9B",
"rJl7V-W6KH",
"Syxv3Mo2YS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745059,
1573823899825,
1573656068745,
1573655829737,
1573655237117,
1573654321959,
1573654152507,
1573653448295,
1573651883826,
1572528094778,
1571782955364,
1571758767513
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2277/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2277/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2277/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2277/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2277/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2277/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2277/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2277/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2277/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2277/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2277/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents an ODE-based latent variable model, argues that extra unobserved dimensions are necessary in general, and that deterministic encodings are also insufficient in general. Instead, they optimize the latent representation during training. They include small-scale experiments showing that their framework beats alternatives.\\n\\nIn my mind, the argument about fixed mappings being inadequate is a fair one, but it misses the fact that the variational inference framework already has several ways to address this shortcoming:\\n1) The recognition network outputs a distribution over latent values, which in itself does not address this issue, but provides regularization benefits.\\n2) The recognition network is just a strategy for speeding up inference. There's no reason you can't just do variational inference or MCMC for inference instead (which is similar to your approach), or do semi-amortized variational inference.\\n\\nBasically, this paper could have been somewhat convincing as a general exploration of approximate inference strategies in the latent ODE model. Instead, it provides a lot of philosophical arguments and a small amount of empirical evidence that a particular encoder is insufficient when doing MAP inference. It also seems like a problem that hyperparameters were copied from Chen et al 2018, but are used in a MAP setting instead of a VAE setting. Finally, it's not clear how hyperparameters such as the size of the latent dimensions were chosen.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Official Blind Review #3 update\", \"comment\": \"Thank you for the long and detailed review and rebuttal. I am now reading the updated version of the paper.\"}",
"{\"title\": \"Reviewer 3 answers 1\", \"comment\": \"The authors would like to thank the reviewer for his valuable comments and suggestions, please find below the our answers.\\n\\n>> \\\"Relatively well written (if sometimes confusing) paper that reinvents the inference of latent variables ...\\\"\", \"a\": \"For an application to forecasting, we need to infer the initial condition in the latent space (more precisely, the unknown component y of the augmented state) from a given past observation series. This is stated as a minimisation issue (Eq.7), i.e. the inference of the latent state sequence which best match the past observation series w.r.t. a learnt ODE model. Similarly to the training phase, this inference is solved using a gradient descent. As in any gradient descent algorithm, one has to set an initialization. Different initialization strategies were explored, especially strategies which benefit from the latent state sequences inferred during the training phase.\\n\\nIn most related work, as mentioned above, no such minimization is required as one defines an explicit mapping between the observed time series and the latent state sequence. For instance, in Latent-ODE scheme, one would first predict the initial latent state from a past observation series using a learnt mapping model. Second, an ODE integration scheme is applied from this initial condition to predict future states.\"}",
"{\"title\": \"Reviewer 3 answers 2\", \"comment\": \">> \\\"The literature review mixes older and newer references ...\\\"\", \"a\": \"We would like to thank the reviewer for the good feedback on the SLA application. We argue that the proposed methodology is significantly different from the previous work pointed out by the reviewer. In addition to the fact that none of these interesting papers addresses the identification of an ODE representation for partially-observed systems, we may stress the following points (paper numbers corresponding to the reviwers comment not to our papers numbers):\\n\\nPaper [1]: This paper also addresses the long-term simulation of the trajectories of the system using a time delay embedding and SVR in the defined latent space. As mentioned above, a critical aspect of such schemes is the selection of the time delay. In [1], reported results show that simulated trajectories match real ones up to 4 Lorenz time using 10000 training data. We reach a similar performance using 4000 training data. In our experiments, we used as baselines time delay embeddings combined with nearest-neighbor (analog methods) and sparse regression (SR) techniques. The later is very similar to a SVM. It actually relates to an ODE formulation and accounts for non-linear (polynomial) terms in the ODE, which is known to be critical to reproduce chaotic Lorenz-63 dynamics. Our experiments point out that we reach much better performance than those baselines both in terms of forecasting performance and long-term behaviour (cf. largest Lyapunov exponents in Tab.1). The reported results also stress that the inference of the largest Lyapunov exponent, i.e. the ability of the learnt model to reproduce realistic long-term patterns, is very sensitive to the considered delay embeddings. \\n\\nPaper [4]: this paper does not consider the case of partially observed systems since in the presented experiment the latent states can be fully determined given the observations, which is a much simpler experimental setup. As such, it does not involve the definition of some augmented state. From a methodological point of view, the authors use an Ensemble Kalman filtering scheme to reconstruct the latent state from the observation series. Here, we prefer considering a variational setting, which is similar to 4D-Var assimilation scheme. This choice appears more natural to provide an end-to-end learning framework using deep learning framework. \\n\\nPaper [5] : From a modeling perspective, this paper states the dynamics of the latent states as a function of the previous latent states as well as of the previous observations. As such, it relies on an explicit Takens's delay embedding strategy, for which the selection of the time delay is critical. In our work, there is no such requirement for selecting a time delay. Importantly, [5] does not aim to identify an ODE representation. Besides, to the best of our knowledge, the suggested paper [5] does not achieve state-of-the-art (based on delay embedding) attractor reconstruction using closed loop forecast as the presented attractor reconstruction application in [5] is just the projection of the observed variables in the latent space. Similar results can be obtained using an appropriate delay embedding without any dynamical model. \\n\\nPaper [6] : The proposed architecture models the dynamics in the observation space and uses a binary latent space as a proxy for increasing the expressiveness of the model. 
Although this technique seems relevant for modeling periodic human motion, a huge limitation of this model is that it cannot model chaotic behavior (as discussed in the paper since two different regimes as running and walking can not be generated from the same initial condition).\\n\\nPaper [7] : The proposed model is limited in terms of architecture since it uses RBF approximations for the dynamical and observation models. The proposed architecture aims to infer low-dimensional latent space where the dynamics of the observations evolve (which is represented in our paper by the operator M). This architecture is similar to the tested Latent-ODE model where the training of the model is based on likelihood maximization of the posterior over the latent states given the observations. As stated above, in our scheme, we do not explicitly constrain the mapping between the observation series and latent space according to some non-linear (possibly complex) model. We only consider an implicit mapping through a minimization issue. Furthermore, in [7], the results are presented only for a short term forecast application (8 frames) where in our paper we are also interested in the long forecast of the model.\"}",
"{\"title\": \"Reviewer 3 answers 3\", \"comment\": \"Regarding the definition of the experimental setup, we considered a training dataset with 4000 points, which is a trade-off w.r.t. previous works which performed experiments with 2000 [5] and 10000 [1] points. To our knowledge, none of these previous works compute Lyapunov exponents to evaluate the long-term behaviour of the learnt models, which make direct comparisons with published results more complex. This is the reason why we report experiments with different models compared within the same experimental setup. Benchmarked models include both \\\"old\\\" state-of-the-art schemes that do achieve reasonably good long-term patterns on Lorenz-63 dynamics, e.g. analog schemes combined with a delay embedding, state-of-the-art ODE inference schemes combined with delay embedding schemes (Sparse regression method and Latent-ODE scheme) and a simple RNN model. We may also point out that Latent-ODE exploits a state-of-the-art deep learning framework (pytorch) and involves a LSTM mapping to infer the latent state from the observation series. As such, it may also be regarded as being representative of LSTM-based deep learning models using dely embeddings with the additional benefit of identifying an ODE representation as targeted in our work.\\n\\nFinally, all the minor comments were addressed in the revised version of the manuscript. Specifically, the operator M was omitted in the mathematical development of the paper to provide a more friendly written development similar to classical SSM. The generalization of the approach to ROM (including the operator M) is now in the appendix. \\n\\n[1] Mattera & Haykin (1999) \\\"Support vector machines for dynamic reconstruction of a chaotic system\\\"\\n[2] Muller, Smola, Ratsch, Scholkopf, Kohlmorgen & Vapnik (1999) \\\"Using support vector machines for time-series prediction\\\"\\n[3] Wan (1994) \\\"Time series prediction by using a connectionist network with internal delay lines\\\"\\n[4] Ghahramani, and Roweis (1999) \\\"Learning nonlinear dynamical systems using an EM algorithm\\\"\\n[5] Mirowski & LeCun (2009) \\\"Dynamic Factor Graphs for Time Series Modeling\\\"\\n[6] Taylor, Hinton & Roweis (2006) \\\"Modeling human motion using binary latent variables\\\"\\n[7] Wang, Fleet & Hertzmann (2006) \\\"Gaussian process dynamical models\\\"\\n[8] Krishnan, Rahul G., Uri Shalit, and David Sontag. \\\"Deep Kalman Filters.(2015).\\\" arXiv preprint arXiv:1511.05121 (2015).\\n[9]Fraccaro, Marco, et al. \\\"Sequential neural models with stochastic layers.\\\" Advances in neural information processing systems. 2016.\\n[10] Chen, Tian Qi, et al. \\\"Neural ordinary differential equations.\\\" Advances in neural information processing systems. 2018.\"}",
"{\"title\": \"Reviewer 1 answers 2\", \"comment\": \"Finally, all the minor comments were addressed in the revised version of the manuscript.\"}",
"{\"title\": \"Reviewer 2 answers\", \"comment\": \"The authors would like to thank the reviewer for his valuable comments and suggestions, please find below the our answers.\\n\\n>> \\\"The proposed method requires knowledge of the underlying dynamic model to solve the ODE, which is not fair for other methods.\\\"\", \"a\": \"We apologize for the confusion that might have been caused by our writing. Several paragraphs and sentences were updated in each section, please refer to the revised version of the manuscript for more details. The optimization problem is solved simply by considering the latent variables of u as parameters of the loss function, so we can use automatic differentiation to compute the gradients of the loss function with respect to the model parameters and with respect to the latent states. We then update both the model parameters and the latent states using classical gradient decent techniques.\\n\\nThe derivation of the bijective mapping M depends on the application. For example, regarding the derivation of reduced order models to spatio-temporal fields, Galerkine projections of fluid flows are usually considered since it comes with some physical interpretability of the projection. Here, for the SLA case-study, M is just a PCA projection. In more complex situations such as in [1], one can use an autoencoder to actually learn the mapping M. This can be done offline or online with the learning of the dynamical model parameters and the latent states. The latter technique is particularly interesting and is considered as one of our future works. \\n\\nFor the sake of simplicity we ommited the use of the operator M in the mathematical developpements. we added a section in the appendix to derive the same equations in the case of reduced order models.\\n\\n[1] Champion, Kathleen, et al. \\\"Data-driven discovery of coordinates and governing equations.\\\" arXiv preprint arXiv:1904.02107 (2019).\\n\\nFinally the minor comments were addressed in the revised manuscript.\"}",
"{\"title\": \"Reviewer 1 answers\", \"comment\": \"The authors would like to thank the reviewer for his valuable comments and suggestions, please find below the our answers.\\n\\n>> \\\"I am personally not familiar with the literature on this problem ...\\\"\", \"a\": \"Unfortunately, to the best of our knowledge, there is no other work that ran exactly the same data set for forecasting applications.\\n\\n[1] Chen, Tian Qi, et al. \\\"Neural ordinary differential equations.\\\" Advances in neural information processing systems. 2018.\"}",
"{\"title\": \"General comments\", \"comment\": \"The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. In the revised version of the document, we addressed the issues raised as best as possible. Please find below some general comments.\\n\\nFirst of all, the aim of this paper is to propose a framework to derive ODE representations for partially-observed systems. This is a generalization of recently proposed ODE representations of time series to partially-observed systems with an emphasis on chaotic system reconstruction through long term forecasting of the learnt ODEs. We rely on the inference of a latent space. This technique is only used as a tool to generalize ODE representations to partially observed systems. As detailed below (in the reviewers answers), previous works investigated the inference for latent spaces dynamical systems but with different objectives. \\n\\nWe believe that modeling an arbitrary (deterministic) time series using ODEs is extremely interesting since it can benefit from a broad literature on differential equations to stabilise models (through stability analysis of ODE), understanding the phenomenons underlying the observations (through PDF analysis of the trajectories using differential transport).... On both a synthetic dataset and a real case-study (sea surface dynamics), we show that the proposed framework can significantly improve short-term forecasting performance and reproduce long-term chaotic behaviour.\\n\\nWe may clarify these two objectives from an experimental point of view. This is actually discussed in one of papers pointed out by reviewer 3 [1]. One may emphasize the difference between short term forecasting applications and dynamical system reconstruction using models which is essentially long term forecasting of the approximate models. The later is stated elsewhere in recent literature as closed-loop (iterated) prediction. Quoting the paper proposed by the reviewer : \\\"Abarbanel, in one of his recent publications (Abarbancl, 1996), revealed the distinction between the dynamic reconstruction and prediction problems. According to him, the capability to solve the prediction problem do not always imply the capability to capture the dynamics of the underlying chaotic system. Dynamic recontruction aims at modeling the attractor dynamics (in state-space) while in the prediction problem only the short term prediction capability is of concern, many of the techniques proposed in the literature for chaotic time-series prediction (see Lillekjendlie et al. (1994) For review) fail at solving the dynamical reconstruction problem (Abarhanel, 1996).\\nAs per the above terminology, the dynamic reconstruction problem may be considered as a system approximation problem, not a function approximation one. This means that the obtained model, though trained in an open loop mode of operation, has to be tested by seeding, at first, its input with a point in the trajectory and, then, feeding back the output to its input to generate recursively the outputs. The reconstructed system should be as close as possible to the original one in tenns of its invariants. Two chaotic systems can be considered to be close not only if they present close short-term evolutions from the same initial condition but also if their chaotic invariants are sufficiently close. In particular, one cannot consider that a non-chaotic system be a good approximation of a chaotic one.\\\". 
In our paper, and as stated in the abstract, we are interested in both short-term forecasts and the long-term asymptotic behavior of simulated trajectories (only given the initial condition). The literature proposed by the reviewer 3 only addresses short-term forecasting. The Lorenz-63 dynamics, which lead under specific parameterization as chaotic dynamics, provide a toy example to evaluate both the short-term RMSE and the largest Lyapunov exponent (which is an invariant of the Lorenz 63 dynamical system) as evaluation metrics for learnt models. In the SLA experiment in the other-hand, and due to the lack of stability of the learnt models, we could not compute the Largest lyapunov exponents. However, we show in the appendices that our model still give realistic forecasts up to 175 days with the same dominant frequency as the true data, which is a significantly larger time horizon compared with the benchmarked data-driven models. \\n\\nOverall, we have clarified these aspects in the introduction and related works sections citing the papers suggested by the reviewers and pointing out the differences with the key objectives of our paper.\\n\\n[1] Mattera & Haykin (1999) \\\"Support vector machines for dynamic reconstruction of a chaotic system\\\"\"}",
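To make the quoted distinction between prediction and dynamic reconstruction concrete, a minimal NumPy sketch of open-loop (one-step) versus closed-loop (iterated) prediction follows; `model` stands in for any learnt one-step forecasting operator and is an assumption.

```python
import numpy as np

def open_loop(model, x_true):
    """One-step prediction: every input is taken from the ground truth,
    so errors cannot accumulate. This is the short-term forecasting test."""
    return np.array([model(x) for x in x_true[:-1]])

def closed_loop(model, x0, horizon):
    """Iterated prediction: seed with a single point, then feed each
    output back as the next input. Dynamic reconstruction is judged on
    the invariants of this recursively generated trajectory."""
    traj, x = [], np.asarray(x0, dtype=float)
    for _ in range(horizon):
        x = model(x)
        traj.append(x)
    return np.array(traj)
```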
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper addresses the problem of data-driven identification of latent representations for partially-observed dynamical systems. The proposed method uses an augmented state-space model where the dynamical operator is parameterized using neural networks. Given some training data, the proposed approach jointly estimates the parameters of the model as well as the corresponding unobserved components of the latent space for the training sequences. Experimental results on forecasting using real and synthetic datasets are presented.\\n\\nOverall I think that the paper makes an interesting contribution. I find very interesting the idea of performing inference on the set of unobserved components in the latent space. The empirical results seem sufficient to me, but I am not familiar with relevant baselines (see below). Please find below some comments and questions.\\n\\nI am personally not familiar with the literature on this problem, so my assessment might be affected by this. I did not find the paper easy to read and the presentation assumes a lot of previous knowledge. I think that the background and related work section could be more friendly written (considering the ICLR audience).\\n\\nThe training scheme (described in Section 3) uses one-step-ahead forecasting. The temporal consistency of the unobserved component of the latent space is only loosely enforced with the regularization term in (6). One could train using forecasting with more steps (and only doing inference for the initial y_t of the subsequence), as this is closer to what is used at test time. Do you think this would be helpful for having better accuracy when forecasting more steps?\\n\\nIt would be good to provide more details on how to build the forecasting operator (implementing 4-th order Runge-Kutta scheme) and what is exactly the bilinear architecture of Fabelt et al.\\n\\nRegarding the experimental validation, I like that the paper starts with a simple motivating example and moves to more complex cases. Experimental results are convincing to me, as the model is able to recover the performance of other models that do have access to the full state. I am not familiar with the literature so I'm unable to judge whether all relevant baselines are included. \\n\\nRegarding the Latent-ODE baseline, would results change running with different (larger) dimension for the latent space?\", \"the_paper_should_cite_the_work\": \"Ayed, et al. \\\"Learning Dynamical Systems from Partial Observations.\\\" arXiv preprint arXiv:1902.11136 (2019). Would this be a relevant baseline to compare to?\\n\\nIs the training data regularly-sampled? Would the model be robust the irregularly-sampled training data?\\n\\nThe authors evaluate all methods with one and four step forecasting in the last two experiments. I think that it would be informative to show a wider range of number of steps, to show how performance degrades with longer predictions (more than 4).\\n\\nFinally, regarding the Modelling Sea Level Anomaly task, all baselines are ran by the authors. It would be informative to also include results of prior art using this dataset, if possible.\", \"other_minor_comments\": \"The citation format is wrong. 
Most citations should be using the \\\\citep command\\n\\nIn the second paragraph of Section 1, it says: \\\"Unfortunately, When the\\\"\", \"in_the_caption_of_figure_2_it_says\": \"\\\"according to thr\\\"\\n\\nA few lines before the \\\"Modelling Sea Level Anomaly\\\" subsection there's an exclamation sign before the text\\\"1e-4 for a one-step...\\\"\"}",
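Regarding the request above for details on the forecasting operator, one plausible construction wraps a learnt derivative network f_theta in a fixed 4th-order Runge-Kutta integration cell; the following PyTorch sketch is an assumption about that design (the small MLP stands in for whatever derivative architecture the paper actually uses).

```python
import torch
import torch.nn as nn

class RK4Cell(nn.Module):
    """Forecasting operator u_{t+1} = Phi(u_t): a fixed RK4 step wrapped
    around a learnt derivative model f_theta, so gradients flow through
    the integration scheme into the parameters of f_theta."""
    def __init__(self, dim, hidden=64, dt=0.01):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))
        self.dt = dt

    def forward(self, u):
        k1 = self.f(u)
        k2 = self.f(u + 0.5 * self.dt * k1)
        k3 = self.f(u + 0.5 * self.dt * k2)
        k4 = self.f(u + self.dt * k3)
        return u + self.dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

cell = RK4Cell(dim=3)
print(cell(torch.randn(1, 3)).shape)  # chain calls for multi-step forecasts
```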
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Update: I raised the score from 1 to 3 to acknowledge the authors' consideration for the 2000-2010 literature on learning dynamical systems from partial observations. Unfortunately, the writing is still confusing, some of the claims in the introduction and rebuttal are inexact ([5] does not embed the observations and does work with partially observed environments), and the method lacks originality compared to existing work. Other work that relies on ODE integration and in finding high-dimensional state variables has recently been published and tested on more ambitious datasets, e.g. Ayed et al, \\\"Learning Dynamical Systems from Partial Observations\\\", arXiv 2019.\\n\\n***\\n\\nTL;DR: relatively well written (if sometimes confusing) paper that reinvents the inference of latent variables in nonlinear dynamical systems that has been published in the 2000s, and that misses an important chunk of literature (and experiments on dynamical systems such as Lorenz-63) from that time.\\n\\nThis paper proposes an approach for learning dynamical systems from partial observations x, by using an augmented state variable z that follows dynamics that can be described by an ordinary differential equation (ODE) with dynamics f. The authors motivate their work by the problem of dynamical system identification when only partial observations are available. The authors claim that was to date primarily addressed using time-delay embedding, following Takens' theorem. The authors introduce s-dimensional unknown state variables z, dynamical function f (for the ODE on z), flow phi on z, limit cycles on z, observation function H: z -> x that goes from z to n-dimensional observations x, low k-dimensional manifold r (with a map M: x -> r), and state augmentation variable y. The reconstructed state variable u is the concatenation of r and y. One key ingredient of the method is to infer the optimal value of state augmentation variable y during learning (see equations 5 and 6) and inference for forecasting (7); this is not well explained in the abstract and introduction.\\n\\nI would note that the problem of state space modeling (SSM) and dynamical system identification has been well studied, and the notation and reformulation in this paper is somewhat confusing for those who are used to the notation in SSMs (specifically, expressing the observation approximation as M^{-1}(G(phi(u_{t-1}))). Learning a state-space model involves both learning parameters and inferring the latent states representation (or, in graphical models, the distribution of these latent states) given the parametric models. One approach has been to formulate the state-space model learning by maximum likelihood learning of the model parameters so that the generative model fits observed data x, and this would involve factoring out the distribution of latent states z; the algorithm would rely on Expectation Maximisation, and could involve variational approximations or sampling. While the state space models were hampered by their linearity, several papers in 2000s showed how it is possible to learn nonlinear dynamical models, e.g. [4], [5], [6] and [7] to cite earlier ones. 
Equations (5) and (6) are similar to the standard equations for a dynamical system expressed in continuous time, with the only difference that the optimisation is with respect to y only, rather than w.r.t. z or u (why not \\\\tilde z or \\\\hat z?).\\n\\nThe paper mentions various initialisation strategies for y (last paragraph of section 3). Why not predict from the past of the observations, like is done in many other similar work?\\n\\nThe literature review mixes older and newer references. For example, on page 1, I would note that the Takens' theorem has been applied in conjuction with Support Vector Regression as early as 1999 [1][2], and with neural networks in 1993 [3].\\n\\nMost importantly, the ideas of this paper have already been published in [4] (with architecture constraints on the neural network state-space model), in [5] (with any nonlinear neural network state-space model), in [6] (using Restricted Boltzmann Machines) and in [7] (using Gaussian Process latent variable models).\\nThe model is illustrated with experiments on a 2D linear attractor, on the Lorenz-63 attractor. Given the results published in [1] and [2] using SVR on 1D observations of that attractor, and in [5] using a recurrent neural network, I am unconvinced by these results. It seems in particular that the number of training points (around 4000) limits the performance of RNN / LSTM models. The application to Sea Level Anomaly is interesting.\", \"minor_comments\": \"\\\"Unfortunately, When\\\" (page 1)\\nThere is a missing -1 after M in equation (5) and (10)\\nIn equation (7), should not the sum go from t=0 to T, as x_t is unknown for t>T?\\nWhat is prediction and what is extrapolation on Figure 1?\\nThe caption of Fig 1 contains (left)\\nThe figures seem squeezed with the captions / titles un-aesthetically wide. \\nLabels on Figure 5 in the appendix seem mixed, and red should be the ground truth\\n\\n[1] Mattera & Haykin (1999) \\\"Support vector machines for dynamic reconstruction of a chaotic system\\\"\\n[2] Muller, Smola, Ratsch, Scholkopf, Kohlmorgen & Vapnik (1999) \\\"Using support vector machines for time-series prediction\\\"\\n[3] Wan (1994) \\\"Time series prediction by using a connectionist network with internal delay lines\\\"\\n[4] Ghahramani, and Roweis (1999) \\\"Learning nonlinear dynamical systems using an EM algorithm\\\"\\n[5] Mirowski & LeCun (2009) \\\"Dynamic Factor Graphs for Time Series Modeling\\\"\\n[6] Taylor, Hinton & Roweis (2006) \\\"Modeling human motion using binary latent variables\\\"\\n[7] Wang, Fleet & Hertzmann (2006) \\\"Gaussian process dynamical models\\\"\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The paper proposes a new deep learning approach based on Takens\\u2019s theorem to identify the dynamics of partially observed chaotic systems. In particular, the method augments the state using the solution of an ODE. Experiments on Lorenze-64 dynamics and sea level anomaly demonstrate the advantage of the proposed method over state-of-the-art baselines.\", \"The unification of Taken\\u2019s embedding theorem and deep learning provides a novel perspective into dynamical systems\", \"Impressive experiment results compared with baselines including RNN and latent ODE\", \"The proposed method requires knowledge of the underlying dynamic model to solve the ODE, which is not fair for other methods\", \"The model is trained using data from the same initial conditions, which is essentially overfitting. The authors should provide experiments for dataset from different initial conditions.\", \"The writing is not very clear. For example, how to solve the optimization problem in Eqn (7), as the augmented states u_{t-1} are unknown? How to find the bijective mapping M for general dynamical systems?\"], \"minor\": \"question mark in section 4 page 6. Figures 2 plots are difficult to read, pls provide more details in columns and rows.\"}"
]
} |
SkxzSgStPS | Exploration via Flow-Based Intrinsic Rewards | [
"Hsuan-Kung Yang",
"Po-Han Chiang",
"Min-Fong Hong",
"Chun-Yi Lee"
] | Exploration bonuses derived from the novelty of observations in an environment have become a popular approach to motivate exploration for reinforcement learning (RL) agents in the past few years. Recent methods such as curiosity-driven exploration usually estimate the novelty of new observations by the prediction errors of their system dynamics models. In this paper, we introduce the concept of optical flow estimation from the field of computer vision to the RL domain and utilize the errors from optical flow estimation to evaluate the novelty of new observations. We introduce a flow-based intrinsic curiosity module (FICM) capable of learning the motion features and understanding the observations in a more comprehensive and efficient fashion. We evaluate our method and compare it with a number of baselines on several benchmark environments, including Atari games, Super Mario Bros., and ViZDoom. Our results show that the proposed method is superior to the baselines in certain environments, especially for those featuring sophisticated moving patterns or with high-dimensional observation spaces. | [
"reinforcement learning",
"exploration",
"curiosity",
"optical flow",
"intrinsic rewards"
] | Reject | https://openreview.net/pdf?id=SkxzSgStPS | https://openreview.net/forum?id=SkxzSgStPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-r5JLbwJ5p",
"SJe_-TWosH",
"Skg5I2SDjB",
"rygAnxNSoH",
"B1eO3uGrjr",
"rkx6Fww4iH",
"H1lI1PwNoH",
"ryxnSPL4sH",
"BJgr1UU4or",
"SyefqHL4oS",
"SkleInpWiS",
"S1xxeBsnFB",
"SJxmXHNhtS",
"HJgBCOT4Kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798745028,
1573752063824,
1573506129928,
1573367989523,
1573361839688,
1573316484540,
1573316318361,
1573312323870,
1573311965495,
1573311881917,
1573145671782,
1571759335870,
1571730715501,
1571244237297
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2276/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2276/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2276/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2276/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method for improving exploration by implementing intrinsic rewards based on optical flow prediction error. The approach was evaluated on several Atari games, Super Mario, and VizDoom.\\n\\nThere are several strengths to this work, including the fact that it comes with open source code, and several reviewers agree it\\u2019s an interesting approach. R1 thought it was well-written and quite easy to follow. I also commend the authors for being so responsive with comments and for adding the new experiments that were asked for.\\n\\nThe main issue that reviewers pointed out, and which I am also concerned about, is how these particular games were chosen. R3 points out that these 5 Atari games are not known for being hard exploration games. Authors did conduct further experiments on 6 Atari games suggested by the reviewer, but the results didn\\u2019t show significant improvement over baselines.\\n\\nI appreciate the authors\\u2019 argument that every method has \\u201cits niche\\u201d, but the environments chosen must still be properly motivated. I would have preferred to see results on all Atari games, along with detailed and quantitative analysis into why FICM fails on specific tasks. For instance, they state in the rebuttal that \\u201cThe selection criteria of our environments is determined by the relevance of motions of the foreground and background components (including the controllable agent and the uncontrollable objects) to the performance (i.e., obtainable scores) of the agent.\\u201d But it doesn\\u2019t seem like this was assessed in any quantitative way. Without this understanding, it\\u2019d be difficult for an outsider to know which tasks are appropriate to use with this approach. I urge the authors to focus on expanding and quantifying the work they depict in Figure 8, which, although it begins to illuminate why FICM works for some games and not others, is still only a qualitative snapshot of 2 games. I still think this is a very interesting approach and look forward to future versions of this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"The authors appreciate the perspective shared by the reviewer. To address the second concern from the reviewer, we performed further experiments on the suggested six established hard exploration environments with original sparse reward settings, as in [1]. We compared our proposed method with forward dynamics (Random CNN) in [2], which is similar to one of the baselines \\\"Dynamics\\\" in [3]. \\n\\nThe figures of the six environments can be accessed at the following link, https://imgur.com/Jsp3eSm\\n\\nThese experiments are evaluated for 15M timesteps (~900 parameter updates compared to the experiments in [3]). Please refer to [3] for the results of RND. From the above figure, it can be observed that FICM performs comparably to the forward dynamics baseline in Gravitar, Pitfall, and Solaris, while both of them are not able to acquire any reward in \\u201cMontezuma\\u2019s Revenge\\u201d. However, our results reveal that FICM is able to achieve a score of up to 1000 in Venture within less than 15M timesteps (~900 parameter updates), while RND requires 20K~30K updates to reach the same level of scores. On the contrary, the forward dynamics baseline even fails to receive any reward in this environment.\\n\\nWe respectfully hope that our results and the above discussions could provide a different perspective for the reviewer to reconsider the evaluation. As we mentioned in our first post, every method has its own niche, while a single and unified algorithm is not our primary purpose and original intention. FICM contributes to the concept of employing flow prediction errors from the field of computer vision to generate intrinsic rewards, which has never been discussed in the literature before. Furthermore, the experimental results presented above are fully reproducible and verifiable. Our source code can be accessed at the following link, https://github.com/IclrPaperID2276/iclr_paper_2276\\n\\nWe would be glad to discuss further with the reviewer, and are willing to provide additional results should they are necessary. We look forward to hearing from the feedback of the reviewer.\\n\\n[1] Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471\\u20131479, 2016.\\n[2] Y. Burda, H. Edwards, D. Pathak, A. J. Storkey, T. Darrell, and A. A. Efros. Large-scale study of curiosity-driven learning. In Proc. Int. Conf. Learning Representation (ICLR), May 2019a.\\n[3] Y. Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation. In Proc. Int. Conf. Learning Representations (ICLR), May 2019b.\"}",
"{\"title\": \"Reply\", \"comment\": \"I would like to thank the authors for explaining their position on the questions raised in my review.\\n\\nHowever, my second major concern -- about the choice of environments for evaluating the method -- remains not fully addressed. To address it, I would suggest that the authors evaluate the method on more Atari environments. In particular, I would like to see results on the 6 established hard exploration environments: Gravitar, Montezuma Revenge, Pitfall!, PrivateEye, Solaris, Venture (according to Bellemare et al \\\"Unifying count-based exploration and intrinsic motivation\\\"). It would be great to see how the method performs in no-reward as well as original sparse-reward setting for those environments.\\n\\n\\\"From our perspective, carrying out experiments on tailored environments is not evil. Every method has its own niche. We do believe different types of intrinsic rewards have their best fit for different scenarios, and it is difficult to find one approach being suitable for every situation. As a result, a single and unified algorithm should not be the ultimate goal of research, and is absolutely not our primary purpose and original intention.\\\" -- still, demonstrating how a method works on a wider range of environments is the key for understanding the advantages and disadvantages of the method. Even if the method does not perform well, there is a big difference between deteriorating performance slightly or greatly.\"}",
"{\"title\": \"Official Blind Review #2\", \"comment\": \"Dear Authors\\nI appreciate your clear response. I also appreciate your effort in incorporating FICM and the importance of motion in the exploration of RL agents. I will discuss further with other reviewers and the AC, and hopefully, reassess my evaluation accordingly. \\n\\nCheers,\\nRev#2\\n\\n\\n(I think it would be in general helpful to incorporate your detailed response in your paper to make it more accessible (it might be quite challenging to do so due to short paper style of conferences). In order to evaluate your paper, I needed to read quite a few other papers in optical flow. It was totally fine to do that, I am not put it as a complaint and the lack of a detailed background did not affect my evaluation of your paper. )\"}",
"{\"title\": \"Response to Reviewer #2 (part 1/3)\", \"comment\": \"The authors appreciate the reviewer\\u2019s time and efforts for reviewing this paper, and would like to respond to the questions in the following paragraphs.\\n\\n[Comment]\\nThe authors should elaborate more on optical flow problem, Flownet, warping approach, and the term \\u201cattention area\\u201d.\\n[Response]\\nWe appreciate the reviewer\\u2019s thoughtful feedback. We agree with the reviewer and have prepared additional paragraphs at the end of this response post, including the background materials for optical flow, FlowNet, warping approach, as well as attention area. We would be glad to incorporate those paragraphs into our manuscript, and discuss with you should you have any further comments or suggestions regarding the sufficiency of the background material.\\n\\n[Comment]\\nIt would be helpful to have a better evaluation of this paper if the authors could clarify and motivate the choice of games in their empirical study. For example the empirical study in Figure 5.\\n[Response]\\nWe would like to thank the reviewer for raising this question, and are glad to share our perspectives with the reviewer. The selection criteria of our environments is determined by the relevance of motions of the foreground and background components (including the controllable agent and the uncontrollable objects) to the performance (i.e., obtainable scores) of the agent. As the primary theme of this work is to leverage flow features as intrinsic reward signals, we benchmarked our methodology on Atari and Super Mario Bros game environments characterizing sophisticated motions of objects. Taking the Atari game \\u201cEnduro\\u201d for example. The agent not only has to understand the motion of its controllable car, but is also required to perceive and comprehend the motions of the other cars, as their motions are directly related to the final score of this agent. BeamRider, on the other hand, is not considered as an environment satisfying the above property. According to our experiments, our method does assist the agents to explore better and deliver more satisfactory results in the environments satisfying the above criteria. As a result, instead of focusing on those hard-explored environments, the emphasis of this paper is on bringing to the community the existence and effectiveness of flow-based intrinsic rewards, and motivating researchers with a potential direction in their future endeavors. We have therefore dedicated significant portions of our manuscript to demonstrating and validating that FICM is able to master the environments featuring the above property, and is more effective than other intrinsic motivated approaches when motion features play a vital role in determining the performance of the agents.\\n\\nMoreover, as the necessity of taking complex motion features into account during the exploration phase of an agent becomes critically important for first-person perspective games, we benchmarked the proposed FICM on ViZDoom, and showed that FICM is naturally more capable of capturing motion features than the baseline methods in Section 4 of our manuscript. 
As human beings and animals inherently tend to be motivated, attracted, and encouraged by moving objects, we consider that our approach aligns with animal instinct, and believe that our work brings a different perspective to the reinforcement learning community.\\n\\nFurthermore, in order to provide a balanced analysis of FICM as a complete and comprehensive study, we additionally conducted another set of experiments on \\u201cBeamRider\\u201d to reveal the limitation of FICM and discussed its applicable domains in Section 4.3. Based on the motivations discussed above, we consider that the flow-based intrinsic reward is worth sharing with the community in ICLR. FICM contributes to the concept of employing flow prediction errors to generate intrinsic rewards, which has never been discussed in the literature before. Rather than finding a panacea for RL exploration, we consider that introducing different perspectives of intrinsic rewards to the existing set of approaches is more likely the correct way to proceed.\\n\\nWe hope that the above discussions have adequately responded to the reviewer\\u2019s concerns, and hope that the reviewer can take our perspective into consideration.\"}",
"{\"title\": \"Response to Reviewer #2 (part 2/3)\", \"comment\": \"[Comment]\\nIt would also be useful to explicitly explain the advances of this approach over the next frames approaches in stochastic environments. And also, if there is a shortcoming, what are those?\\n[Response]\\nThanks for raising this interesting question. We would like to address the reviewer\\u2019s concern in two different aspects.\\n\\nFirst, we assume that the stochastic environments mentioned by the reviewer correspond to those in which their state transitions are stochastic. In other words, each state transition is associated with a probability, not totally determined by the action performed by the agent. In such a case, we expect that FICM would still be able to learn and generate meaningful intrinsic rewards from the observations, as it does not require the actions performed by the agent for generating intrinsic rewards. What FICM requires, as discussed in Section 3 of our manuscript, are the current observation and the next observation of the agent. As a result, we believe that FICM would demonstrate robustness to stochastic environments. The determining factor of the agent\\u2019s performance in such environments would thus greatly rely on the underlying DRL method for learning the policy. On the contrary, ICM would probably not be able to deliver satisfactory performance for such environments. As the state transitions are unpredictable, the intrinsic curiosity modules have no clue to learn the state transition dynamics from the current observation and the action performed by the agent, thereby might cause large prediction errors. Therefore, poor performance caused by the stochastic environment might still be inevitable.\\n\\nSecond, we do have an analysis and discussion regarding the limitations of optical flow. This is why we incorporated additional paragraphs in Section 4.3 for discussing the applicable domains of FICM as a balanced discussion. It is not our paper\\u2019s objective to claim or argue that optical flow is omnipotent. Optical flow suffers from occlusions or textureless images, which have already been prevalently recognized by researchers in the domain of computer vision. However, it is still widely adopted in numerous researches as an effective tool for extracting information between consecutive frames. Our research similarly intends to leverage this tool in the domain of reinforcement learning. To validate that the prediction errors from an optical flow estimator can indeed serve as a satisfactory novelty indicator, we presented an experiment in Fig. 2 with a discussion to demonstrate that the prediction errors do gradually decrease over training iterations. This implies that FICM is able to learn and gradually become familiar with the transitions and the motions between consecutive observations.\\n\\n[Comment]\\nWhat do the authors think would happen when the action directly does not change the scene, at least immediately?\\n[Response]\\nWe would like to thank the reviewer for raising this interesting question. For the scenario mentioned by the reviewer, FICM would generate few intrinsic rewards under such a circumstance, as the transition between the current state and next state is negligible. The agent would therefore be motivated to explore other states. 
However, if unfamiliar uncontrollable moving objects suddenly appear in the current observation of the agent, FICM would generate intrinsic rewards to encourage the agent to explore the current state more.\\n\\n[Comment]\\nTypos and rephrasing suggestions.\\n[Response]\\nThe authors sincerely appreciate the reviewer\\u2019s kindness for pointing out typos and providing constructive rephrasing suggestions (e.g., the \\u201caim\\u201d issue). We will definitely revise the manuscript according to the suggestions in our final version.\"}",
"{\"title\": \"Response to Reviewer #2 (part 3/3)\", \"comment\": \"===Background materials===\\n[Optical flow estimation]\\nOptical flow estimation is a technique to evaluate the motion of objects between consecutive images. In usual cases, a reference image and a target image are required. The optical flow is represented as a vector field, where displacement vectors are assigned to certain pixels of the reference image. These vectors represent where those pixels can be found in the target image.\\n\\nIn recent years, a number of deep learning approaches running on GPUs dealing with large displacement issues of optical flow estimation have been proposed [1-3]. FlowNet [1] was the pioneer of constructing Convolution Neural Network (CNN) to solve optical flow estimation problem as a supervised task. The author proposed a correlation layer that provides matching capabilities. FlowNet 2.0 [2], an upgraded version of FlowNet, improves the performance in both quality and speed. They adopt a stacked architecture with the auxiliary path to refine intermediate optical flow, and introduce a warping operation which can compensate for some already estimated preliminary motion in the target image. Furthermore, they elaborate on small displacements by introducing a sub-network specializing in small motions. In this paper, we use a simplified version of FlowNet 2.0 to generate optical flow. For more details definition and computation of warping function, we recommend the reviewer can refer to the supplementary materials as provided in [2].\\n\\n[Attention area]\\nThe visualization method proposed in [4] is able to visualize the part on which the agent concentrates on current observation. It first selects a region from the original observation and blurs it into a perturbed one. Then, the perturbed observation would be fed to the agent to generate a probability distribution of action to be taken. A score of the importance of the selected region is calculated on the difference between this distribution and the original distribution based on the unperturbed observation. At last, the region with a higher score in observation is colored more brightly.\\n\\n[1] P. Fischer, A. Dosovitskiy, and E. IlgA. et al. FlowNet: Learning optical flow with convolutional networks. In Proc. IEEE Int. Conf. Computer Vision (ICCV), pp. 2758\\u20132766, May 2015.\\n[2] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1647\\u20131655, Dec. 2017.\\n[3] Samuel Schulter, Paul Vernaza, Wongun Choi, and Manmohan Krishna Chandraker. Deep network flow for multi-object tracking. Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 2730\\u20132739, Jun. 2017.\\n[4] S. Greydanus, A. Koul, J. Dodge, and A. Fern. Visualizing and understanding atari agents. In Int. Conf. Machine Learning (ICML), pp. 1787\\u20131796, Jun. 2018.\"}",
"{\"title\": \"Response to Reviewer #3 (part 1/3)\", \"comment\": \"The authors appreciate the thoughtful feedback from the reviewer and would like to respond to the questions in the following paragraphs. Please note that we first address the two major concerns from the reviewer, which also cover our responses to a few questions raised in the detailed comments part provided by the reviewer. Then, we respond to the remaining questions raised in the detailed comments part.\\n\\n===To address two major concerns===\\n[Comment]\\nBetter motivating the approach in the paper would help. Why using the flow prediction error as a curiosity signal?\\n[Response]\\nWe appreciate the time and efforts of the reviewer to read through the paper thoroughly. As the reviewer has raised concerns about the motivations of FICM, we would definitely love to share our perspectives with the reviewer and expect a rigorous discussion afterward.\\n\\n\\uff37e believe that rapidly changing parts in two consecutive frames, i.e., motion features extracted by a flow predictor, do usually serve as an important indicator of information in an environment. As depicted in Fig. 1, the motions of Mario and the fire traps contain essential information for the agent to perform well in SuperMario Bros. Biologically, human beings and animals also tend to concentrate on motion features of objects. For instance, animals may not be able to memorize the exact appearance of the objects in their habitats, but do posses the capability to discover whether or not unfamiliar newcomers have intruded into their territories. It is a natural instinct that arouses an animal\\u2019s curiosity from motions of unfamiliar feature patterns appearing in its field of view. Our FICM is therefore inspired by the observations mentioned above, and is designed to focus on motion features of objects extracted from two consecutive frames by adopting optical flow estimation for evaluating the novelty of the frames.\\n\\nWe do agree with the reviewer\\u2019s concern about the limitations of optical flow. This is why we incorporated additional paragraphs in Section 4.3 for discussing the applicable domains of FICM as a balanced discussion. It is not our paper\\u2019s objective to claim or argue that optical flow is omnipotent. Optical flow suffers from occlusions or textureless images, which have already been prevalently recognized by researchers in the domain of computer vision. However, it is still widely adopted in numerous researches as an effective tool for extracting information between consecutive frames. Our research similarly intends to leverage this tool in the domain of reinforcement learning. To validate that the prediction errors from an optical flow estimator can indeed serve as a satisfactory novelty indicator, we presented an experiment in Fig. 2 with a discussion to demonstrate that the prediction errors do gradually decrease over training iterations. This implies that FICM is able to learn and gradually become familiar with the transitions and the motions between consecutive observations in spite of those potential problems.\\n\\nBased on the motivations discussed above, we consider that the flow-based intrinsic reward is worth sharing with the community in ICLR. FICM contributes to the concept of employing flow prediction errors to generate intrinsic rewards, which has never been discussed in the literature before. 
The concept is proposed to bring new insights to the research community, and provide a potential direction for future enhancements in the realm of intrinsic reward based exploration.\"}",
"{\"title\": \"Response to Reviewer #3 (part 2/3)\", \"comment\": \"[Comment]\\nThe choice of tasks seems not well-motivated and rather crafted for the proposed methods.\\n[Response]\\nWe understand the reviewer\\u2019s concerns. However, we do have different perspectives on this issue and would be glad to discuss our points of view with the reviewer in the following two aspects.\\n\\nFirst, the selection criteria of our environments is determined by the relevance of motions of the foreground and background components (including the controllable agent and the uncontrollable objects) to the performance (i.e., obtainable scores) of the agent. Taking the Atari game \\u201cEnduro\\u201d for example. The agent not only has to understand the motion of its controllable car, but is also required to perceive and comprehend the motions of the other cars, as their motions are directly related to the final score of this agent. BeamRider, on the other hand, is not considered as an environment satisfying the above property. Instead of focusing on those hard-explored environments, the main emphasis of this paper is on bringing to the community the existence and effectiveness of flow-based intrinsic rewards, and motivating researchers with a potential direction in their future endeavors. As a result, we have dedicated significant portions of our manuscript to demonstrating and validating that FICM is able to master the environments featuring the above property, and is more effective than other intrinsic motivated approaches when motion features play a vital role in determining the performance of the agents.\\n\\nSecond, even though we did not present experiments for all Atari games, we do believe that our current experiments sufficiently explain and demonstrate the effectiveness of our method. From our perspective, carrying out experiments on tailored environments is not evil. Every method has its own niche. We do believe different types of intrinsic rewards have their best fit for different scenarios, and it is difficult to find one approach being suitable for every situation. As a result, a single and unified algorithm should not be the ultimate goal of research, and is absolutely not our primary purpose and original intention. For example, although RND delivers superior performance in \\u201cMontezuma\\u2019s Revenge\\u201d, it performs poorly in the experiments presented in Fig. 5 of our paper. On the contrary, our method does assist the agents to explore better and deliver more satisfactory results in the environments satisfying the above criteria. In order to provide a balanced viewpoint of FICM, we further conducted another set of experiments on \\u201cBeamRider\\u201d to reveal the limitation of FICM and discussed its applicable domains in Section 4.3. Therefore, rather than finding a panacea for RL exploration, we consider that introducing different perspectives of intrinsic rewards to the existing set of approaches is more likely the correct way to proceed.\\n\\nWe hope that the above discussions have adequately responded to the reviewer\\u2019s concerns, and hope that the reviewer can take our perspectives into consideration.\\n\\n===To respond to the reviewer\\u2019s detailed comments===\\n[Comment]\\nIf you omit the rewards the question remains how to select hyperparameters of your method. Was the game reward used for selecting hyperparameters? 
If not, what is the protocol for their selection?\\n[Response]\\nWe would like to bring to the reviewer\\u2019s attention that the game rewards are not used to select hyperparameters for either the agent or the intrinsic module. The hyperparameters of the agents are aligned with those of the baselines in each experiment for fair comparisons. Please note that we did not select the hyperparameters by any specific protocol - we just use the same ones as the baselines. Our hyperparameters are provided in our supplementary material. If you are interested, we have already uploaded our source codes as well as the demonstration videos to the following sites. Our experimental results and statements presented in the manuscript are fully reproducible and verifiable.\", \"github\": \"https://github.com/IclrPaperID2276/iclr_paper_2276\", \"demo_video\": \"https://youtu.be/JL68QFNj_N8\\n\\n[Comment]\\nWhy are different solvers used for different tasks in this paper? PPO is normally significantly better than A3C. Why isn\\u2019t it used throughout the whole paper?\\n[Response]\\nWe would like to thank the reviewer for raising this question. Since we intended to reproduce the results of [2] and compare with them, we directly executed their officially released open-source codes, where the solver is A3C. We only replaced their intrinsic module by our own method for a fair comparison. The same situation applies to our comparisons with [1], where the solver is PPO.\"}",
"{\"title\": \"Response to Reviewer #3 (part 3/3)\", \"comment\": \"[Comment]\\nBut would it then be susceptible to spurious curiosity effects when the agent is drawn to motion of unrelated things? Like leaves trembling in the wind. ICM was proposed to eliminate those effects in the first place, but what is this paper\\u2019s solution to that problem? Furthermore, the experiments on BeamRider show that this concern is not a theoretical one but quite practical.\\n[Response]\\nWe would like to thank the reviewer for raising the question about \\u201cspurious curiosity\\u201d. Conventionally, researchers believe that uncontrollable parts (e.g., trembling leaves) in the environment cause spurious curiosity which may mislead an agent\\u2019s exploration. Researchers in the past few years have spent tremendous efforts on eliminating such impacts. However, we argue that spurious curiosity is not always caused by uncontrollable parts from an agent\\u2019s observations, and not removing them should not be a weakness. In fact, uncontrollable parts sometimes play key roles for effective exploration.\\n\\nUncontrollable parts are crucial for success in several games in which other objects\\u2019 behaviors are related to the agent\\u2019s score. For example, in \\u201cEnduro\\u201d, comprehending the other cars\\u2019 motions is the key to learn a good driving policy. Knowing more about their policies helps the agent make better decisions. However, filtering out uncontrollable parts, as ICM does, prohibits an effective exploration of the others\\u2019 acts. This is because the uncontrollable movements of the others might be ignored by ICM. As opposed to ICM, our method preserves the other objects\\u2019 motions, enabling effective exploration in games that require the involvement of them. It is worth noticing that in Fig. 5, our method outperforms ICM in \\u201cEnduro\\u201d by a drastic margin.\\n\\nOn the other hand, uncontrollable parts do hinder the performance of our method in some cases like \\u201cBeamRider\\u201d. In this game, constantly rolling decorated beams in the game screen are not related to the agent\\u2019s scores. Endlessly pursuing curiosity produced by those beams could mislead the exploration direction and thus might result in poor performance. In such a case, filtering out uncontrollable parts could be an answer since focusing on the agent\\u2019s motion is the key to success in this game.\\n\\nTo conclude, we believe that removing uncontrollable parts is not a panacea for all scenarios. In fact, whether or not eliminating those uncontrollable is problem-dependent and a tradeoff when designing intrinsic rewards.\\n\\n[Minor concerns] \\nThe authors sincerely appreciate the reviewer's kindness for pointing out our typos (e.g., \\\"5 instead of 8\\\" and \\\"sparse\\\") and readability issues, and providing constructive formatting and rephrasing suggestions. We will definitely revise the manuscript according to the suggestions in our final version.\\n\\n[1] Y. Burda, H. Edwards, D. Pathak, A. J. Storkey, T. Darrell, and A. A. Efros. Large-scale study of curiosity-driven learning. In Proc. Int. Conf. Learning Representation (ICLR), May 2019a.\\n[2] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In Proc. Int. Conf. Machine Learning (ICML), pp. 2778\\u20132787, May 2017.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"The authors appreciate the reviewer\\u2019s time and efforts for reviewing this paper and would like to respond to the questions in the following paragraphs.\\n\\n[Comment]\\nCompare FICM against simpler exploration baselines such as epsilon-greedy or entropy regularization.\\n[Response]\\nWe would like to thank the reviewer for raising this interesting question, and would like to bring to the reviewer's kind attention that in the original paper of our baseline \\\"ICM\\\" [1], the authors had provided a comparison against an \\u2018A3C\\u2019 baseline (using entropy regularization) with epsilon-greedy exploration method (Section 3 of [1]). According to the experimental results presented in Section 4 of [1], it has been demonstrated that ICM is superior to that baseline in a number of environments. This is the reason why we omit that baseline in our paper. As our primary interest and focus is prediction-based exploration methods using intrinsic reward signals (as discussed in Section 1 of our paper), we only compare our FICM with ICM [1], RND [2] and large-scale [3], concentrating on analyzing the pros and cons between our proposed method and the other prediction-based ones.\\n\\nHowever, we would still be glad to include additional comparisons against the suggested methods in the final version of our paper, if the reviewer considers that is informative for the readers to comprehend the paper.\\n\\n[Comment]\\nMore extensive comparisons between FICM and ICM across different datasets, for example, Super Mario Bros. and the Atari games, instead of only comparing FICM against ICM on ViZDoom.\\n[Response]\\nWe appreciate the suggestions from the reviewer and would like to share with the reviewer our additional experimental results of ICM using the same hyper-parameter settings described in Section 4.1 in the following figure. (figure link: https://imgur.com/5pPl8PV )\\n \\nIt is observed that ICM is only able to deliver comparable performance to our method in Atari game \\\"Seaquest\\\". We would definitely be glad to incorporate these new results in our manuscript in the revised version. \\n\\n[Comment]\\nReproducibility.\\n[Response]\\nThank you very much for the suggestions. We have already uploaded our source codes as well as the demonstration videos to the following sites. Our experimental results and statements presented in the manuscript are fully reproducible and verifiable.\", \"github\": \"https://github.com/IclrPaperID2276/iclr_paper_2276\", \"demo_video\": \"https://youtu.be/JL68QFNj_N8\\n\\nWe hope that we have adequately responded to your questions, and would be very glad to discuss with you if you have any further comments or suggestions.\\n\\n[1] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In Proc. Int. Conf. Machine Learning (ICML), pp. 2778\\u20132787, May 2017.\\n[2] Y. Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation. In Proc. Int. Conf. Learning Representations (ICLR), May 2019b.\\n[3] Y. Burda, H. Edwards, D. Pathak, A. J. Storkey, T. Darrell, and A. A. Efros. Large-scale study of curiosity-driven learning. In Proc. Int. Conf. Learning Representation (ICLR), May 2019a.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a novel way to formulate intrinsic reward based on optical flow prediction error. The prediction is done with Flownet-v2 architecture and the training is formulated as self-supervision (instead of the ground-truth-based supervised learning in the original Flownet-v2 paper). The flow predictor takes two frames, predicts forward and backward flows, then warps the first/second frame respectively and compares the warped result with real frame. The comparison error serves as the intrinsic reward signal. The results are demonstrated on 7 environments: SuperMario + 5 Atari games + ViZDoom. On those environments, the proposed method performs better or on-par with ICM and RND baselines.\\n\\nI am leaning towards rejecting this paper. Two key factors motivate this decision. \\nFirst, the motivation for this work is not fully clear: why would the error in flow prediction be a good driving force for curiosity? Optical flow has certain weaknesses, e.g. might not work well for textureless regions because it's hard to find a match. Why would those weaknesses drive the agent to new locations? \\nSecond, the choice of tasks where the largest improvement is shown (i.e. 5 Atari games) seems not well-motivated and rather crafted for the proposed method. Those 5 Atari games are not established hard exploration games.\", \"detailed_arguments_for_the_decision_above\": [\"[major concerns]\", \"Analysis is need on how the method deals with known optical flow problems: occlusion, large displacements, matching ambiguities. Those problems don't fully go away with learning and it is unclear how correlated corresponding errors would be with state novelty.\", \"\\\"Please note that ri is independent of the action taken by the agent, which distinguishes FICM from the intrinsic curiosity module (ICM) proposed in Pathak et al. (2017)\\\" - but would it then be susceptible to spurious curiosity effects when the agent is drawn to motion of unrelated things? Like leaves trembling in the wind. ICM was proposed to eliminate those effects in the first place, but what is this paper's solution to that problem? Furthermore, the experiments on BeamRider show that this concern is not a theoretical one but quite practical.\", \"\\\"CrazyClimber, Enduro, KungFuMaster, Seaquest, and Skiing\\\" - none of those Atari environments are known to be hard exploration games (which are normally Gravitar, Montezuma Revenge, Pitfall!, PrivateEye, Solaris, Venture according to Bellemare et al \\\"Unifying count-based exploration and intrinsic motivation\\\"). I understand that every game becomes hard-exploration if the rewards are omitted but then there is a question why those particular games. Moreover, if you omit the rewards the question remains how to select hyperparameters of your method. Was the game reward used for selecting hyperparameters? If not, what is the protocol for their selection? 
This is a very important question and I hope the authors will address this.\", \"\\\"These games are characterized by moving objects that require the agents to concentrate on and interact with.\\\" - this looks like tailoring the task to suit the method.\", \"Figure 6 - those results are not great compared to the results of Episodic Curiosity: https://arxiv.org/abs/1810.02274 . Maybe this is because of the basic RL solver (A3C vs PPO) but that brings up another question: why are different solvers used for different tasks in this paper? PPO is normally significantly better than A3C, why not use throughout the whole paper?\", \"[minor concerns]\", \"Figures are very small and the font in them is not readable. Figure 2 is especially difficult to read because the axes titles are tiny.\", \"\\\"complex or spare reward\\\" -> sparse\", \"\\\"However, RND does not consider motion features, which are essential in motivating an agent for exploration.\\\" - this is unclear, why are those features essential?\", \"\\\"We demonstrated the proposed methodology and compared it against a number of baselines on Atari games, Super Mario Bros., and ViZDoom.\\\" - please state more clearly that only 5 out of 57 Atari games are considered, here and in the abstract.\", \"\\\"Best extrinsic returns on eight Atari games and Super Mario Bros.\\\" - but only 5 games are shown, where are the other 3?\"], \"suggestions_on_improving_the_paper\": \"1) Better motivating the approach in the paper would help. Why using the flow prediction error as a curiosity signal?\\n2) Better motivating the choice of the environments and conducting experiments on more environments would be important for evaluating the impact of the paper.\"}",
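As an illustration of the reward computation this review describes — predict forward and backward flow between two consecutive frames, warp each frame with the corresponding flow, and use the reconstruction error as the intrinsic reward — a minimal sketch follows. This is not the authors' implementation: `flow_net` is a hypothetical two-headed flow predictor, and the warping direction convention is an assumption.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Bilinearly sample `frame` at pixel locations displaced by `flow`."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W))
    base = torch.stack((xs, ys)).float().to(frame.device)   # (2, H, W) pixel grid
    coords = base.unsqueeze(0) + flow                       # (B, 2, H, W)
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0                 # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                    # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def intrinsic_reward(obs_t, obs_t1, flow_net):
    """Flow-reconstruction error between consecutive observations."""
    flow_fwd, flow_bwd = flow_net(obs_t, obs_t1)  # assumed forward/backward heads
    recon_t1 = warp(obs_t, flow_bwd)              # reconstruct frame t+1 from t
    recon_t = warp(obs_t1, flow_fwd)              # reconstruct frame t from t+1
    err = F.mse_loss(recon_t1, obs_t1) + F.mse_loss(recon_t, obs_t)
    return err.detach()                           # scalar curiosity signal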
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Well motivated paper\\n\\nThe authors study the problem of exploration and exploitation in deep reinforcement learning. The authors propose a new intrinsic curiosity-based method that deploys the methods developed in optical flow. Following this algorithm, the agents utilize the reconstruction error in the optical flow network to come up with intrinsic rewards. The authors show that this approach boosts up the behavior of the RL agents and improves the performance on a set of test environments. \\n\\nA few comments that I hope might help the authors to improve the clarity of their paper. \\n\\n1) While the paper is nicely written, I would encourage the authors, of course, if they think necessary, to make the paper slightly more self-contained by explaining the optical flow problem, FlowNet, and warping approach. While a cruise reader might be required to either know literature in optical flow or go and study them along with this paper, it might be helpful for a bit more general readers to have these tools and approaches in access.\\n\\n2) Regarding the first line of introduction, I would recommend to rephrase it to one imply that the mentioned \\\"aim\\\" is one of the aims of the DRL study. \\n\\n3) In the fourth line of the intro, the authors mention that the current DRL methods are \\\"constraint\\\" to dense reward. I believe the authors' aim was to imply that these methods perform more desirably in dense reward settings rather than being constrained to such settings.\\n\\n4) I would also recommend to the authors to elaborate more on the term \\\"attention area\\\" Greydanue et al 2018.\\n\\n5) It would be helpful to have a better evaluation of this paper if the authors could clarify and motivate the choice of games in their empirical study. For example the empirical study in Fig 5.\\n\\n\\n6) While I find this study interesting and valuable, the novelty of the approach might fall short to be published at a conference like ICLR with a low acceptance rate. This does not mean that there is anything unscientific about this paper, in fact, the scientific value of this work is appreciated and this work adds a lot to the community. \\n\\n7) It would also be useful to explicitly explain the advances of this approach over the next frame predictions approaches in stochastic environments. And also, if there is a shortcoming, what are those.\\n\\n8) Also, what the authors think would happen when the action directly does not change the scene, at least immediately.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Pros\\nSolid technical innovation/contribution: \\n- The paper proposed a novel method FICM that bridged the intrinsic reward in DRL with optical flow loss in CV to encourage exploration in an environment with sparse rewards. To the best of my knowledge, this was the first paper proposed to use moving patterns in two consecutive observations to motivate agent exploration.\", \"balanced_view\": [\"The authors discussed both the advantages of FICM and settings that FICM might fail to perform well, and conducted experiments to better help the readers understand such nuances. Such balanced view should be valuable to RL communities in both academia and industry.\"], \"clarity\": \"- In general this was a very well-written paper, I had no difficulty in following the paper throughout. The proposed method (FICM) was clearly motivated, and the authors provided good coverage of related works. Notably, the authors reviewed two relevant methods upon which FICM was motivated, which made the paper self-contained.\\n\\n\\nCons\", \"experiments\": [\"Experiments were conducted only using a few recent results as baselines (ICM, forward dynamics, RND). It would be interesting to compare FICM against simpler exploration baselines such as epsilon-greedy or entropy regularization.\", \"I\\u2019d also like to see more extensive comparisons between FICM and ICM across different datasets, for example, Super Mario Bros. and the Atari games, instead of only comparing FICM against ICM on ViZDoom.\"], \"significance_of_the_innovation\": [\"The proposed exploration method seemed to be applicable with a particular RL setting: the environment changes could be represented through consecutive frames (e.g., video games), and optical flow could be used to interpret any object displacements in such consecutive frames. And as the authors discussed, even under such constraints the applicability of proposed method depends on how much changes of the environment were relevant to the goal.\"], \"reproducibility\": \"- Although the authors discussed the experiment setting in detail in supplements, I believe open-sourcing the code / software used to conduct the experiments would be greatly help with the reproducibility of the proposed method for researchers or practitioners.\\n\\n\\n\\n\\nSummary\\nA good paper overall, but the experiments were relatively weak (common for most ICLR submissions) and the novelty was somewhat limited.\"}"
]
} |
BJgZBxBYPB | Learning Underlying Physical Properties From Observations For Trajectory Prediction | [
"Ekaterina Nikonova",
"Jochen Renz"
] | In this work we present an approach that combines deep learning with the
laws of Newton’s physics for accurate trajectory predictions in physical games.
Our model learns to estimate the physical properties and forces that generated the given
observations, learns the relationships between the player’s available actions and the estimated
physical properties and uses these extracted forces for predictions. We
show the advantages of using physical laws together with deep learning by evaluating
it against two baseline models that automatically discover features from
the data without such knowledge. We evaluate our model’s ability to extract
physical properties and to generalize to unseen trajectories in two games with a
shooting mechanism. We also evaluate our model’s ability to transfer learned
knowledge from a 2D game to predictions in a 3D game with similar physics.
We show that by using physical laws together with deep learning we achieve better
human-interpretability of learned physical properties, transfer of knowledge to
a game with similar physics, and very accurate predictions for previously unseen
data.
"Physical Games",
"Deep Learning",
"Physical Reasoning",
"Transfer of Knowledge"
] | Reject | https://openreview.net/pdf?id=BJgZBxBYPB | https://openreview.net/forum?id=BJgZBxBYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"wRD_LEFtNX",
"HyeH-orssB",
"H1gA0k4usB",
"Sylmf3musH",
"BJeAPiQOoS",
"S1ecWacacH",
"HkxD3tnBKH",
"ryxaWOnSKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744998,
1573767933226,
1573564373771,
1573563403477,
1573563237985,
1572871425977,
1571305903148,
1571305477021
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2275/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2275/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2275/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2275/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2275/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2275/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2275/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper aims to estimate the parameters of a projectile physical equation from a small number of trajectory observations in two computer games. The authors demonstrate that their method works, and that the learnt model generalises from one game to another. However, the reviewers had concerns about the simplicity of the tasks, the longer term value of the proposed method to the research community, and the writing of the paper. During the discussion period, the authors were able to address some of these questions, however many other points were left unanswered, and the authors did not modify the paper to reflect the reviewers\\u2019 feedback. Hence, in the current state this paper appears more suitable for a workshop rather than a conference, and I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"After rebuttal\", \"comment\": \"Thank you for this reponse, but it addresses only a single one of my questions, and it is not really an answer to my concerns. No other question was addressed.\\n\\nI will keep my rating.\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"We would like to thank the reviewer for their review and comments. We have taken into account remarks on the figures and the structure of the paper and will fix them in our revision.\\n\\n\\n\\u201cIn order words, I am not really sure what kind of scientific problem is solved by this work, and how this knowledge can help us to solve other problems, harder problems.\\u201d\\n\\n\\nOne of the goals of our paper was to design a network that would learn to predict the trajectory of a shot in games that follow Newton\\u2019s Physics when it has no direct access to the physics engine of the game. As an example task, we picked Science Birds - a clone of a popular game Angry Birds which requires player to shoot a bird from a slingshot. Predicting a trajectory directly from observations in this case is a non trivial task as it requires the agent to understand the relationship between the relative position of a bird and shot trajectory. Our method then can be combined with playing agents that would use trajectory prediction in making decisions. This is actually one of the problems agents face in the Angry Birds AI competition where they only get screenshots of the game but don\\u2019t have access to the physics engine and don\\u2019t know the physics parameters. \\n\\n\\nSimilar methods can be used for many other situations/problems where we know the physics equations that apply, but don\\u2019t know the required physics parameters of objects and the environment. As such, we believe that our work is important for solving these kinds of problems and also an important new approach. In our work we showed that rather than learning already known physics equations, we can learn the required physics parameters from the observations and use them successfully in predictions.\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"We would like to thank the reviewer for their remarks and comments. We would like to provide an answer to the questions raised in your review:\\n\\n\\\"- What is a shooting mechanism? Is it something scientifically defined? otherwise, explain better: the parabolic trajectory is I guess more explanatory of what you are doing.\\\"\\n\\nShooting mechanism can be defined as any game object that launches another game object with some initial velocity and can be controlled by a player.\\n\\n\\u201c- What is the goal of your work? Estimating the parameters of an equation used in a game is not really interesting.. as we have to know the equation, it has to be simple, we have to extract the trajectory from the game... but there might be other related applications that could motivate this work.\\u201d\\n\\n\\nAs described in the comment for reviewer #4, estimating the parameters despite knowing the equation is the essential task we face whenever we want to predict consequences of physical actions. The goal of our work is to learn to predict the trajectories of the shot given the observations. In order to achieve it, we have designed a network that learns physical properties from the observed trajectories without any insight on the values of these properties. In the concrete example we consider, such a module could then be combined with playing agents in order to play games that follow Newton\\u2019s physics and have a shooting mechanism that \\u201ccreates\\u201d trajectories.\\n\\n\\\"- Do games always follow the physical laws? From my knowledge, some games change the physical laws in order to be more pretty, ect. in that case, your method will not work?\\\"\\n\\nIn this work we are focusing on solving the trajectory prediction task in games that do follow Newton's laws of motion, the general solution for the cases where these laws do not apply is not a focus of this paper and is left to the future work.\\n\\n\\\"- More generally, can you explain the difference for you between physical laws and physical properties?\\\"\\n\\nPhysical properties is something that can be measured by observation, physical law is something that was derived by scientific experiments and can be used to predict physical behavior if certain preconditions apply.\\n\\n\\\"- Explain what is 'MLP'; how many layers, what size, how did you defined it, ect.\\\"\\n\\n'MLP' or in another words multilayered perceptron is a feedforward artificial neural network. We provide the details on architecture of the used MLPs in Appendix.\\n\\n\\\"- Why the RelateNet has 2 distinct MLPs while InferNet uses the same for inferring V0 and Theta?\\\"\\n\\nBy our experiments we have determined that using 2 distinct MLPs in RelateNet showed better results than using a single MLP as in InferNet. \\n\\n\\\"- How did you extract the trajectories, ect. from the games?\\\"\\n\\nThe trajectories of objects were extracted directly from the physics engine of the game. In order to do so, we were tracking launched object throughout time starting from the moment in which the object was launched and until the moment in which object hits the ground. At each time step we were recording the position of a the launched object which resulted in a sequence of points. 
The resulted sequences were then padded with zeros or cut to the desired size.\\n\\n\\\"- Figure 6: How come in the two last images, we see different starting points?\\\"\\n\\nThe goal of the Baseline Models 1 and 2 was to predict the trajectory of the launched object. The last two images on Figure 6 demonstrate the inability of the two baseline models to predict the correct position of the object on y-axis (when applied to the Basketball dataset with no further training) even at the first time step (starting point in the graph).\"}",
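As a concrete illustration of the extraction procedure described in the response above — track the launched object from launch until it hits the ground, then pad or cut the recorded positions to a fixed length — a minimal sketch follows; the `game` interface is hypothetical and stands in for the game's physics engine.

```python
import numpy as np

def extract_trajectory(game, max_len):
    """Record the (x, y) position of the launched object at each time step."""
    points = []
    game.launch()
    while not game.object_hit_ground():
        points.append(game.object_position())  # (x, y) read from the engine
        game.step()
    traj = np.asarray(points, dtype=np.float32)
    if len(traj) < max_len:                    # pad short sequences with zeros
        pad = np.zeros((max_len - len(traj), 2), dtype=np.float32)
        traj = np.concatenate([traj, pad], axis=0)
    return traj[:max_len]                      # cut long sequences to size
```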
"{\"title\": \"Response to reviewer #4\", \"comment\": \"We would like to thank the reviewer for their constructive comments and review. We would like to provide an answer to the questions raised in your review:\\n\\n\\u201cIt is not easy for me to understand the use-case of the proposed method. In which real-world scenarios we would have the exact motion equation, and why given that we know such an equation we would want to learn a mapping from the relative position to a trajectory\\u201d\", \"the_use_case_of_our_method_is_the_following\": \"We know physics and we know the underlying physics formulas that apply when observing a physical action. What we don\\u2019t know is the required physics parameters of objects and the environment we observe, for example mass, friction, temperature, air pressure, air resistance, gravity, etc. So despite knowing the physics formulas that apply, we cannot predict consequences of actions.\\n\\n\\nIn the example we use in the paper, a playing agent has to predict the trajectory of the shot in a game that follows Newtonian Physics. The agent only has access to the images of the game and no access to the physics engine, that is, the agent knows the underlying physics formula of the trajectory, but does\\u2019t know the relevant physics parameters of the objects and the environment. Predicting a trajectory directly from observations in this case is a nontrivial task as it requires the agent to understand the relationship between the strength of a shot and it\\u2019s trajectory. While in the real world physics parameters like gravity are (roughly) known, in game worlds these can and do have arbitrary values. \\n\\n\\u201cIf I understand correctly, the trajectories (the input to InferNet) were generated with known G,V0, theta, and (the 3 latent variables of InferNet). It is not clear to me why the authors don\\u2019t use these for the MSE loss used to train InferNet (rather than using the projectile motion equation). \\u201c\\n\\n\\nThe goal of our paper was to avoid using the known values and to test the ability of the network to discover such values from observations. As described above, the motivation for this approach is that typically these values are unknown when one has no direct access to the \\u201cphysics engine\\u201d (as it is the case in the real world and in some games).\\n\\n\\n\\u201c- Is the \\u2018projectile motion equation\\u2019 missing from Fig.2-right; is it used for inference? Is G from InferNet also input to RelateNet?\\u201d\\n\\n\\nThe goal of the RelateNet is to learn to predict the two values V0 and theta directly from the given in-game variables such as relative position of the bird. In order to train RelateNet we use the values V0 and theta that were predicted by InferNet from the observed trajectories. G predicted by InferNet is used as input to the RelateNet.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": [\"This paper proposes an architecture that encodes a known physics motion equation of a trajectory of a moving object. The modeled equation has 3 variables and the network works in a latent space- contrary to taking raw images. It uses an auxiliary network (named InferNet) to train the final one used at inference time (named RelateNet). The former aims to reconstruct the input sequence of positions representing trajectory, and has intermediate 3 latent variables that correspond to the 3 variables of the modeled equation, while as decoder it uses the modeled known equation itself. The latter is a mapping from the relative position of the object to 2 latent variables of the former InferNet, and is trained with MSE loss. At inference, RelateNet takes as input the relative position of the object, predicts 2 variables of the equation and finally uses the motion equation to calculate the trajectory.\", \"It is not easy for me to understand the use-case of the proposed method. In which real-world scenarios we would have the exact motion equation, and why given that we know such an equation we would want to learn a mapping from the relative position to a trajectory. In other words, it would be much more useful to learn the projectile motion equation itself. How does the proposed method handle input sequences which do not follow equation 5? To use this method do we need to know in advance the exact motion equation and its relevant \\u2018in-game variables\\u2019? In which cases would the former hold and in which cases would the latter be easy to obtain from raw pixels? Could the authors elaborate on it?\", \"If I understand correctly, the trajectories (the input to InferNet) were generated with known $G$, $V_0$ and $\\\\theta$ (the 3 latent variables of InferNet). It is not clear to me why the authors don\\u2019t use these for the MSE loss used to train InferNet (rather than using the projectile motion equation).\", \"In my opinion, the introduction and related work sections do not reflect what is proposed in the paper. As an example, paragraph 2 of the introduction refers to use-cases where we would like to learn dynamics that govern certain motions directly from observations, whereas the proposed method uses extracted positions as input, and handcrafts the motion equation. The third paragraph of page 2 mentions agents failing to solve a game with Newtonian physics, whereas the method in this paper does not demonstrate empirically a way that this architecture could be used by an agent.\", \"Is the \\u2018projectile motion equation\\u2019 missing from Fig.2-right; is it used for inference? Is G from InferNet also input to RelateNet?\", \"In summary, in my opinion, the technical novelty of this paper is limited as it uses MLP mappings that in some sense aim at learning the inverse of the equation that generated the data. Moreover, after reading the paper the use-case of the proposed method is not clear to me and the writing is unclear (see examples above and below).\", \"\\u2014 Minor \\u2014\", \"The term \\u2018in-game variables\\u2019 is used in a few places and is explained later in the text (Pg.5). 
I think that It would be helpful if it is explained in more detail the first time it is mentioned.\", \"I don\\u2019t understand the second sentence of the abstract.\", \"Pg1: build a relationships -> build relationships.\", \"Pg2: I don\\u2019t understand what the authors mean by \\u2018clone of Angry Birds.\\u2019\", \"Pg3: is $f_{associate}$ trained jointly or afterwards?\", \"Pg4: was MSE the loss used for $f_{simple}$?\", \"It would help adding sub-captions in Fig. 6.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a method for predicting the trajectories of objects in video (or phone) games by learning the physical parameters underlying the movements. To do so, the authors used some multi-layer perceptrons on the (x,y) trajectory in order to i) estimate the physical values of the equation (equation supposed to be known, here a parabolic trajectory); and then ii) predict the trajectories from new initial conditions.\", \"general_comment\": \"While the links between physics and machine learning is clearly interesting and trendy, I found the paper unclear, not well motivated and I think the work is not enough for a paper in ICLR. Yet, I think the authors did spend some time and this work might be suited for a workshop.\", \"positive_aspects\": [\"the authors really tried to focus on games that are used today.\", \"the results are showing that they do learn the parameters they wanted, as the new trajectories are indeed working.\"], \"remarks_and_questions\": [\"the writing of the paper is not enough to make it clear, and a lot of sentences are not readable. Starting at the second sentence of the abstract. Try to make small sentences, and add all the pronouns 'a', 'the', .... Avoid the too numerous 'and' and cut the sentences.\", \"What is a shooting mechanism? Is it something scientifically defined? otherwise, explain better: the parabolic trajectory is I guess more explanatory of what you are doing.\", \"What is the goal of your work? Estimating the parameters of an equation used in a game is not really interesting.. as we have to know the equation, it has to be simple, we have to extract the trajectory from the game... but there might be other related applications that could motivate this work.\", \"Related work: I don't really see the novelty of your work. You are saying that the physical properties have to be learned from experience, but you are actually relying on a known equation, and just tuning the 3 parameters. Do games always follow the physical laws? From my knowledge, some games change the physical laws in order to be more pretty, ect. in that case, your method will not work?\", \"More generally, can you explain the difference for you between physical laws and physical properties?\", \"Explain what is 'MLP'; how many layers, what size, how did you defined it, ect.\", \"in 3.1.1. you are spending quite a long time in explaining what an autoencoder is; I think you can go faster on this.\", \"Why the RelateNet has 2 distinct MLPs while InferNet uses the same for inferring V0 and Theta?\", \"How did you extract the trajectories, ect. from the games?\", \"Figure 6: How come in the two last images, we see different starting points?\", \"Table 1: can we have an idea of the errors in meters? and compared to the distance of the trajectory?\"], \"small_remarks\": [\"theta is not alpha. Please use a common notation.\", \"Figure 5, 6.. : the legends are clearly not visible. they might not be useful, but in this case you have to spend time in changing the tick labels so that we can read them.\", \"'build a relationships': no 's'\", \"... a lot of grammatical/ sentence problems\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The problem addressed by this paper is the estimation of trajectories of moving objects thrown / launched by a user, in particular in computer games like angry birds or basketball simulation games. A deep neural network is trained on a small dataset of ~ 300 trajectories and estimates the underlying physical properties of the trajectory (initial position, direction and strength of initial force etc.). A new variant of deep network is introduced, which is based on an encoder-decoder model, the decoder being a fully handcrafted module using known physics (projectile motion).\\n\\nI have several objections, which can be summarized by the simplicity of the task (parabolic trajectories without any object/object or object/environment collisions / interactions), the interest of the task for the community (how does this generalize to other problems?), and the writing and structuring of the paper. I will detail these objections further in the rest of the review.\\n\\nLearning physical interactions is a problem which has received considerable attention in the computer vision and ML communities. The problem is certainly interesting, but I think we should be clear on what kind of scientific knowledge we want to gain by studying a certain problem and by proposing solutions. The tasks studied by the community are mostly quite complex physical phenomena including multiple objects of different shapes and properties and which interact with each other. All these phenomena can be simulated with almost arbitrary precision with physics engines, and these engines are mostly also used for generating the data. In other words, the simulation itself is solved and is not the goal of this body of work. The goal is to learn differentiable models, which can be used as inductive bias in larger models targeting more general tasks in AI.\\n\\nCompared to this goal, the proposed goal is far too easy: learning projectile motion is very easy, as these trajectories can be described by simple functions with a small number of parameters, which also have a clear and interpretable meaning. The simplicity of the task is also further corroborated by the small number of samples used to estimate these parameters (in the order of 300). A further indication is the fact, that the decoder in the model is fully hardcoded. No noise modelling was even necessary, which further corroborates that a very simple problems is addressed.\\n\\nIn order words, I am not really sure what kind of scientific problem is solved by this work, and how this knowledge can help us to solve other problems, harder problems.\\n\\nMy second objection is with the written form of the paper. The paper is not well enough structured and written, many things are left unsaid. First of all, the problem has never been formally introduced, we don\\u2019t know exactly what needs to be estimated. What are the inputs, outputs? Is computer vision used anywhere? How are the positions of the objects determined if not with computer vision? How are the user forces gathered? What are \\u201cin game variables\\u201d mentioned multiple times in the document? No notation has been introduced, no symbols have been introduced (or too late in the document). 
For instance, there is no notation for the latent space of the encoder-decoder model.\\n\\nThe figures are not very helpful, as the labelling of the blocks and labels is very fuzzy. As an example, For InferNet, inputs and trajectories are \\u201cTrajectories\\u201d, so what is the difference? Of course we can guess that (inputs are measured trajectories, outputs are reconstructed trajectories), but we should not guess things when reading papers.\\n\\nThe figure for encoder-decoder model is very confusing, as the different arrows have different functional meanings and we have no idea what they mean. The outputs of the encoder and the MLP both point to the latent space and at a first glance the reader might think that they are concatenated, which raises several questions. Reading the text, we infer that first a model is trained using on one of the arrows (the one coming from the encoder) and ignoring the other one, and then the MLP is learned to reconstruct the latent space using the other arrow (the one coming from the MLP), but this is absolutely impossible to understand looking at the figure, which does not make much sense. We can infer all this from the text around equations (1) to (3), which is itself quite fuzzy and difficult to understand, in spite of the simplicity of the underlying maths.\\n\\nThe relationship of RelateNet and InferNet is not clear. While the task of InferNet is clear, the role of InferNet in the underlying problem is not clear and it has not been described how it interacts with RelateNet.\\n\\nIt is unclear how the transfer between science birds and basketball has been performed and what exactly has been done there.\\n\\nAs mentioned above, the role of \\u201cin game variables\\u201d is unclear. What are those? I suggest to more clearly define their roles early in the document and use terms from well-defined fields like control (are they \\u201ccontrol inputs\\u201d) or HCI (are they \\u201cuser actions\\u201d?).\\n\\nIn the evaluation section, we have baseline models BM1 and BM2, but they have never been introduced. We need to guess which of the models described in the paper correspond to these.\\n\\nThe related work section is very short and mostly consists of an enumeration of references. The work should be properly described and related to the proposed work. How does the proposed work address topics which have not yet been solved by existing work?\"}"
]
} |
SJeWHlSYDB | SPREAD DIVERGENCE | [
"Mingtian Zhang",
"David Barber",
"Thomas Bird",
"Peter Hayes",
"Raza Habib"
] | For distributions $p$ and $q$ with different supports, the divergence $\mathrm{D}(p\,\|\,q)$ may not exist. We define a spread divergence $\tilde{\mathrm{D}}(p\,\|\,q)$ on modified $p$ and $q$ and describe sufficient conditions for the existence of such a divergence. We demonstrate how to maximize the discriminatory power of a given divergence by parameterizing and learning the spread. We also give examples of using a spread divergence to train and improve implicit generative models, including linear models (Independent Components Analysis) and non-linear models (Deep Generative Networks). | [
"divergence minimization",
"generative model",
"variational inference"
] | Reject | https://openreview.net/pdf?id=SJeWHlSYDB | https://openreview.net/forum?id=SJeWHlSYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"s51GZ-g4ee",
"H1xiH26qiB",
"rkeamfp9iB",
"BJlMtz3csB",
"BJe9q6g09r",
"B1ex_oAT9H",
"SJlHMmjiKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744969,
1573735490821,
1573732900842,
1573728889731,
1572896146288,
1572887400379,
1571693325360
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2274/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2274/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2274/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2274/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2274/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2274/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies spread divergence between distributions, which may exist in settings where the divergence between said distributions does not. The reviewers feel this work does not have sufficient technical novelty to merit acceptance at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for valuable reviews. We believe that we could fully address the concerns with the following arguments.\\n\\n1. In the case where the KL divergence is infinity between two probability models, it is not well defined in the sense it cannot be used for statistical inference. We illustrate this point in section 2 with the delta distribution example; the KL divergence will be infinity regardless of the location of the two distributions.\\n\\n2. Re the lack of theoretical comparison - the focus of the paper is on introducing a new divergence objective for applications of interest where distributions p and q have different support. From a theory perspective, we analyse how it relates to the widely used KL divergence as follows: In section 2, 3, we demonstrate how to augment the KL-divergence to produce the spread KL and under what sufficient conditions it is a valid divergence and will recover the true solution. \\nIn the MLE case, we compare and contrast KL and spread KL (see section 5.1 and appendix E). Our initial conclusion is that spread-KL preserves the favourable properties of MLE, under weaker conditions. Additional supporting theory can be future work and we believe we have provided a solid foundation, worthy of publication.\\n\\nRe the focus of the applications; situations where the likelihood is not defined, i.e., data are deterministic w/o observation noise - we argue that section 5.2.1, related to ICA, demonstrates exactly the comparison between spread divergence and the traditional MLE based EM algorithm, and how the performance is related to the amount of observation noise present. In figure 1. a, when observation noise is small, relative error explodes on the traditional methods (due to slow down/freezing), whereas our spread divergence method is unaffected.\\n\\nThe set of problems of interest where the likelihood is not defined is large. In applications like deterministic ICA, maximum likelihood slow feature analysis, deterministic POMDP - EM cannot work effectively and heuristics exist with no guarantees. We believe the spread divergence provides a general framework which can be used to solve these problems in an elegant way.\\n\\n3. The optimal choice of spread noise (kernel) will depend on the task. Hence, no, a Gaussian noise is not necessarily an optimal choice in general case.\\n\\nThis choice is a hyper-parameter in our method - and setting parameters for a given choice of spread noise during training. We are restricted on which family of distributions we can use given our analysis in section 3 depends on the stationary characteristic for proving spread is a valid divergence. A more exhaustive ablation of noise choice is future work, for which we provide a solid foundation:\\n\\nWe propose Gaussian and Laplace noise as convenient choices, which satisfy the stationary requirement and in practice were well behaved within our experiments. Empirically we compare the effectiveness of the Gaussian and Laplace choices on performance within section 5.2.2, where, qualitatively, we conclude that Laplace is a better choice (evidence to support that the Gaussian noise is a sub-optimal choice there). See paper for intuition provided.\\n\\nFor how to set the parameters for a given choice of spread noise, we provide a general strategy, which enables learning towards an optimal spread noise. The strategy is to maximize the power of a statistical test (or measure). 
For example, in the case of the delta-VAE in section 5.2.2, we maximise the discriminatory power of the spread divergence online wrt to the noise parameters, improving performance. Two complementary strategies are the Mean Transform and the Covariance Structure learning presented for the Gaussian case. These can be extended to other distributions, such as the Laplace, which have similar scale and location parameters.\\n\\n4. The same conclusion from the ICA experiment should hold for other problems where the slow down/freezing behaviour is present when an EM style likelihood approach is taken (even with the trick of adding small observation noise). This is caused when the posterior is not updated within the E-step. Other examples that we are aware of where this problematic phenomenon is present are policy learning in deterministic MDP/POMDP [2], maximum likelihood learning of SFA [slow feature analysis [3] (and other probabilistic matrix decomposition techniques). All of which are good candidates for a spread divergence application. Furthermore, the observation noise tricks of existing methods do not guarantee to recover the true data generating process, whereas our proposed spread divergence method can. We have added a point of clarification to section 5 of the paper.\\n\\n[1] M. Arjovsky et al., https://arxiv.org/abs/1701.07875\\n[2] T. Furmston, D. Barber, https://pdfs.semanticscholar.org/2ab5/475f67f5bdb6d4e411b8d7f3c56185b51847.pdf\\n[3] R. Turner et al.,http://www.gatsby.ucl.ac.uk/~turner/SFA/TSNCOMP2006v8.pdf\"}",
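As a worked instance of the delta-distribution example invoked in point 1 above (a sketch assuming Gaussian spread noise of variance $\sigma^2$): convolving a delta with the noise yields a Gaussian, so the spread KL between two deltas is finite and grows with their separation,

```latex
\delta_{\mu} * \mathcal{N}(0,\sigma^2) = \mathcal{N}(\mu,\sigma^2),
\qquad
\widetilde{\mathrm{KL}}\big(\delta_{\mu_1}\,\big\|\,\delta_{\mu_2}\big)
  = \mathrm{KL}\big(\mathcal{N}(\mu_1,\sigma^2)\,\big\|\,\mathcal{N}(\mu_2,\sigma^2)\big)
  = \frac{(\mu_1-\mu_2)^2}{2\sigma^2},
```

whereas the unspread KL is infinite whenever $\mu_1 \neq \mu_2$.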
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for valuable reviews. We believe that we could fully address the concerns with the following arguments.\\n\\n1. \\\"The issues about JS:\\nIn the case of two distributions that have disjoint support, the JS divergence is always a finite constant. We think this is ill-defined since it is not a valid measure of divergence between two distributions and is not useful for statistical inference. It is a common belief for vanilla GAN (with JS divergence) that the source of instability during training is due to the disjoint support of the two distributions. Other distances are proposed to mitigate this effect [1].\\n\\n2. \\\"compare with a mixture of noise\\\"\\nIn the discrete noise case, it is known as the anti-freeze method [2]. It is a special case of spread divergence. As noted in section 2.1; because the linear operator is equivalent to the convolution. \\nHowever, in the continuous case mixture is different than spread; we can see this in the MLE setting, where if the data distribution is a delta function on the data points, the density of the resulting mixture at that point is still infinite (infinity times a constant is still infinity) so the MLE is still ill-defined, whereas the density of the spreaded distribution (with convolution noise) at that point is finite, (delta distribution will become a gaussian distribution). Therefore, we argue that the spread divergence is superior. \\n\\n3. \\\"Does spreading introduce spurious modes? \\\"\\nIn the discrete case, spread noise will add density to the area that has 0 probability, so it will create local modes and it is necessary to define a valid spread divergence (but not spurious). Therefore, below we answer to \\u201cdoes the spread noise introduce spurious local modes in the continuous case?\\u201d\\n\\nNo. For stationary noise (proposed in our paper), it will never introduce additional modes.\\nWe assume a density function f is differentiable in neighbourhood of the local mode. Since the mode is the local stationary point of the density function, $f\\u2019=0$ in the mode position. (This can be generalized to the case that $f$ is locally continuous in the local mode, so there exits a sub derivative which equals to 0.)\\nTo define a spread divergence, we convolve f by a stationary noise $g$ ($g>0$ everywhere by the requirement of spread divergence). The derivative of the convoluted distribution is given by $(g*f)\\u2019=g*f\\u2019$ (differentiation property of convolution, $*$ means convolution here). Since g>0 everywhere, so $g*f\\u2019=0$ if and only if $f\\u2019=0$ , therefore spread noise will never introduce modes. However, other noise such as \\u201cmixture noise\\u201d $(1-\\\\epsilon)p+\\\\epsilon noise$ may potentially introduce additional modes.\\n\\n4. \\\"does it change distribution sufficiency?\\\"\\nNo. The spread noise family introduced in the paper is a bijective operator (one to one mapping). Therefore, according to Fisher-Neymann theorem, it will not change the sufficiency of data statistics.\\n\\n5. \\\"statistical properties of using spread KL and potential applications\\\"\\nThe statistical properties for inference for spread MLE are discussed in section 5.1 and appendix E.1. Comparing to KL, spread KL maintains the asymptotic efficiency and consistency properties, but only needs weaker conditions. \\nFor potential ML applications; We have added a discussion in the end of section 5.2.1 of the revised paper.\\n\\n6. 
\\\"non-convolutional spreading\\\"\\nWe agree that non-convolutional spreading noise is interesting. However, this cannot be implemented easily for continuous systems in cases where we cannot evaluate explicitly the likelihood of the model. This means that one cannot directly use that method to train continuous implicit models using a modified EM approach. We will leave the analysis of non-convolutional spreading to be future work. \\n\\n\\n7. \\\" Any principles that can guide this optimization rather than black-box optimization?\\\"\\nOptimization of the spread noise hyperparameters is not necessary for simple problems - see section 5.2.1, we achieve significant improvement using spread divergence using a fixed spread distribution.\\nWe agree in higher dimensional, more difficult problems learning the spread noise can improve performance significantly. \\nWe provide a principled method in section 4 to learn the spread distribution in an online fashion that maximises the discriminatory power, which we do not consider a black-box optimisation technique. Similar techniques are widely used in the kernel domain (MMD). \\n\\n8. \\\"missing denominator of $\\\\sigma^2$\\\"\\nThanks for pointing out the small error. We have added the assumption $\\\\sigma^2=0.5$ within the revised paper.\\n\\n9. \\\"definition of TV distance\\\"\\nOur definition of TV was up to a constant; we have clarified within the revised paper.\\n\\n10. We thank the reviewer for pointing out the typos, which we have fixed.\\n\\n[1] M. Arjovsky et al. https://arxiv.org/abs/1701.07875\\n[2] T. Furmston, D. Barber, https://pdfs.semanticscholar.org/2ab5/475f67f5bdb6d4e411b8d7f3c56185b51847.pdf\"}",
"{\"title\": \"Thank you for the positive review. The desk reject suggestion is unfortunate.\", \"comment\": \"Thank you for the positive review. The desk reject suggestion is unfortunate - in attempting to release the code in a timely fashion to support our submission, the GitHub repository link https://github.com/zmtomorrow/spread_divergence_public used was associated to a personal Github account. We argue that the Github user is relatively inactive and utilises an anonymous alias, therefore, it would be difficult to identify the researcher behind the account. We claim it is as difficult to identify the researcher/their affiliations, as trying to find the paper in the public domain (given some other papers are openly endorsed on twitter and arxiv during the review process). Given this, we hope it does not exclude our submission and we have provided a new anonymous link.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a new divergence, called spread divergence, to distinguish probability models. The approach is motivated from the concern that traditional divergence such as f-divergence or KL divergence may not always exist, in which the spread divergence may be a substitute. Some empirical supports are provided for the proposed method. Below I will summarize my concerns.\\n\\n1. The spread divergence is proven no larger than the traditional divergence, so the paper claims this as an advantage of using spread divergence. My question is that if KL or f-divergence of two probability models is infinity, which means they distinguish the models very well, whether a new method is necessary (though it may provide a finite value). \\n\\n2. As a new method, it would be useful to thoroughly compare it with the traditional ones. There is a lack theoretical comparison with the KL or f-divergence. Some numerical examples are provided but seem not enough. For instance, the applications focus on the situations where likelihood is not defined, i.e., data are deterministic w/o observation noise. It is interesting to see other examples where likelihood is defined and how traditional methods perform.\\n\\n3. Kernel based spread divergence has been a major focus of this paper. It is interesting to see which kernel maximizes the spread divergence. Section 3.2 considers Gaussian kernel. Is this an optimal option?\\n\\n4. Section 5 compares EM and spread EM based on one experiment and claims the latter has smaller error. Does the same conclusion holds true in other examples?\\n\\nI believe the motivation of this paper is interesting. This would be a stronger paper if more theoretical and empirical analysis can be added.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"I think the paper must be Desk-rejected as the identity of the authors was revealed.\\nThis thing aside, the paper is an interesting contribution. The concept of spread divergence can be valuable in many context. The presentation is thorough and the theoretical part is correct. On the other hand, the examples are quite diverse and include a standard model (ICA) as well as modern deep generative models. Thus, it represents a valuable contribution worth of publication, if we ignore the identity revelation aspect.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduced a way to modify densities such that their support agrees and that the Kullback-Leibler divergence can be computed without diverging. Proof of concept of using the spread KL divergence to ICA and Deep Generative Models ($\\\\delta$-VAE) are reported based on the study of spread MLE.\", \"comments\": \"In Sec 1, mention that f should be strictly convex at 1. Also mention \\nJensen-Shannon divergence, a KL symmetrization, which is always finite \\nand used in GAN analysis.\\n\\nIn Sec 2, you can also choose to dilute the densities with a mixture: \\n(1-\\\\epsilon)p+\\\\epsilon noise.\\nExplain why spread is better than that? Does spreading introduce \\nspurious modes?, does it change distribution sufficiency? \\n(Fisher-Neymann thm)\\n\\nIn Formula 4, there is an error: missing denominator of \\\\sigma^2. See \\nAppendix D too.\\n\\nIn footnote 4, page 8, missing a 1/2 factor in from of TV (that is upper \\nbounded by 1 and not 2)\\n\\nKL is relative entropy= cross-entropy minus entropy. What about spread KL?\\nIn general, what statistical properties are kept by using the spread? \\n(or its convolution subcase?)\\n\\nIs spreading a trick that introduces a hyperparameter that can then be \\noptimized for retaining discriminatory power, or is there\\nsome deeper statistical theory to motivate it. I think spread MLE should \\nbe further explored and detailed to other scenarii.\\n\\nSpreading can be done with convolution and in general by Eq.3:\\n\\nThen what is the theoretical interpretation of doing non-convolutional \\nspreading?\\n\\n\\nA drawback is that optimization on the spread noise hyperparameter is \\nnecessary (Fig 3b is indeed much better than Fig 3a).\\nIs there any first principles that can guide this optimization rather \\nthan black-box optimization?\\n\\nOverall, it is a nice work but further statistical guiding principles \\nor/and new ML applications of spread divergences/MLE will strengthen the \\nwork.\\nThe connection, if any, with Jensen-Shannon divergence shall be stated \\nand explored.\", \"minor_comments\": \"In the abstract, state KL divergence instead of divergence because \\nJensen-Shannon divergence exists always.\", \"typos\": \"p. 6 boumd->bound\", \"bibliography\": \"Cramir->Cramer, and various upper cases missing (eg.\\nwasserstein ->Wasserstein)\"}"
]
} |
HyxgBerKwB | GraphQA: Protein Model Quality Assessment using Graph Convolutional Network | [
"Federico Baldassarre",
"David Menéndez Hurtado",
"Arne Elofsson",
"Hossein Azizpour"
] | Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.
Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible.
Alternatively, protein folding can be modeled using computational methods, which however are not guaranteed to always produce optimal results.
GraphQA is a graph-based method to estimate the quality of protein models, which possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance, and computational efficiency.
In this work, we demonstrate significant improvements over the state of the art for both hand-engineered and representation-learning approaches, and we carefully evaluate the individual contributions of GraphQA. | [
"Protein Quality Assessment",
"Graph Networks",
"Representation Learning"
] | Reject | https://openreview.net/pdf?id=HyxgBerKwB | https://openreview.net/forum?id=HyxgBerKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DQcenQv7mq",
"SkgUfoWcir",
"rJx90K-qor",
"Byxl9tb9jS",
"SkgPfYWcsS",
"ByelYd-qiH",
"SJxKk_b9or",
"HJli5s0ycr",
"SyeIDP2AKr",
"rJeIpEYCKr",
"B1gxZdZFdH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1576798744936,
1573686029581,
1573685714362,
1573685639553,
1573685519126,
1573685368274,
1573685216719,
1571969939220,
1571895133575,
1571882173686,
1570473975990
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2273/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2273/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2273/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2273/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2273/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2273/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2273/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2273/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2273/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2273/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper introduces an approach for estimating the quality of protein models. The proposed method consists in using graph convolutional networks (GCNs) to learn a representation of protein models and predict both a local and a global quality score. Experiments show that the proposed approach performs better than methods based on 1D and 3D CNNs.\\n\\nOverall, this is a borderline paper. The improvement over state of the art for this specific application is noticeable. However, a major drawback is the lack of methodological novelty, the proposed solution being a direct application of GCNs. It does not bring new insights in representation learning. The contribution would therefore be of interest to a limited audience, in light of which I recommend to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for carefully reading the paper and providing detailed comments that highlight its positive points. Here, we try to answer the two main concerns raised.\\n\\n_______________________________________________________________________\\n1) \\u201cThe novelty is minimal\\u201d\\n\\nWe understand the reviewer\\u2019s concern about technical novelty in general, but, in our opinion, this paper should be evaluated from an application perspective rather than for its technical novelty.\\nWe have minor *technical* novelties regarding 1) the graph representations we use and 2) the co-optimization of local and global scores. We certainly understand that these are not enough for a *technical* paper. However, as an *application* paper, our main novelty is to use a simple recent method (GCN) that is well motivated given the problem and that outperforms previous methods which have been developed for more than a decade. If our main strength had been the introduction of technical novelty, we would have followed a different evaluation strategy for a conclusive argument, mainly testing the method in different scenarios, while being less thorough in each experiment. On the other hand, as positively indicated by all reviewers, we chose to focus on a thorough analysis of this novel application, in terms of ablation studies of features and model components, multiple datasets, and different evaluation metrics.\\n\\n_______________________________________________________________________\\n2) \\u201cthe problem is of interest only to specialists in this domain\\u201d\\n\\nWe try to address this concern, shared with reviewer 2, from four different viewpoints.\\n\\na) Most applications that are now commonly used as a benchmark in machine learning were initially niche domains, such as visual activity recognition, face verification, speaker recognition, sentiment analysis, etc. Our general argument is that \\u201cniche\\u201d is not synonymous with \\u201cunimportant\\u201d and that machine learning research should balance between pursuing state of the art on well-established benchmarks and expanding its horizons to less known fields.\\n\\nb) Although QA can be seen as a niche problem, it is actually an integrated part of most structure prediction pipelines and it has two major downstream applications. First, it can significantly increase the reliability of the predictions. As an example, one can look at the PconsFam database (pconsfam.bioinfo.se) where the structure is predicted for ~8000 Pfam families of unknown structure. Without QA it is virtually impossible to identify which of structures are correct, however, thanks to QA methods ~500 families can be assigned a >90% probability of being correct. Secondly, QA methods based on the evaluation of a single model are a key element in the development of end-to-end protein folding pipelines which are recently gaining popularity. \\n\\nc) In addition to the QA application presented here, we believe that our method and our representation can be transferred to other bioinformatics applications. First, as the graph representation relies only on the contact between residues (in contrast to the 1D and 3D representation), it should be straightforward to apply this method to the direct evaluation and refinement of contact maps. 
Given the recent improvement in learning-based contact (and distance) prediction, we foresee that an integration of our method with these algorithms could lead to important developments. Another task where our method could prove useful is protein-protein docking. Once again, the description of the problem as a graph is straightforward, but the task poses additional challenges since incorrect examples vastly outnumber the correct examples.\\n\\nd) The results can be interesting for an audience beyond BioInformatics. Graph Neural Networks (GNN) is a popular technique with several variants appearing in recent top machine learning conferences. This paper introduces a new graph-based benchmark for regression tasks. It introduces large datasets corresponding to the biannual CASP challenge and lays down a strong baseline for future development. Furthermore, the paper proposes new feature representations accompanied by thorough ablation studies, which can be useful for the general GCN audience, as pointed out by Reviewer 2.\\n\\n_______________________________________________________________________\\n\\nFinally, we thank the reviewer for evaluating this work as \\u201ca good application paper showing the application of a known technique to solve a problem in a new domain\\u201d. In that regard, we would also like to note that this is an application paper aligned with the ICLR call for papers that explicitly lists \\u201ccomputational biology\\u201d as a relevant application domain.\"}",
"{\"title\": \"Response to Reviewer 2, part 3\", \"comment\": \"_______________________________________________________________________\\n6) \\u201cQA application is a bit of a niche problem in bioinformatics.\\u201d\\n\\nWe try to address this concern, shared with reviewer 1, from four different viewpoints.\\n\\na) Most applications that are now commonly used as a benchmark in machine learning were initially niche domains, such as visual activity recognition, face verification, speaker recognition, sentiment analysis, etc. Our general argument is that \\u201cniche\\u201d is not synonymous with \\u201cunimportant\\u201d and that machine learning research should balance between pursuing state of the art on well-established benchmarks and expanding its horizons to less known fields.\\n\\nb) Although QA can be seen as a niche problem, it is actually an integrated part of most structure prediction pipelines and it has two major downstream applications. First, it can significantly increase the reliability of the predictions. As an example, one can look at the PconsFam database (pconsfam.bioinfo.se) where the structure is predicted for ~8000 Pfam families of unknown structure. Without QA it is virtually impossible to identify which of structures are correct, however, thanks to QA methods ~500 families can be assigned a >90% probability of being correct. Secondly, QA methods based on the evaluation of a single model are a key element in the development of end-to-end protein folding pipelines which are recently gaining popularity.\\n\\nc) In addition to the QA application presented here, we believe that our method and our representation can be transferred to other bioinformatics applications. First, as the graph representation relies only on the contact between residues (in contrast to the 1D and 3D representation), it should be straightforward to apply this method to the direct evaluation and refinement of contact maps. Given the recent improvement in learning-based contact (and distance) prediction, we foresee that an integration of our method with these algorithms could lead to important developments. Another task where our method could prove useful is protein-protein docking. Once again, the description of the problem as a graph is straightforward, but the task poses additional challenges since incorrect examples vastly outnumber the correct examples.\\n\\nd) the results can be interesting for an audience beyond BioInformatics. Graph Neural Networks (GNN) is a popular technique with several variants appearing in recent top machine learning conferences. This paper introduces a new benchmark which naturally suits graphs. It introduces large datasets corresponding to the biannual CASP challenge and lays down a strong baseline for future development. Furthermore, as pointed out in the review, the paper proposes new feature representations accompanied by thorough ablation studies, which can be useful for the general GCN audience.\"}",
"{\"title\": \"Response to Reviewer 2, part 2\", \"comment\": \"_______________________________________________________________________\\n4) \\u201cFormulas in section 2.3 are cryptic for audience unfamiliar with GCN and it is not specific to this application\\u201d\\n\\nWe agree with the reviewer that our implementation is not specific to protein quality assessment (we kept it general on purpose). However, by reviewing recent graph network literature, we noticed that the research community is far from reaching an agreement on what the standard formulation of a message-passing GCN should be (the graph-based submissions in this current edition of ICLR speak for themselves). This is not uncommon and has happened in the past with e.g. convolutional layers that nowadays we assume to be standardized. For this reason, we decided to briefly discuss the algorithmic implementation that we use in our method. \\nSection 2.3 is meant as a reference for the reader that is already familiar with GCN variants and had to be kept brief due to space constraints. In the same paragraph, we cite Battaglia et al., which serves as a reference for our implementation, and which we encourage to consult for a thorough investigation.\\n\\nTo make the paper more accessible, we slightly modified section 2.3 and added a lengthy explanation of message-passing layers in the appendix section C.1, where all algorithmic steps are motivated and illustrated.\\n\\n_______________________________________________________________________\\n5) \\u201cFigure 3b) shows that there is a cluster of predicting 0\\u201d\\n\\nThat\\u2019s a keen observation. We double checked the predictions for individual targets and identified target T060 to be the problem. Figure 16 in the appendix clearly shows that the model defaults to predicting a small constant value for all decoys of T060, while predicting reasonable scores for all other targets. As a sanity check, we compared the plots of GraphQA with those of GraphQA-RAW, i.e. the model trained without self information, dssp and partial entropy features (in the repository they are located at results/allfeatures/CASP12/global_gdtts_funnel.pdf and results/residueonly/CASP12/global_gdtts_funnel.pdf respectively). It turns out that the model trained on \\u201craw\\u201d amino acid features does not output the same degenerate predictions as its counterpart (the predictions are not perfect, but definitely better than a constant). We suspect that some error in the data pipeline might have produced misleading features for T060, e.g. the multiple sequence alignment program that extracts self information and partial entropy, or the DSSP program that computes secondary structure features. We added this remark to the paper.\"}",
"{\"title\": \"Response to Reviewer 2, part 1\", \"comment\": \"We thank the reviewer for detailed and constructive feedback that definitely increases the quality of our work. Here we separately address the concerns raised.\\n\\n_______________________________________________________________________\\n1) \\u201cMethodological novelty is low -- this is a straightforward application of GCN\\u201d \\n\\nWe understand the reviewer\\u2019s concern about technical novelty in general, but, in our opinion, this paper should be evaluated from an application perspective rather than for its technical novelty.\\nWe have minor *technical* novelties regarding 1) the graph representations we use and 2) the co-optimization of local and global scores. We certainly understand these are not enough for a *technical* paper. However, as an *application* paper, our main novelty is to use a simple recent method (GCN) that is well motivated given the problem and that outperforms previous methods which have been developed for more than a decade. If our main strength had been the introduction of technical novelty, we would have followed a different evaluation strategy for a conclusive argument, mainly testing the method in different scenarios, while being less thorough in each experiment. On the other hand, as positively indicated by all reviewers, we chose to focus on a thorough analysis of this novel application, in terms of ablation studies of features and model components, multiple datasets, and different evaluation metrics.\\n\\nFinally, we thank the reviewer for noting that \\u201cresults show a decent improvement over the state of the art in this particular application\\u201d. In that regard, we would also like to note that this is an application paper aligned with the ICLR call for papers that explicitly lists \\u201ccomputational biology\\u201d as a relevant application domain. \\n\\n_______________________________________________________________________\\n2) \\u201cThe objective of QA is a bit suspect [...] using experimentally resolved protein structures\\u201d\\n\\nThis is a thoughtful remark about the limitations of QA that transcends this work\\u2019s research question. Here we provide explanations as well as additional results to alleviate this concern. \\nOur evaluation setup uses \\u201cold\\u201d datasets (CASP 7-10, 2007-2013) for training and the most recent datasets (CASP 11-13, 2015-2019 and CAMEO) for testing. As pointed out, all proteins in these datasets share a common factor that is instrumental for quantitative evaluations, namely that experimental determination of protein structure is feasible (e.g. they can be crystallized and scanned under an x-ray microscope). Other than this source of bias, which is explicit and outside our control, we believe that the scale and diversity of the targets considered for testing ensures a sufficient level of generalization. CASP 11, 12, 13 and CAMEO, in fact, portray a large spectrum of non-disordered proteins, e.g. ranging from very short to very long chains, from stand-alone to part of a complex, from hydrophobic to hydrophilic. A QA method that achieves good performances across these diverse datasets has the potential to correctly score computationally-modeled decoys of proteins whose true structure is unknown.\\nFinally, as an additional study, we include a performance comparison between transmembrane and soluble proteins (section D.2). 
Predictably, GraphQA performs better on soluble proteins, which are more numerous in the training set, but it also scores transmembrane proteins to an acceptable degree.\\n\\n_______________________________________________________________________ \\n3) \\u201cSeparation encoding is done as a one-hot vector.\\u201d\\n\\nThanks for bringing up this interesting question, it was definitely something worth looking into. There are actually two types of distances in play in the edges of our protein graphs: the spatial distance and the sequential distance (or separation). Our initial approach was to encode the spatial distance using a single RBF kernel and the separation as a categorical variable over biologically-motivated bins. \\nAs suggested by the reviewer, we conducted additional studies with several variants of this choice. For the spatial distance we tried: 1) removing it, 2) using the scalar value in Angstrom, 3) encoding the distance using 32 RBF kernels with unit variance. For the separation we tried: 1) removing it, 2) using the scalar separation (integer), 3) using a categorical encoding.\\nOur findings, which are now included in the updated revision (section D.1), are the following. Categorical separation performs better than a scalar value for LDDT scores, while the effect on GDT_TS is minimal. For spatial distances, RBF encoding performs marginally better than the other two, both on LDDT and GDT_TS scores.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for carefully reading the paper and for suggesting action points to make bibliography more complete and our experiments more convincing for the audience. Here we address the two main concerns.\\n\\n_______________________________________________________________________\\n1) \\u201cno experimental result on CASP13\\u201d\\n\\nWe did not include CASP 13 in the initial version since recently-published techniques use CASP 11/12 as a benchmark. Clearly, the recent CASP 13 represents an important dataset and is great of interest for many researchers. As suggested by the reviewer, we have now tested our model on publicly available targets of CASP13 and we report a comparison with other top participants of the challenge (section F.3). This comparison can be unfair since GraphQA is only trained on CASP 7-10, while other participants have likely (re)trained their models on all previous CASP datasets as well as other datasets. However, even without retraining, we achieve performances that are in line with the results presented for CASP 11 and 12.\\n\\nTo strengthen our experimental evidence, we have also tested our model on the CAMEO dataset as well. Metrics and plots are reported in section F.4.\\n\\n_____________________________________________________________________\\n2) \\u201cReferences are missing or misplaced at some places.\\u201d \\n\\nWe revised the mentioned paragraph to explicitly mention protein design and added additional references for structure prediction, namely:\\n- \\u201cDistance-based protein folding powered by deep learning\\u201d \\n- \\u201cHigh precision in protein contact prediction using fully convolutional neural networks and minimal sequence features.\\u201d \\n\\nWe would be grateful if the reviewer could share any additional reference that is missing or misplaced so that we can further improve on this point.\"}",
"{\"title\": \"To all reviewers\", \"comment\": \"We sincerely thank the reviewers for the thoughtful comments, which led us to further discuss our work and to improve the paper with additional experiments and explanations.\\n\\nHere\\u2019s the list of updates in the revised version of the paper, for most of which we could only find room in the appendix due to the space constraint:\\n- Updated some references in the main text\\n- Additional test datasets, CASP13 and CAMEO (appendix F)\\n- Additional description of our GCN implementation (appendix C.1)\\n- Comparison of different representation for sequential and spatial distance (appendix D.1)\\n- Performance comparison of transmembrane vs soluble proteins in (appendix D.2)\\n\\nFor the sake of the followup discussions, we individually respond to the points raised by each reviewer. This means that our response can be sometimes repetitive (for overlapping concerns) across different reviewers. We apologize for this redundancy.\\n\\nOn top of the individual responses, we would like to comment on the relevance of our work. We suggest that our application paper should mainly be evaluated based on the following merits that it possesses: \\na) the novelty of the method within the domain of the application,\\nb) the relevance of the paper to the venue,\\nc) the quality of the experiments,\\nd) the significance of the results.\", \"we_believe_we_cover_these_aspects_as_follows\": \"a) this is the first time that Graph Networks are used for protein model quality assessment,\\nb) \\u201ccomputational biology\\u201d is listed as a relevant application field in ICLR call for paper,\\nc) as indicated by reviewer 1 and 2, we provide thorough ablation studies of both feature representations and architectural components using several controlled runs per setup, which makes our experiments informative and reliable for follow-up works,\\nd) finally, as indicated by all three reviewers, our simple model works noticeably better than prior works, on several datasets, and according to various evaluation metrics.\\n\\nTo support this point, there are many application papers that have been considered relevant at top ML conferences for the aspects listed above. Here we report one representative paper each from the latest ICLR, NeurIPS and ICML conferences, that are closest to our work (without claiming that the opposite cases do not exist).\\n\\na) Similar OpenReview discussion regarding the technical novelty of a protein application paper, published at ICLR 2019: \\u201cHuman-level Protein Localization with Convolutional Neural Networks\\u201d, https://openreview.net/forum?id=ryl5khRcKm \\nb) Graph Networks applied to a niche field, but with significant improvements and thorough analysis, published at ICML 2019: \\u201cCircuit-GNN: Graph Neural Networks for Distributed Circuit Design\\u201d, https://icml.cc/Conferences/2019/Schedule?showEvent=4826\\nc) Another niche application using a general gated graph recurrent network with noticeable performance published at NeurIPS 2019: \\u201cDevign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks\\u201c, https://neurips.cc/Conferences/2019/Schedule?showEvent=14038\\n\\nFurthermore, we argue that thoroughness and conclusiveness of experiments should outweigh technical novelties for an application paper. 
To support this, we kindly refer the reviewers to an ICLR2020 submission which raises concern around accepted papers that introduce technical novelties to GCNs but lack thorough experiments: \\u201cA Fair Comparison of Graph Neural Networks for Graph Classification\\u201d, https://openreview.net/forum?id=HygDF6NFPB\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This manuscript describes a new deep learning method for the prediction of the quality of a protein 3D model in the absence of the experimental 3D structure of the protein under study. The major idea is to model a protein 3D model using a graph. That is, each residue in a protein is modeled as a node and one edge is added to connect two residues if they are spatially close to each other. Based upon this graph representation, the manuscript describes a graph convolutional neural network (GCN) to predict both local (i.e., per residue) and global quality. The authors showed that this GCN method works well on the CASP11 and CASP12 data. Unfortunately, there is no experimental result on CASP13 models, which significantly reduce my interest on this paper.\", \"minor_concerns\": \"References are missing or misplaced at some places. For example, in the 1st sentence of the 4th graph, \\\"While computational protein folding has recently received attention...\\\", only protein design papers are cited. Some representative protein structure prediction papers shall be cited here.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes use of graph convolutional networks (GCN) for quality assessments (QA) of protein structure predictions. In particular, protein structure prediction is a very active area of research witnessing steady progress during the previous decade or so. To estimate the quality of the prediction, many experimentally resolved structures are needed. However, experimental structure determination is expensive and many protein families are notoriously hard to experiment with. Thus, current estimates of the quality of the protein structure prediction models are incomplete and biased. As a result, there is an interest in guessing quality of protein structure predictions on protein families that are not characterized experimentally. Previous papers already proposed several neural network architectures for the QA task. This paper shows that GCN applied on a graph of a predicted protein structure achieves higher accuracy than the previously proposed neural networks\", \"strengths\": [\"The proposed GCN outperforms other neural network baselines.\", \"The protein representation is reasonable.\", \"The paper is reasonably well written with a nice overview of the related work.\", \"Ablation studies are well done, in clean graphs. This can be useful for other authors who work with GCNs.\"], \"weaknesses\": [\"Methodological novelty is low -- this is a straightforward application of GCN\", \"The objective of QA is a bit suspect for the sole reason that the training and testing is performed using experimentally resolved protein structures. This data set is biased and there are no guarantees that the reported accuracy will hold over a vast range of protein families that are not structurally characterized.\", \"Separation encoding is done as a one-hot vector. This could probably be passed as a scalar value. Would be nice to have comparison between 1-hot vs scalar in the experimental results\", \"Formulas in section 2.3 are cryptic for audience unfamiliar with GCN and it is not specific to this application.\", \"Figure 3b) shows that there is a cluster of predicting 0 where the ground truth is bigger than 0.6.\", \"Overall, this is a borderline paper. There is little methodological novelty and the QA application is a bit of a niche problem in bioinformatics. However, the results show a decent improvement over the state of the art in this particular application, so this paper might be of importance for a limited audience interested in this problem. Giving this work a benefit of the doubt the entered rating is a weak accept.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Proteins are sequences of Amino Acids. Identifying the 1-D sequence of a protein is straightforward. Each protein folds to a 3D structure. Determining the 3D structure of a protein (the protein folding problem) is expensive and hard. It is well known that the function of a protein is determined by its 3D structure. Several computational methods have been proposed for protein structure prediction, but none of these models perform well in all circumstances. Different models perform well for different kinds of proteins. The current work deals with evaluating the different models to determine which model is likely to perform better on which protein.\\n\\nThe authors use Graph Convolutional networks with messaging to solve this problem of evaluating protein quality. Their method is evaluated using the Global Distance Test Total Score and Local Distance Difference Test (which is done at a residue level). They use several node and edge level features like DSSP, Partial Entropy and Self Intro.\", \"pros\": \"Their methods perform better than comparable methods using 1D and 3D CNNs. An ablation study is conducted to show the importance of various features. The paper is well written and the source code is provided for reproducibility. Overall, this is a good application paper showing the application of a known technique to solve a problem in a new domain.\", \"cons\": \"The novelty is minimal and the problem is of interest only to specialists in this domain.\"}",
"{\"comment\": \"We would like to issue the following important corrections and apologize to the reviewers if the current version has caused any misunderstanding. They will be corrected in the next version.\\n\\nSection 3.1, the last part of the second paragraph should read as:\\nOf these, we focus on R_target and R_model, which respectively measure the ability to rank decoys by quality and to distinguish the correctly-predicted parts of a model from those that need improvement. A description of these and other metrics can be found in appendix E.\\n\\nTable 1, last row: GraphQA_RES refers to the GraphQA_RAW version mentioned in the text.\", \"title\": \"Errata corrige\"}"
]
} |
rygeHgSFDH | Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN) | [
"Peter Sorrenson",
"Carsten Rother",
"Ullrich Köthe"
] | A central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al. (2016)) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST. | [
"disentanglement",
"nonlinear ICA",
"representation learning",
"feature discovery",
"theoretical justification"
] | Accept (Spotlight) | https://openreview.net/pdf?id=rygeHgSFDH | https://openreview.net/forum?id=rygeHgSFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4ZXllCVZ4",
"HkemnUnqoH",
"BylKIoj5oS",
"Byx4zDN_oS",
"HJejK84_sB",
"Hye788EOiB",
"SJlju2tx9r",
"H1eAihsatr",
"rylG-P22FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744903,
1573729963029,
1573727056887,
1573566220436,
1573566083328,
1573566027202,
1572015219095,
1571826853866,
1571763961616
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2272/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2272/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2272/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2272/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2272/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2272/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2272/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2272/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper builds on the recent theoretical work by Khemakhem et al. (2019) to propose a novel flow-based method for performing non-linear ICA. The paper is well written, includes theoretical justifications for the proposed approach and convincing experimental results. Many of the initial minor concerns raised by the reviewers were addressed during the discussion stage, and all of the reviewers agree that this paper is an important contribution to the field and hence should be accepted. Hence, I am happy to recommend the acceptance of this paper as an oral.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your reply and clarifications.\", \"comment\": \"I have read the comments and the appendix is certainly improved.\\n\\nThe new added section 4.4.3 admits that selection of $u$ is not clear. I think this section and the subsequent one could also be merged into the conclusions, as a limitation and subject for future work. \\n\\nI realize that my final comment was incomplete and should have read \\n\\\"Arguably, the key contribution of the current paper is in 3.1. and 3.2, but these are rather hastily written and quite short. Overall, the balance of known results and new contributions should be better balanced and highlighted.\\\"\\n\\nbut I feel this is also addressed rafter reading the other replies.\\n\\nI am now more confident that the paper would be a valuable contribution to ICLR.\"}",
"{\"title\": \"Reply to authors\", \"comment\": \"Thank you very much for your thorough rebuttal and for addressing many of my issues.\\n\\n1. The new section is great and very clear. But I would somehow have expected to see it in the Theory section, perhaps at the end?\\n\\n3. I see thanks. I was more looking for a Beta-VAE baseline which would fail to find the two-dimensional manifold embedded into the ten dimensions. But this feels out of scope for now, so this could be left to further work.\\n\\n4. Great addition thanks, and Figure 15 in the Appendix is a great demonstration of the effects. I guess it'd be even better if the spectrum itself would be sufficient (i.e. without resorting to sampling from the model to check), but this is promising.\\n\\n5. Yes I saw Figure 6/7, but I was looking for something more drastic with stronger gaps in the Ground truth supports (and where increasing data points would not be sufficient. I'm aware this is a completely different setting which departs from your support assumptions however...). But now looking at Figure 7 more carefully, the reconstructed space does indeed look rather different from the ground truth, so the effect is already quite strongly there, hence I see your point.\"}",
"{\"title\": \"Reply to Reviewer #4\", \"comment\": \"Thank you for your thoughtful comments. We address each one in turn:\\n\\n1. We have added a section (4.4.3) explaining our justification for the use of the digit labels for EMNIST and point out that other data sets may not have such an obvious candidate for u.\\n\\n2. We agree that there is no canonical set of variables for EMNIST. For this reason, we allow the reader to see all 22 of the variables our model discovered in Appendix F, figures 10 to 15, and trust that they find them convincing enough to agree that they are at least approximately correct.\\n\\n3. We do not expect that a VAE would perform worse on the experiments on toy data. The result we wish to emphasise is that the model discovers the two-dimensional manifold embedded in ten dimensions. Our justifications for using invertible networks rather than VAEs are given in the Introduction (final paragraph before summary, starting with \\u201cWe introduce a variant...).\\n\\n4. We have amended the main text to make this justification clear. See Section 4.4.2, paragraph starting with \\u201cThe model encodes information\\u2026\\u201d\\n\\n5. We performed exactly such an experiment where a \\u2018gap\\u2019 is present when less data is available but becomes filled as the number of data points is increased. It can be found in Appendix E, figures 6 and 7. \\n\\n6. We have modified the captions in Figure 4 as suggested.\\n\\n7. Thank you for bringing these works to our attention. We agree that they are relevant and have included them in the related work section.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Thank you for your thoughtful comments.\\n\\nYou are correct that the latent dimension of the model must be chosen at least as large as the dimension of the true generating latent space. This is one of the reasons why we prefer invertible neural networks / normalizing flows over VAEs, as in such models, the dimension of the latent space is necessarily the same as the dimension of the data, in order to preserve bijectivity. Therefore, we never choose the dimension of the model\\u2019s latent space, so in this context, an experiment where the latent space is too small is not possible.\", \"regarding_the_magnitude_of_the_standard_deviations_of_noise_dimensions\": \"we have added a section (3.3) explaining the role of volume-preservation in our model. Since the EMNIST data set is relatively noise-free, noise does not play a large role in explaining the data, hence we expect it to also play a small role in the derived latent space, quantified by its standard deviation. With more noisy data we might observe non-informative dimensions which have a similar standard deviation to the informative dimensions.\"}",
"{\"title\": \"Reply to Reviewer #1\", \"comment\": \"Thank you for your thoughtful comments.\\n\\nWe agree that the paper as submitted was not sufficiently self-contained. We have addressed this by reproducing the necessary proofs from Khemakhem et al. (2019) in the appendices, alongside our extension of them, so that our article can now be read as a standalone piece.\\n\\nRegarding selection of the variable u (side information): the major requirement that u must fulfill is that it must condition the distribution of the latent space variables z, as detailed in equation (1). We have added a section (4.4.3) explaining our justification for the use of the digit labels for EMNIST and point out that other data sets may not have such an obvious candidate for u.\\n\\nWe have removed the canonical parameters of a multivariate Gaussian (formerly equations (2) to (4)).\\n\\nThank you for bringing to our attention our omission of SVAE. We have added it to the related work section.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper extends recent work by Khemakhem et al on nonlinear ICA to allow for unknown number of generative factors. This is tackling an important problem in the field of generative modeling, where one would like to extract the generative factors of a dataset that independently control its features (i.e. disentanglement).\\n\\nThe paper is very clearly written, does a great job at motivating the problem and presenting the recent results from Khemakhem et al, before extending them with some simple theorems and demonstrating their application on toy data + EMNIST.\\nI think that this research direction is extremely promising, and obtaining a theoretical understanding of when/why disentangling could work would be particularly valuable to the field, and I would lean towards acceptance so that this work gets more attention.\\n\\nIt is slightly surprising however that their empirical results seem to indicate that these theoretical conditions do not seem to be necessary, which should be investigated further (and might help illustrate the dichotomy in some claims and results obtained in the disentangling literature recently).\\n\\n1.\\tIt was unclear to me how one should/would decide what to use for $u$ or what to leave out to be factorized by the method. \\n\\tThis may be out of scope for this current work, but one way to answer that would be to leverage datasets with more fully labelled factorised data (e.g. dSprites [1]) and present how one should leverage these with $u$, in order to identify the original generative factors.\\n2.\\tRelated, using EMNIST was interesting, but given the lack of \\u201caccepted\\u201d generative factors to be recovered, it is hard to tell if the 22 variables found are \\u201ccorrect\\u201d or more similar to using a generative model which would entangled the data more (and hence would falsely introduce extra latent variables).\\n3.\\tThe toy dataset, with its random RealNVP network to produce the data, was less \\u201cmixed\\u201d than I expected, looking at Figure 2B. In its projected view, the clusters are still rather easy to identify by eye, which surprised me somehow? Could you comment a bit more on how difficult is the task, or present a VAE baseline that would fail to explain that data?\\n4.\\tI did not see how the threshold for selecting 22 latent variables on EMNIST was set? Was the 23rd latent variable significantly less informative? The spectrum on Figure 3A was not precise enough to assess this fairly and the single mention of \\u201cmeasured by the standard deviations of test data transformed to the latent space\\u201d was too vague to reverse-engineer the decision.\\n5.\\tIt was interesting to read about the observations of when this method should fail. It would be interesting if a dataset with explicit \\u201cgaps\\u201d would be constructed to analyze this case.\\n6.\\tFigure 4 might benefit from mentioning \\u201cwhat\\u201d each variable controls for directly in their individual caption / on top of them, instead of having to read this through in the full figure caption.\\n7.\\tThe current model in the end is modeling the latent state using a mixture of gaussians (although these now have a theoretical connection to the true generative factors). 
How much does this differ from existing generative models using VAEs with a mixture-of-Gaussians prior [2, 3]?\", \"references\": \"[1] dSprites dataset: https://github.com/deepmind/dsprites-dataset \\n[2] https://arxiv.org/abs/1611.02648\\n[3] https://arxiv.org/abs/1902.03717\"}"
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Based on a recent work on identifying the joint distribution over observed and latent variables, the paper presents an extension of it, where the dimension of the latent space is not assumed to be known. Moreover, the authors propose a new neural network architecture based on previous propositions (RealNVP and NICE) to perform this identification.\\n\\nThe paper's presentation is relatively clear, although all the theoretical results are relegated to the appendix. The extension to unknown latent space dimension seems to be quite straightforward, given the recent work that this paper is based on. However, the experimental results performed on EMNIST are quite convincing and the results are interesting. \\nOverall, I think this paper would be interesting to the ICLR audience.\\n\\nIt seems currently that there is a need to choose a dimension to be large enough (i.e., larger than the true dimension of the latent space). I would have appreciated some discussion (and some experiments) if this dimension is chosen too small. \\nAlso, when the dimension is chosen large enough, the non-informative learned latent variables are determined by looking at their standard deviations. Do they always have to be small compared to the informative latent variables?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper builds upon the recent theoretical framework on nonlinear ICA, put forward in recent work Khemakhem et al. (2019) that draw a lot of attention. The latter work provides an extension of the basic nonlinear ICA that is closely related to a VAE with a conditional factorized prior, essentially introducing side information (with assumed extra observables u) to resolve non-identifiability issues.\\n\\nThis paper proposes an invertible architecture related to RealNVP and NICE, coined as GIN: the General Incompressible-flow network. \\n\\nOn the positive side,\\n\\nA key feature of the proposed methodology is model selection (such as determining the model order) that is in general a hard problem even in linear latent variable models. \\nThe performance is illustrated on the EMNIST dataset, as well as carefully constructed synthetic experiments. The semantic descriptions of each captured dimension, as detailed in the appendix, is particularly interesting. \\nThe experimental section is quite extensive. \\n\\nOn the negative side, \\n\\nThe paper is largely based on the results of a recent technical report (Khemakhem et al. (2019)) and is not self contained, hence rather hard to digest. Even the proofs in the appendix and the intuition requires the reading of this longer technical report. \\n\\nIt is also not at all clear where the side information (variables u) is coming from. On EMNIST a natural candidate is the digit labels (and this turns out to be is indeed the case in the experimental section) but it is not very clear what conditions need to be satisfied. What prevents us selecting simply a subset of observations x as u? \\n\\nLacking an explicit motivation, the exercise of writing the canonical parameters of a multivariate Gaussian in (2) - (4) is not very informative. This needs to be better motivated.\", \"the_prior_structure_resembles_quite_closely_the_general_hierarchical_priors_svae_proposed_in_https\": \"//arxiv.org/abs/1603.06277. It would be also informative to discuss the links with this approach.\\n\\nArguably, the key contribution of the current paper is in 3.1. and 3.2, but these are rather hastily written and quite short. Overall, the balance of known results and new contributions\"}"
]
} |
rkxgHerKvH | DEEP GRAPH SPECTRAL EVOLUTION NETWORKS FOR GRAPH TOPOLOGICAL TRANSFORMATION | [
"Liang Zhao",
"Qingzhe Li",
"Negar Etemadyrad",
"Xiaojie Guo"
] | Characterizing the underlying mechanism of graph topological evolution from a source graph to a target graph has attracted rapidly increasing attention in the deep graph learning domain. However, there is a lack of expressive and efficient methods that can handle global and local evolution patterns between source and target graphs. On the other hand, graph topological evolution has historically been investigated in the graph signal processing domain, but it involves intensive manual labor to determine suitable prescribed spectral models, and fitting their potential combinations and compositions is prohibitively difficult. To address these challenges, this paper proposes the deep Graph Spectral Evolution Network (GSEN) for modeling the graph topology evolution problem by the composition of newly-developed generalized graph kernels. GSEN can effectively fit a wide range of existing graph kernels and their combinations and compositions, with theoretical guarantees and experimental verification. GSEN has outstanding efficiency in terms of time complexity ($O(n)$) and parameter complexity ($O(1)$), where $n$ is the number of nodes of the graph. Extensive experiments on multiple synthetic and real-world datasets demonstrate outstanding performance. | [
"deep graph learning",
"graph transformation",
"brain network"
] | Reject | https://openreview.net/pdf?id=rkxgHerKvH | https://openreview.net/forum?id=rkxgHerKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LhoQJAOpJm",
"rkete7wIjr",
"rJxus2Q-jH",
"r1gYsF--iB",
"r1g_0Jxl5S",
"SygNO0FTtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798744874,
1573446385336,
1573104800367,
1573095841440,
1571975119624,
1571819115930
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2271/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2271/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2271/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2271/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2271/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers kept their scores after the author response period, pointing to continued concerns with methodology, needing increased exposition in parts, and not being able to verify theoretical results. As such, my recommendation is to improve the clarity around the methodological and theoretical contributions in a revision.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Addition of more analysis with additional evaluation metric as suggested by Reviewer #2\", \"comment\": \"Dear Reviewer #2, following Item 4 in your comments, we have added more analysis on the performance evaluation of our method and comparison methods on all four real-world datasets. This new evaluation is based on the R2 score, which is a widely-used metric for prediction performance evaluation. Please see them in Table 5 in the Appendix, along with the discussions in Appendix A3.\\n\\nBriefly speaking, as shown in Table 5, our method GSEN achieves the best performance in 3 out of 4 datasets and is still highly competitive in the remaining one dataset. Our GSEN also achieves the best performance on average with a large margin when compared with other methods.\"}",
"{\"title\": \"Authors' Response to Reviewer #2\", \"comment\": \"We appreciate the comments from the reviewer very much.\\n\\n1. For the first concern. Actually our paper is neither targeted at nor has claimed to handle all types of graph evolution process. Instead, we only aim at those situations whose eigenvectors do not change or do not change much. And it can be seen in our Lemma 3.1, our method has good expressiveness for such situations, with significant contributions in the following aspects: \\n\\ni. Such situations, where eigenvectors do not change much during graph evolution, widely exist in the real world. First, there are hundreds of commonly-used graph kernels (e.g., diffusion kernels) who are aiming to explain those phenomena whose eigenvectors keep unchanged during evolution, as exemplified in Table 1. Beyond that, our method is an end-to-end framework that achieves higher expressiveness over all of them. Second, in many (and ever-increasing) domains, people observed that the eigenvectors do not change much. For example, in the \\u201cbefriending process\\u201d in a social network and in structural to functional connectivity transformation in the neuroscience domain. Moreover, our experimental results in 15 datasets with different types of graph process demonstrates the effectiveness of the proposed method for various types of graph evolution process.\\n\\nii. Our method enjoys huge efficiency advantages for the situations when eigenvectors do not change. Namely, the complexity is reduced to linear to the graph size, compared with the (at least) quadratic complexity of traditional methods.\\n\\niii. It is convenient to verify if a specific application is suitable for our method because it is easy to see whether a graph evolution has the eigenvectors unchanged or not. More importantly, our method can help to identify if an evolution process has an unchanged eigenvector or not. Specifically, when our methods achieve high prediction performance, then the process tends to be spectral evolution process with no (or little) change in eigenvectors. This is because as shown in Lemma 3.1, if our model, which enforces low error in Equation (1), also leads to low error in $\\\\|F(U\\\\Lambda U^T) \\u2013 L\\u2019\\\\|^2$, then U^T L\\u2019 U is close to a diagonal matrix.\\n\\n2. For the second concern, we believe our contribution is significant. This paper is much more than proposing merely a new kernel, but instead propose an end-to-end framework that can fit any existing and (unknown) kernels as well as their compositions and summations. This is radically different because the previous graph kernels are still prescribed models that require human labors to tailor a predefined kernel to targeted data by prior knowledge or heuristics. But our model is the first which can purely rely on the data and demonstrate our dominating expressiveness over traditional kernels by extensive experiments on 15 datasets and by theoretical discussions in Lemma 3.2. Also, we believe that being as the first work for deep spectral graph translation, it opens a new window for the deep graph learning community.\\n\\n3. For the third concern, we want to clarify that our method is generic to whichever it is $L$ or $A$ (this is why we mentioned \\u201cwithout loss of generalizability\\u201d in Lemma 3.1, but we will explicitly mention this in the revision. This means we can also use $A$ instead of $L$. 
The reason why we slightly prefer $L$ in this paper is that, in the deep graph learning domain, the graph Laplacian $L$ seems more commonly used and its eigen-decomposition is more popular.\\n\\n4. For the last concern of the reviewer, we have no problem adding more analysis using more metrics, such as RMSE. We are working on that and will try our best to provide the updates by the end of the rebuttal session. The reason why we did not use link prediction is due to the nature of our real-world applications. Specifically, for our brain network application, researchers in the domain all predict the functional connectivity all at once. Similarly, the other application on the authentication networks also focuses on whole-graph generation. Following the domains\\u2019 inherent setting will not only ensure that our experiment is practically meaningful but also ensure that we are able to compare with the state-of-the-art methods in their domain. Correlation is commonly used in graph evolution papers, such as:\\n\\nAbdelnour, Farras, et al. \\\"Functional brain connectivity is predictable from anatomic network's Laplacian eigen-structure.\\\"\\u00a0Neuroimage\\u00a0172 (2018): 728-739.\\n\\nHoney, C. J., et al. \\\"Predicting human resting-state functional connectivity from structural connectivity.\\\"\\u00a0Proceedings of the National Academy of Sciences (PNAS)\\u00a0106.6 (2009): 2035-2040.\\n\\nXiaojie Guo, Liang Zhao, Cameron Nowzari, Setareh Rafatirad, Houman Homayoun, and Sai Dinakarrao. Deep Multi-attributed Graph Translation with Node-Edge Co-evolution. The 19th International Conference on Data Mining (ICDM 2019), Beijing, China.\"}",
"{\"title\": \"Response to the comments on our experiments\", \"comment\": \"Thanks a lot for the reviewer's comments. We devoted extensive efforts to the experiments and hence deeply believe that our experiments on 15 datasets (4 real-world datasets + 11 synthetic datasets) against 7 comparison methods are sufficient to demonstrate the effectiveness and efficiency of our methods.\", \"more_details_are_as_follows\": \"First, the experiments on four real-world datasets demonstrate that the proposed model outperforms all the comparison methods in accuracy and efficiency significantly. Specifically, Table 3 shows that our method achieved the best performance on 3 out of 4 real-world datasets when being compared with 7 other state-of-the-art methods, and achieved the second-best in the remaining one dataset. Our method also obtained the best overall performance. Moreover, our method is 40 to 1000 times faster than the most competitive comparison methods as shown in Table 4. As also mentioned by the reviewer \\\"the biggest advantage of our method is scalability in terms of time and complexity\\\", we believe our above real-world experiments are sufficient to support it.\\n\\nSecond, the synthetic dataset also showed that our method achieved the highest performance in all the datasets. As explained in the caption of Table 2 and in Section 5.2.1, the results marked as \\\"GS\\\" are the performance achieved by the \\\"gold standard\\\" (i.e., the prediction model is the data generator itself). So the purpose of the experiment is to validate how close our method's performance is to the gold standard's. And it can be seen that our method achieved the best performance (i.e., closest to gold standard's) in 8 out of 11 synthetic datasets. And our overall performance is 50% higher than the best performer (i.e., C-DGT) among the comparison methods. This strongly demonstrates that our end-to-end methods are effective in various datasets even with different types of transformations.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposed a method to model the graph topological evolution from the spectral domain by developing a new generalized graph kernel. The new graph kernels cover many existing graph kernels as well as their combination and composition as special cases. The idea of spectral graph translation and its integration with deep learning is interesting, especially considering that most previous spectral graph neural networks only transform the graph signal instead of graph structures. However, I do have some concerns of papers.\\n\\n1.\\tMy major concern is the soundness of keeping eigenvectors unchanged in the evolution. Although the authors claim that in previous studies eigenvectors are found stable in evolution, it is very counter-intuitive, and I am not sure it is the case for all types of graphs. Let us look at the proof of Lemma 3.1, obviously $L^\\\\prime$ does not necessarily have the same eigenvectors, and $U^TL^\\\\prime U$ is not a diagonal matrix, so this loss is actually very large in many cases. That is to say, the evolution model does not have enough expressive power to recover the $L^\\\\prime$.\\n2.\\tSpectral graph translation looks interesting, but the main idea comes from Kunegis et al. (2010). Despite of a new designed graph kernel and adding nonlinear activations, the contributions seem not so significant.\\n3.\\tIn Kunegis et al. (2010), they consider the evolution of adjacency matrix $A$, but in this paper the authors use the Laplacian matrix $L$. If there any reason to make this choice? Also, I think some of the conclusions (e.g. stable eigenvectors) in Kunegis et al. (2010) may not work since $L$ is used instesd of $A$.\\n4. The correlation metric is acceptable, but it will be much better if the authors can do more analysis. For example, why not add the link prediction task as in Kunegis et al. (2010)? BTW, is correlation analysis used before in previous graph evolution papers? (if so, please add a reference)\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a spectral graph neural network based on a graph kernel to predict graph evolution. The overall idea is interesting and the biggest advantage is scalability of the framework to large graphs in terms of time and parameter complexity.\\nThe major drawback of the paper is the lack of experimentation with real datasets. Based on the results from four datasets they used, the efficacy of their proposed method is unclear. The synthetic datasets are hard to admit in this case.\", \"note\": \"I could not verify the theory in detail yet.\"}"
]
} |
HkxJHlrFvr | Angular Visual Hardness | [
"Beidi Chen",
"Weiyang Liu",
"Animesh Garg",
"Zhiding Yu",
"Anshumali Shrivastava",
"Jan Kautz",
"Anima Anandkumar"
] | The mechanisms behind human visual systems and convolutional neural networks (CNNs) are vastly different. Hence, it is expected that they have different notions of ambiguity or hardness. In this paper, we make a surprising discovery: there exists a (nearly) universal score function for CNNs whose correlation with human visual hardness is statistically significant. We term this function angular visual hardness (AVH); in a CNN, it is given by the normalized angular distance between a feature embedding and the classifier weights of the corresponding target category. We conduct an in-depth scientific study. We observe that CNN models with the highest accuracy also have the best AVH scores. This agrees with an earlier finding that state-of-the-art models tend to improve on the classification of harder training examples. We find that AVH displays interesting dynamics during training: it quickly reaches a plateau even though the training loss keeps improving. This suggests the need for designing better loss functions that can target harder examples more effectively. Finally, we empirically show significant improvement in performance by using AVH as a measure of hardness in self-training tasks.
| [
"angular similarity",
"self-training",
"hard samples mining"
] | Reject | https://openreview.net/pdf?id=HkxJHlrFvr | https://openreview.net/forum?id=HkxJHlrFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"e5hzbLrzJ",
"B1l6DMEhsS",
"HylNHnQ2sH",
"SJxZePXFjS",
"H1x6Jm6_or",
"H1lWTzTdsH",
"r1geIMa_sB",
"S1xahZ6uoS",
"BJeifxHT5r",
"SkgCP9CRFH",
"HJeVWRoCKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744845,
1573827173039,
1573825596185,
1573627625219,
1573602020862,
1573601977145,
1573601863869,
1573601717290,
1572847634841,
1571904102322,
1571892731841
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2270/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2270/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2270/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2270/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2270/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2270/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2270/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2270/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2270/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2270/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new measure for CNN and show its correlation to human visual hardness. The topic of this paper is interesting, and it sparked many interesting discussions among reviews. After reviewing each others\\u2019 comments, reviewers decided to recommend reject due to a few severe concerns that are yet to be address. In particular, reviewer 1 and 2 both raised concerns about potentially misleading and perhaps confusing statements around the correlation between HSF and accuracy. A concrete step was suggested by a reviewer - reporting correlation between accuracy and HSF. A few other points were raised around its conflict/agreement with prior work [RRSS19], or self-contradictory statements as pointed out by Reviewer 1 and 2 (see reviewer 2\\u2019s comment). We hope authors would use this helpful feedback to improve the paper for the future submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of the Revision\", \"comment\": \"We sincerely thank all the reviewers for providing constructive suggestions and helping us improve the paper! We list the major changes we have done according to the recommendations in the following:\\n\\n1. We made a revision to our introduction, especially the first and second paragraphs to better motivate our work and improve the logic flow. [Introduction]\\n\\n2. We added a paragraph in the introduction to discuss Reviewer_3's suggested theoretical foundation. [Introduction]\\n\\n3. We provide more details about the correlation testings. [Section 3 Hypothesis 3]\\n\\n4. We have moved all the experiments and discussions about image degradation to the appendix. [Appendix A.2 & A.5]\\n\\n4. We added a section to better illustrate the difference between AVH and model confidence. We also plot the quantitative difference between model confidence and AVH in that section. [Appendix C]\\n\\n5. We added the figures of the training dynamics for model confidence, which help confirm the different between AVH and model confidence. [Appendix A.3(Figure 21)]\\n\\n6. We added training dynamics experiments on CIFAR10 and CIFAR100 to confirm that our observation of the norm and angle is not a phenomenon that is only on ImageNet. [Appendix A.4 (Page 20)]\\n\\n7. We added 2 other correlation coefficient testings, Pearson and Kendall Tau. We also added the testings on all four models. [Appendix A.1 (Page 15-16) ]\\n\\n8. We fixed the typos and citation issues.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"Thank you for your recognition of our work. We appreciate your constructive suggestions, especially mentioning the existing theoretical results which can be extremely useful for strengthening the motivation of our work. We have added that in our introduction.\\n\\n**Question: The introduction is not well-written, especially the second paragraph\\n\\n**Action Taken: We have revised our logic and writing of our introduction addressing all your problems. Thanks for pointing it out.\\n\\n**Question: In table 1, I am not sure whether the author should assume that all audiences have some background on z-score, although I can understand it. I would also encourage the authors to use other correlation metrics with more intuitive explanations (e.g., correlation coefficients).\\n\\n**Response: We do provide the correlation coefficient in the second column of Table 1. \\n\\n**Action Taken: We have made the column name more explicit and add more details about the testing under hypothesis 3.\\n\\n**Question: For the experiment, I would like to recommend authors adding the following experiments.\\n\\n**Action Taken: \\nWe have added the experiments of training dynamics on both CIFAR10 and CIFAR 100 in Appendix A.4 (Page 20). \\nThe experiments provide similar supportive results to our claims in the training dynamics section. \\n\\nWe have added Pearson and Kendall Tau correlation coefficients.\\n\\nWe have added four tables in Appendix A1 (Page 15-16) for all three types of correlation coefficients for four different models (AlexNet, VGG19, ResNet50, and DenseNet121). The experiments also provide similar supportive results to our hypotheses. We found out that better models, like ResNet and DenseNet, correlates stronger with Human Selection Frequency. Also, the difference between the model confidence correlation coefficient and AVH correlation coefficient with HSF is also larger for better models. It helps verify our claim that AVH is an indicator of model generalizability. \\n\\nThanks again for all the suggestions which help us improve the completeness of our paper.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thanks for your recognition on our work. With our extensive empirical study, AVH is shown to be able to better identify the visual hardness than model confidence. AVH is generally useful in a number of applications, not limited to self-training for domain adaptation. Potential applications include robust learning against noisy labels, semi-supervised learning, etc.\\n\\n**Question: In the analysis of the dynamics of training, the authors compared the AVH with feature norm. How about the dynamics of model confidence? Is it similar to the feature norm?\\n\\n**Response: The dynamics of the model confidence during training is similar to model accuracy, and its curve is indeed close to the curve of feature norm. We have put the figures of the model confidence (on AlexNet, VGG19, and ResNet50) in Appendix A.3.\\n\\nThe difference between AVH and model confidence lies in the feature norm and its role during training. To illustrate the difference, we consider a simple binary classification case where the softmax score (i.e., model confidence) for class 1 is $\\\\frac{\\\\exp(W_1X)}{\\\\sum_i\\\\exp(W_iX)}=\\\\frac{\\\\exp(\\\\|W_1\\\\|\\\\|X\\\\|\\\\cos(\\\\theta_{W_1,X}))}{\\\\sum_i\\\\exp(\\\\|W_i\\\\|\\\\|X\\\\|\\\\cos(\\\\theta_{W_i,X}))}$ where $W_i$ is the classifier weights of class $i$, $X$ is the input deep feature and $\\\\theta_{W_i,X}$ is the angle between $W_i$ and $X$. To simplify, we assume the norm of $W_1$ and $W_2$ are the same, and then the classification result is based on the angle now. Once $\\\\theta_{W_1,X}$ is smaller than $\\\\theta_{W_2,X}$, the network will classify the sample $X$ as class 1. However, in order to further minimize the cross-entropy loss after making $\\\\theta_{W_1,X}$ smaller than $\\\\theta_{W_2,X}$, the network has a trivial solution: increasing the feature norm $\\\\|X\\\\|$ instead of further minimizing the $\\\\theta_{W_1,X}$. It is obviously a much more difficult task to minimize $\\\\theta_{W_1,X}$ rather than increasing $\\\\|X\\\\|$. Therefore, the network will tend to increase the feature norm $\\\\|X\\\\|$ to minimize the cross-entropy loss, which is equivalent to maximizing the model confidence in class 1. In fact, this also matches our empirical observation that the feature norm keeps increasing during training. Most importantly, one can notice that AVH will stay unchanged no matter how large the feature norm $\\\\|X\\\\|$ is. Moreover, this also matches our empirical result that AVH easily gets saturated while model confidence can keep improving. Therefore, AVH is able to better characterize the visual hardness and is also a more robust indicator to visual hardness than model confidence, since it is trivial for the network to increase the feature norm.\\n\\n**Action Taken: We have put the figures of the model confidence (on AlexNet, VGG19, and ResNet50) in the Appendix A.3. We also update an additional Appendix C to further illustrate the difference between AVH and model confidence. \\n\\n**Question: The curves of different levels of hardness are missing in Fig.14.\\n\\n**Action Taken: The caption in Figure 14 is confusing. It is the average over all the samples not different hardness levels. Thanks for pointing it out and we have corrected it.\"}",
"{\"title\": \"Responses to individual questions (Part 3)\", \"comment\": \"**Question 7): The middle row in Figure 3 has a small range of ~1e-4. Is that expected? Can you provide some simple arguments? The closeness of the initial and final values of AVH in the AlexNet plot also concerns me.\\n\\n**Response: Yes. AVH is basically an angle normalized by the sum of all angles. Because ImageNet has 1000 classes, it is reasonable that AVH is in the range of 1e-4-1e-3. For AlexNet, a) the model is comparatively a poor model and b) AVH does not necessarily need to decrease too much to have a high impact on the accuracy because as long as the angle between the embedding and the correct class the smallest among all other angles, it will produce a correct prediction (it can be very close to the decision boundary). \\n\\n**Question 8): How is the visualization in Figure 1 generated? It is not immediately clear to me how the high dimensional space is projected into 2D. My concern is that though suggestive in the figure, the category weights w_c, in general, do not spread out evenly. Do they? I would suggest reporting the angular separation of the category weights (maybe by showing them in a CxC matrix).\\n\\n**Response: Figure 1 does *NOT* use any projections or visualization tools (e.g., T-SNE, etc.) because the embedding dimension is originally 2-dimension. Specifically, we directly set the embedding layer (the layer before the classifiers) of CNN to 2 dimensions. That means we use 10 classifiers (each classifier is also 2-dimensional) to classifier 2-dimensional features learned by a CNN. Therefore the category weights do spread out evenly.\\n\\n**Action Taken: We revise the caption for Figure 1 to make it more clear. We will also upload the code to reproduce the visualization.\\n\\n**Question 9): In Figure 2, what happens to the dark stripes? Are there no data points with the specified range of HSF values?\\n\\n**Response: Yes there are no data points for the values within those ranges. \\n\\nWe hope the above response can help the reviewer better understand our paper. We sincerely thank the reviewer again for all the questions and suggestions, which greatly improve the clarity of our work.\"}",
"{\"title\": \"Responses to individual questions (Part 2)\", \"comment\": \"**Question 4): \\u201cWhat is human visual hardness (HVH)? How is HSF related to HVH? Why is being selected from a pool of images (in the procedure described in [RRSS19]) a good measure of HVH?\\u201d\\n\\n**Response: Human visual hardness is a measure of how hard it is for humans to classify a particular picture correctly. Human selection frequency is a proxy for this measure. It is a reasonably good proxy because humans tend to select the pictures which can be easily identified as a given label and might miss the ones which are hard for them to identify from a pool of pictures. \\n\\n**Action Taken: We add the definition of HVH and detailed discussion about how we relate HVH and HSF in the second paragraph of our Introduction.\\n\\n\\n**Question 5): \\u201cSince the class logit is exactly <x, w_c>, with arccos being a decreasing function, I expect AVH to behave very much like the opposite of model confidence (Definition 2). And this seems to be confirmed in Table 1 (performing a confidence calibration on validation set might increase this further). I wonder how AVH is different from model confidence _qualitatively_ and consequently what insights do we gain (or should we expect to gain) by studying AVH instead of model confidence?\\u201d\\n\\n**Response: First of all, Table 1 shows that AVH correlates (significantly) more closely to human selection frequency than model confidence, supporting our claim that AVH can better reflect the human visual hardness. Also, we use Spearman's correlation coefficients which is a statistical measure of the strength of a monotonic relationship between paired data.\", \"there_are_several_obvious_insights_we_can_gain_from_avh\": \"First, AVH demonstrates that angles serve as a more calibrated similarity measure that better reflects human perception. However, one can also see that angles are generally difficult to optimize in current neural networks, indicating that current loss functions or even network architectures are not well designed to optimize the angles. These observations open up a new research direction to improve our network architectures and objective functions towards better optimization of angular similarity instead of the inner product. \\n\\nSecond, AVH reveals an interesting inductive bias in CNNs: features learned by CNNs tend to be discriminative on a hypersphere manifold. The discovery of such inductive bias could be useful for improving the accuracy and robustness of CNNs in all kinds of applications.\\n\\nLast, AVH produces a more calibrated and accurate visual hardness measure, facilitating the applications like self-training for domain adaptation (shown in Section 5 in our paper), robust learning against noisy labels, etc. We have already shown in our paper that AVH can significantly boost the state-of-the-art performance in domain adaptation by outperforming the previous state-of-the-art result (in ICCV 2019) by more than 3%.\\n\\n\\n**Question 6): Degradation levels (DL) are mentioned early on but the experiments and figures were not shown in the main text (deferred till Appendix). What is the rationale? \\n\\n**Response: Because the DL results are consistent with HSF results and because of the space limit, we attach the results in the appendix.\\n\\n**Action Taken: We move everything about DL to the appendix and only link the similar results in the related work and section 3.1 notation and setup.\"}",
"{\"title\": \"Responses to individual questions (Part 1)\", \"comment\": \"**Question 1a): \\u201cThe presentation is confusing, and at times self-contradictory. For example, in Section 1, the paper asserts that \\u201cthe two systems differ in what they view as hard examples that appear ambiguous or uncertain\\u201c but then proceeds to claim that AVH being a CNN-derived quantity (more on this in Con2) correlates well with HSF.\\u201d\\n\\n**Response: Thanks for pointing out the issue. We have revised our introduction to improve clarity. Here we specifically mean that the softmax probability output, which is a popular confidence measure adopted by CNNs, does not match well with HSF. But this does not contradict the conclusion that CNNs can derive other quantities that correlates well with HSF.\\n\\n**Question 1b): \\u201cIn fact, [RRSS19] (heavily cited here) seems to suggest exactly the opposite that harder examples for humans are also hard for CNNs. This is not very surprising as high accuracies of these CNNs imply their agreement with human judgment: we are learning a dataset labeled by humans.\\u201d\\n\\n**Response: Thanks for raising the concern. As mentioned in the beginning, this paper focuses on the better measurement of sample-level confidence instead of dataset-level accuracies. In light of this, the reviewer\\u2019s conclusion that \\u201chigh accuracies of these CNNs imply their agreement with human judgment: we are learning a dataset labeled by humans.\\u201d seems irrelevant to this work. In addition, from their experiments (section 4 in [RRSS19]) as well as the direct confirmation from the authors, by saying \\u201cimages selected less frequently by the MTurk workers are harder for the models\\u201d, they are referring to correlation with dataset-level accuracy (CNNs do make mistakes on images ambiguous to human) instead of sample-level confidence (CNNs being over-confident on wrong samples). Thus their conclusion is not contradicting the technical correctness of this work.\\n\\n**Question 2): \\u201cAVH is a function of a particular CNN (architecture and parameter values) and the target class label _in addition_ to the input image. These dependencies make AVH a measure of the ambiguity of an image very problematic. Granted that the paper presents evidence that AVH correlates with HSF for a number of _trained_ models but they will be of different values. \\u201d\\n\\n**Response: AVH is not defined only for some particular architectures of CNN. As long as the architecture has embedding layer and classifier, AVH can be computed. In addition, as the reviewer mentioned, we use four different CNN architectures for all the experiments to show the consistency of our observations. \\n\\nIf we understand the reviewer\\u2019s confusion correctly, it means \\u201cfor one particular picture, different models have different AVH scores is a problem\\u201d. This is actually an advantage of AVH instead of a problem. AVH scores are not the same since different models have different generalization abilities and naturally tend to show different levels of confidence. We provide more details in Observation 4 on Page 7.\\n\\n**Question 3): \\u201cThe work is not well self-contained. HSF, the core quantity studied is not introduced with sufficient details.\\u201d\\n\\n**Response: We have a formal definition of HSF in \\u201cPage 3 Definition 3\\u201d. 
\\n\\n**Action Taken: We add the definition of HVH and detailed discussion about how we relate HVH and HSF in the second paragraph of our Introduction.\"}",
"{\"title\": \"Clarification to some misunderstandings\", \"comment\": \"We sincerely thank the reviewer for the great efforts in helping us to improve clarity. We appreciate the constructive feedback as it helps the paper to better impact a wider range of audiences. \\n\\nHowever, there is a major misunderstanding from the reviewer which we would like to clarify first. The essence of such misunderstanding lies in confusing \\u201cdataset-level accuracy\\u201d with \\u201csample-level model confidence\\u201d. Let us first provide a simple example here: There are only two labels 1 and 0. For image A, 3 out of 5 humans (HSF 0.6) vote for label 1; for image B, 5 out of 5 humans (HSF 1.0) vote for label 1. CNN only sees [A, 1] and [B, 1]. There is no information about how humans think the images are easy (HSF 1.0) or hard (HSF 0.6) passed to CNN. CNNs learn a dataset labeled by humans but the explicit hardness/confidence information is never revealed to them. Despite this AVH, which is coming purely from CNN correlated very well with HSF. While other popular measures like the softmax do not correlate well. So the fact that such a measure exists and we can compute it is interesting in itself. \\n\\nWe are not entirely sure what the reviewer means by the accuracy of CNN and hardness/confidence of the classifier are the same thing. Accuracy is an average over the whole data, in which confidence/hardness is only defined as instance level. So there is no accuracy for a particular instance (it is right or wrong) and hardness is only defined for the instance. These are unrelated terms. For example, is that one model can have very high overall accuracy (say 0.99) on a dataset but the confidence score (softmax) on one example can be 0.6. We hope that this point is fairly clear.\\n\\nBefore we delve into answering the reviewer's questions. We will first explain how AVH can be vastly different with model confidence for the reviewer to better understand AVH.\\n\\n**Difference between AVH and Model Confidence**\\n\\nThe difference between AVH and model confidence lies in the feature norm and its role during training. To illustrate the difference, we consider a simple binary classification case where the softmax score (i.e., model confidence) for class 1 is $\\\\frac{\\\\exp(W_1X)}{\\\\sum_i\\\\exp(W_iX)}=\\\\frac{\\\\exp(\\\\|W_1\\\\|\\\\|X\\\\|\\\\cos(\\\\theta_{W_1,X}))}{\\\\sum_i\\\\exp(\\\\|W_i\\\\|\\\\|X\\\\|\\\\cos(\\\\theta_{W_i,X}))}$ where $W_i$ is the classifier weights of class $i$, $X$ is the input deep feature and $\\\\theta_{W_i,X}$ is the angle between $W_i$ and $X$. To simplify, we assume the norm of $W_1$ and $W_2$ are the same, and then the classification result is based on the angle now. Once $\\\\theta_{W_1,X}$ is smaller than $\\\\theta_{W_2,X}$, the network will classify the sample $X$ as class 1. However, in order to further minimize the cross-entropy loss after making $\\\\theta_{W_1,X}$ smaller than $\\\\theta_{W_2,X}$, the network has a trivial solution: increasing the feature norm $\\\\|X\\\\|$ instead of further minimizing the $\\\\theta_{W_1,X}$. It is obviously a much more difficult task to minimize $\\\\theta_{W_1,X}$ rather than increasing $\\\\|X\\\\|$. Therefore, the network will tend to increase the feature norm $\\\\|X\\\\|$ to minimize the cross-entropy loss, which is equivalent to maximizing the model confidence in class 1. In fact, this also matches our empirical observation that the feature norm keeps increasing during training. 
Most importantly, one can notice that AVH will stay unchanged no matter how large the feature norm $\\\\|X\\\\|$ is. Moreover, this also matches our empirical result that AVH easily gets saturated while model confidence can keep improving. Therefore, AVH is able to better characterize the visual hardness and is also a more robust indicator to visual hardness than model confidence, since it is trivial for the network to increase the feature norm. This is a fundamental difference between model confidence and AVH.\\n\\nWe update an additional *Appendix C* to better illustrate the difference between AVH and model confidence. We also plot the quantitative difference between model confidence and AVH.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Main Contribution:\\n\\nThis paper is trying to bridge the gap between CNN and the human visual system by proposing a metric (angular visual distance) and validate that this metric is correlated to the human visual hardness and this metric has a stronger relation compared to the softmax score which has been viewed as a metric measuring the hardness of images in CNNs. Furthermore, this paper proposed a reasonable explanation for this observation, i.e., the norm is possibly not correlated to the human visual hardness and validate through the experiment. Finally, this paper shows that this metric is also useful in other applications.\", \"innovative_part\": \"The metric proposed in this paper is based on an interesting and also innovative observation that samples in each class will concentrate in a convex cone in the embedding space (e.g., shown in Figure 1) and the norm has no information on the visual hardness. I like this observation since several existing theoretical results have similar implications although in far simpler settings. For example, [1] shows that for LINEAR model with logistic loss, gradient descent converges to the maximum margin classifier while the norm (corresponding to ||x||_2 in this paper) diverges to infinity with log(T) rate. If we are looking into the Figure 1, we will see that ten convex cones almost form an equal partition of the two-dimensional space and this indicates that the classifier is very similar to the classifier with the maximum margin in the angular space (NOT in the Euclidean space). The observation is quite intuitive and has strong theoretical foundation, which is the main reason that I vote for the acceptance of this paper.\", \"drawbacks\": \"This paper also have several drawbacks but I do believe they can be addressed very easily. \\n\\n1. The introduction is not well-written, especially the second paragraph. I strongly recommend modifying the introduction. \\n\\nFor the first three sentences of the second paragraph, do you mean that CNNs are constructed based on some properties of the human visual systems and thus they should have had some connections but they actually fundamentally differ in practice? Otherwise, if these two are fundamentally different with each other, what is the point of showing some connections between them? \\n\\nFor the sentence \\\"we use this dataset to verify our hypothesis\\\", what is the hypothesis? Do you mean the hypothesis that human visual hardness should have had connections to the classifying hardness for CNNs?\\n\\n For the sentence \\\"Given a CNN, we propose a novel score function that has strong correlation with human visual hardness\\\", I am not sure whether the word \\\"strong\\\" can be used here. \\n\\n2. In table 1, I am not sure whether the author should assume that all audiences have some background on z-score, although I can understand it. I would also encourage the authors to use other correlation metrics with more intuitive explanations (e.g., correlation coefficients). \\n\\n3. 
For the experiment, I would like to recommend authors adding the following experiments.\\n\\n3.1) Show that on other datasets (e.g., CIFAR 10, 100), AVH converges fast to a plateau while the norm constantly diverges to infinity. \\n3.2) Introducing several other measurements to show the correlation. \\n3.3) I also would like to see similar results in Table 1 for different models. \\n\\n\\n\\n\\n\\n\\n[1] Soudry, Daniel, et al. \\\"The implicit bias of gradient descent on separable data.\\\" The Journal of Machine Learning Research 19.1 (2018): 2822-2878.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper defined Angular Visual Hardness (AVH) based on the angle between image feature embedding and the weights of the target class. The authors compared the correlation between AVH and human selection frequency with model confidence and feature norm. The results show that both AVH and model confidence have correlation, but AVH has a stronger correlation than model confidence. Differently from the conjecture of [41], feature norm was not correlated with human selection frequency.\\n\\nNext, the training dynamics of AVH are analyzed. The results show that feature norm keeps increasing during training, whereas AVH hits a plateau very early even when the accuracy or loss is still improving. Also, AVH correlates human selection frequency across different deep models, and it also correlates the model\\u2019s accuracies. \\n\\nAs an application of AVH, the authors applied it to sample selection of self-training for domain adaption. The proposed selection method based on AVC could improve a state-of-the-art self-training method, of which sample selection is based on model confidence of CNN. \\n\\nOverall, the experimental contribution of this paper is good, and the experimental conditions seem to be correct. \\n\\nMinor problems. \\nIn the analysis of the dynamics of training, the authors compared the AVH with feature norm. How about the dynamics of model confidence? Is it similar to the feature norm? \\n\\nThe curves of different levels of hardness are missing in Fig.14.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes, when given a CNN, an image and its label, a measure called angular visual hardness (AVH). The paper shows that AVH correlates with human selection frequency (HSF) [RRSS19].\", \"pros\": \"1. The authors experimented with a representative set of trained models in my opinion. (More on this in Con2)\\n2. In Section 6, the authors acknowledge a substantive counter-example/argument. (More on this in Con2) \\n\\nCons (I put the ones that weighed the most on my decision first):\\n1. The presentation is confusing, and at times self-contradictory. For example, in Section 1, the paper asserts that \\u201cthe two systems differ in what they view as hard examples that appear ambiguous or uncertain\\u201c but then proceeds to claim that AVH being a CNN-derived quantity (more on this in Con2) correlates well with HSF. In fact, [RRSS19] (heavily cited here) seems to suggest exactly the opposite that harder examples for humans are also hard for CNNs. This is not very surprising as high accuracies of these CNNs imply their agreement with human judgment: we are learning a dataset labeled by humans. (More on this in Que2)\\n2. AVH is a function of a particular CNN (architecture and parameter values) and the target class label _in addition_ to the input image. These dependencies make AVH a measure of the ambiguity of an image very problematic. Granted that the paper presents evidence that AVH correlates with HSF for a number of _trained_ models but they will be of different values. \\n3. The work is not well self-contained. HSF, the core quantity studied is not introduced with sufficient details. (See Que1 and Sug1)\\n\\nSome possible mistakes/typos:\\n1. Feature vector x and class weight w in general do not lie on S^n. Indeed your definition of A(u, v) only relies on u, v being nonzero.\\n2. There is a missing { above Definition 1.\\n3. In Definition 1, \\u201cfor any x\\u201d -> for any (x, y). \\n4. In References, [10] is a duplicate of [11].\\n5. The captions in Figures 5, 6, and 7 in Appendix A might be wrong. They say ||x|| whereas the y-axis in the plots is labeled AVH.\\n\\nQuestions (I listed the important ones first):\\n1. What is human visual hardness (HVH)? How is HSF related to HVH? Why is being selected from a pool of images (in the procedure described in [RRSS19]) a good measure of HVH?\\n2. Since the class logit is exactly <x, w_c>, with arccos being a decreasing function, I expect AVH to behave very much like the opposite of model confidence (Definition 2). And this seems to be confirmed in Table 1 (performing a confidence calibration on validation set might increase this further). I wonder how AVH is different from model confidence _qualitatively_ and consequentially what insights do we gain (or should we expect to gain) by studying AVH instead of model confidence?\\n3. Degradation levels (DL) are mentioned early on but the experiments and figures were not shown in the main text (deferred till Appendix). What is the rationale? \\n4. The middle row in Figure 3 has a small range of ~1e-4. Is that expected? Can you provide some simple arguments? The closeness of the initial and final values of AVH in the AlexNet plot also concerns me.\\n5. How is the visualization in Figure 1 generated? 
It is not immediately clear to me how the high dimensional space is projected into 2D. My concern is that though suggestive in the figure, the category weights w_c in general do not spread out evenly. Do they? I would suggest reporting the angular separation of the category weights (maybe by showing them in a CxC matrix).\\n6. In Figure 2, what happens to the dark stripes? Are there no data points with the specified range of HSF values?\\n\\nMinor issues (factored little to none in my decision):\\n1. There are 60+ citations but their relevance to the current seems questionable in many cases. Many of them are accompanied by little or no technical comparison when they are mentioned. In particular, in Section 2 on the related work from psychology/neuroscience, little specifics are discussed to contextualize the current work. \\n2. Many arguments come across as (highly) speculative and imprecise. As a result, I find the reasoning and logical story diluted and hard to follow.\\n3. The comparison with feature norm seems poorly motivated. The other quantities, namely AVH and model confidence, both depend on the class label. \\n4. The term hardness has a rich history and connotation in the algorithmic analysis literature. I would suggest using a different term, as the hardness of a problem usually reflects some intrinsic aspects of its structure and not dependent on some algorithm.\", \"suggestions\": \"1. If DL is not important to the core results, it will help simplify and focus the presentation by leaving them out entirely.\\n2. Try to be more concise and more precise in the presentation. It might also benefit from more formalism wherever possible, and more procedural details, when human studies or notions is involved. The latter seems to be a lesson from [RRSS19] (in regard to reproducibility).\\n\\nIn summary, I do not recommend accepting the current article. \\n\\n(To authors and other reviewers) Please do not hesitate to directly point out my misunderstandings. I am open to acknowledging mistakes and revising my assessment accordingly.\"}"
]
} |
HJgySxSKvB | Deep Relational Factorization Machines | [
"Hongchang Gao",
"Gang Wu",
"Ryan Rossi",
"Viswanathan Swaminathan",
"Heng Huang"
] | Factorization Machines (FMs) are an important supervised learning approach due to their unique ability to capture feature interactions when dealing with high-dimensional sparse data. However, FMs assume each sample is independently observed and are hence incapable of exploiting the interactions among samples. On the contrary, Graph Neural Networks (GNNs) have become increasingly popular due to their strength at capturing the dependencies among samples. Unfortunately, they cannot efficiently handle high-dimensional sparse data, which is quite common in modern machine learning tasks. In this work, to leverage their complementary advantages and yet overcome their issues, we propose a novel approach, namely Deep Relational Factorization Machines, which can capture both feature interactions and sample interactions. In particular, we disclose the relationship between the feature interaction and the graph, which opens a brand new avenue to deal with high-dimensional features. Finally, we demonstrate the effectiveness of the proposed approach with experiments on several real-world datasets. | [
"fms",
"sparse data",
"samples",
"feature interaction",
"important",
"learning",
"due",
"unique ability",
"feature interactions"
] | Reject | https://openreview.net/pdf?id=HJgySxSKvB | https://openreview.net/forum?id=HJgySxSKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jgMXv76K27",
"Skev-GNHqH",
"r1eyeua6tB",
"HJleRvQaFH",
"r1x93hOWtH",
"B1gblq9cur"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_comment",
"official_review",
"comment"
],
"note_created": [
1576798744814,
1572319743497,
1571833830939,
1571792840269,
1571028146508,
1570576873431
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2269/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2269/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2269/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2269/AnonReviewer2"
],
[
"~Xiang_Wu1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to combine FMs and GNNs. All reviewers voted reject, as the paper lacks experiments (eg ablation studies) and novelty. Writing can be significant improved - some information is missing. Authors did not respond to reviewers questions and concerns. For this reason, I recommend reject.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper tries to combine FMs and GNNs to capture both sample and feature interactions. First, feed the feature from FMs to GNNs. Second, build high-order interactions.\", \"strength\": \"[1] This paper tries to solve an interesting question and the idea is simple and intuitive\\n[2] Experiments show that the proposed approach outperforms a number of baselines\", \"my_comments\": \"[1] main concern: lack of ablation study. It would be great to analyze the effects of different components and illustrate/visualize the learned features to see what's the difference and why/how such difference help\\n[2] another concern is complexity. It would be great to see learning curve and computational time analysis\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to combine the graph neural networks and factorization machines. First, the authors propose a relational feature interaction component (RFO) tp deal with the categorical features. This component first uses the factorization machine to project the features to h^FI(x), then it uses an aggregation operation to get the prediction y^RFI. To explore high-order correlations, the authors further propose to calculate a concurrence graph, on which RFI propagates the embedding vectors to get relational high-order correlations. To further model high-order sample interactions, this work then presents a special graph convolutional operation that considers the element-wise products of the encoded features.\", \"my_comments_are_as_follows\": [\"The idea of integrating the GNN and FMs is interesting and intuitive. However, the proposed method is simple.\", \"As the work proposes a new model architecture, a graphical illustration and the pseudo-code is necessary for the audiences.\", \"Some parts are useless for the whole paper. For example, 'a straightforward method is to combine...' (Eq. 3). The discussion of the simple RFI-component is also needless since the paper mainly proposes the high-order version. These paragraphs should be simplified or removed.\", \"The graph convolution operation in Eq. (7) first considers all the element-wise products of the embedding vectors, which is the same as the original FM. Then the authors use G, the concurrence graph, to propagates the embedding vectors. One concern is that the original FMs also consider the graphical information in G, i.e. the concurrence relation. Will the GNN technique improve the usage of this topological information of the features?\", \"An ablation study is necessary to show the contribution of the proposals. For example, comparing the naive RFI and high-order RFI; the performance with and without RFI/SI components.\", \"I would be grateful if the authors provide the running time comparison of the proposed method.\"]}",
"{\"comment\": \"Since we don't know the meaning of features of this dataset, we cannot construct the graph. If we know the meaning of features, we can construct the graph in terms of the practical property of features. For instance, in our Company-X-CTR data, each campaign is configured with different attributes, such as targeting countries, targeting device types, user segment rules, etc. Advertisers might change only one or two attributes and launch another campaign. This new campaign and its original campaign share a lot of common information so that they are highly correlated. Thus, it will be beneficial to capture the relationship between different campaigns when making prediction. Specifically, this kind of new campaigns and their original campaigns share the same campaign_placement_id. Thus, we can construct the graph in terms of the shared campaign_placement_id. More specifically, two campaigns are connected if they share the same campaign_placement_id. After obtaining the graph, we can use our proposed method to make prediction.\", \"title\": \"The graph is not available for Criteo dataset.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose generalize the FM to consider both interaction between features and interaction between samples. For the interaction between features, the authors propose to use graph convolution to capture high-order feature interactions. Moreover, the authors construct a graph on the instances based on similarity. Then a GCN is applied to the sample graph where the feature embedding is shared between the two components. Experiments are carried out on four datasets with tasks of link prediction and regression. Comparison to several baselines demonstrate the superior performance of the proposed method.\", \"strength\": \"1. The idea of utilizing GCN on the feature co-occurrence graph is interesting and innovative. The idea could possibly be combined with other variants of Deep FM models.\\n2. It is an interesting idea to combine sample similarity together with feature co-occurrence for better prediction accuracy.\", \"weakness\": \"1. Many descriptions in the paper are not very clear. First, the authors only mention how prediction is carried out with trained parameters. However, there is no description of the training process like what is the target used for the two components. What is the training procedure? Are the two components trained jointly? Second, the authors provide little description on how the sample similarity graph is constructed excepts for the Ad campaign dataset. Third, it is not clear how is the link prediction evaluation carried out. From the size of the graph, the authors seem to include both user and item in the graph. However, the user and item has disjointed feature set. It is not clear how the GCN is computed for the heterogenous nodes in the graph. Moreover, how is link prediction carried out, by taking inner product (cosine similarity) of the final representation.\\n2. For equation (8) in section 4.1, why we need to compute h_i^{RFI}. This should be the feature representation of sample i. However, the average is computed without include sample i itself. Also, are the neighbors defined in the sample similarity graph? Should we use the sample interaction in section 4.2 to capture that?\\n3. Though it is interesting idea to use graph convolution on the feature occurrence graph, it would be much better if the authors could provide more intuition on the output of the GCN. It would be helpful to study a few simple cases like without non-linearity. Is it a generalization to high-order FM without non-linearity? Also, it would be interesting to see experiments results using the graph convoluted feature representation directly for final representation. Also, some visualization of the learned feature embedding also helps.\\n4. The authors should carry out ablation study for different components of the model. Moreover, it would be much better if the authors can carry out experiments on some widely used recommendation datasets and use standard evaluation metrics for ranking.\"}",
"{\"comment\": \"It is a relatively old dataset, but it is much easier for us to understand your progress if you can provide numbers on this benchmark. The kaggle link is at:\", \"https\": \"//www.kaggle.com/c/criteo-display-ad-challenge\", \"title\": \"Can you run your model on the Criteo dataset?\"}"
]
} |
HJeANgBYwr | Towards Scalable Imitation Learning for Multi-Agent Systems with Graph Neural Networks | [
"Siyu Zhou",
"Chaitanya Rajasekhar",
"Mariano J. Phielipp",
"Heni Ben Amor"
] | We propose an implementation of a GNN that predicts and imitates the motion behaviors observed in swarm trajectory data. The network’s ability to capture interaction dynamics in swarms is demonstrated through transfer learning. We finally discuss the inherent availability of, and challenges in, the scalability of GNNs, and propose a method to improve it with layer-wise tuning and mixing of data enabled by padding. | [
"Graph Neural Networks",
"Scalability",
"Swarms",
"Imitation"
] | Reject | https://openreview.net/pdf?id=HJeANgBYwr | https://openreview.net/forum?id=HJeANgBYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DBUcoBMeL5",
"rklr5kJniH",
"SJgFgy13oB",
"Byg-cACsiS",
"HJeeFJLRYB",
"B1x2QZWRFB",
"BJlkO_dstB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744784,
1573805965055,
1573805808905,
1573805705062,
1571868535572,
1571848484062,
1571682407503
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2268/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2268/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2268/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2268/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2268/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2268/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a graph neural network based approach for scaling up imitation learning (e.g., of swarm behaviors). Reviewers noted key limitations in the discussion of related work, size of the proposed contribution in terms of model novelty, and evaluation / comparison to strong baselines. Reviewers appreciated the author replies which resolved some concerns but agree that the paper is overall not ready for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Responses\", \"comment\": \"We agree the introduction and explanation of our model are partly based on the standard practices of GNN. However, we point out that the emphasis and significance of our paper is the scalability of GNN based networks, and the introduction of our model is to show a working model. We dedicated half our paper to the analysis on the origins of difficulties in scalability, and the exploration of solutions. We proposed a padding method that allows mixed input data of graphs with different sizes, and in turn, enables \\\"generalization\\\" to some extent through training on mixed data.\"}",
"{\"title\": \"Responses\", \"comment\": \"We thank the reviewer's input on alternative works, but argue with respect that the comparison between our work and RL based approaches does not apply.\\n\\nOur paper introduces a working model using standard ideas of GNN. Without claiming the superiority of our model to all other approaches, we establish the credibility of our model through comparison with a few alternative models and graphical demonstration of our model's prediction. Using the working model, we discuss the main contribution of our paper, and dedicate half of its length to the analysis of the scalability issue of the whole class of GNNs defined with these standard equations.\"}",
"{\"title\": \"Responses to Comments 2-7\", \"comment\": \"2. When we introduced the functions in a GNN, by \\\"functions universal to all nodes and edges\\\", we mean that the set of functions on each node's neighborhood is the same for all nodes. The function that computes pair-wise interactions between two nodes through an edge is also the same for all edges.\\n\\n4. Our model outperforms Kipf's GNN model largely because of its lighter weight architecture. Kipf's model attempts to first infer the types of each edge through an encoder before assigning a different edge function for the decoder. Edge type inference is prone to failure in our experience with Kipf's model. Our model is supplied with the connectivity matrix for the tasks at hand, so the edge types (whether there is a connection or not) is known to the network, as is shown in the algorithm block. We also use different layer structures for each function in GNN, whereas Kipf elected to share the same MLP structure for all three functions, which makes it harder to tune. \\n\\n5. For Markov processes, prediction should not depend on earlier history states, so $T_w$ greater than 1 should not improve the performance in principle. As far as the experiments are concerned, all three data sets are first-order deterministic Markov process, so the choice of $T_w$ is trivial in the scope of this paper. However, model is also applicable to higher-order Markov processes, in which the dependency on $T_w$ is an interesting topic to explore.\\n\\n6 & 7. We admit the model comparison is a weak part of the paper. We argue that the main focus and contribution is the scalability of GNN, using motion prediction of multi-agent motion as a ground for discussion. To our best knowledge at the time of submission, no paper has concretely addressed the scalability issue of GNN, or they have simply glanced over the natural scalability of GNN noting the locality of the functions. The model comparison part of paper is not intended as an evidence that our implementation of GNN is superior to all of its counterparts, but to show a working model against baselines. With a model that works substantially well, we then move on to analysing and tackling the difficulties in scalability.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors have proposed a GNN implementation that predicts and imitates the motion behaviors from observed swarm trajectory data. I have the following major comments on the paper:\\n1. I think the authors should discuss more on the related works. They should clearly mention their contributions and difference compared to the prior works.\\n2. What do the authors mean by 'functions universal across all nodes and edges'? Are these same functions for all nodes and edges?\\n3. I feel that the section 2.3 explaining 1D convolutions is slightly separated from the previous and following sections.\\n4. Why does the proposed GNN model performs better than Kipf's GNN for predicting and imitating the motion behaviors? What is the component that makes the difference?\\n5. How does the model depend on the history window length T_w? Also I am curious how does the model depend on the number of GNN layers used?\\n6. I am not quite satisfied with provided experimental evaluation of the paper. It is not clear to me why Kipf's GNN does not perform better than the proposed model (which simply aggregates the edge states). Also it would be interesting to try some other GNN models, such as ClusterGCN, GCI etc.\\n7. Also I want to see a fair comparison in the paper. If the main contribution is a GNN model, then the model should be compared with the recent GNN models, and if the main contribution is predicting the motion behavior from observed swarm trajectory data, it should be compared with MAS based approaches. While reading the paper, it appeared to me that the authors proposed a graph-based method for solving MAS task and they compared with an old GNN method, which is not justified.\", \"i_also_have_some_minor_comments_as_follows\": \"1. In page 1, \\\"re-finement\\\" should be \\\"refinement\\\"\\n2. In page 3, \\\"... my be found in Battaglia ...\\\" my should be may.\\n3. In page 4, \\\"... sample rate is affects ...\\\" should be \\\"... sample rate affects ...\\\"\\n4. In page 6, \\\"... a agent ...\\\" should be \\\"... an agent ...\\\"\\n5. In page 7, \\\"... swarms of with larger ...\\\" either of or with\\n6. In page 8, Table 3 ending parenthesis is missing.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This work considers sequence prediction problems in a multi-agent system (MAS), which I think is different from imitation learning problems where agents try to mimic experts\\u2019 behavior given histories or states. In that sense, I think the title of this work should be changed so that readers are not confused.\\n\\nThe main idea of this work is to use (1) graph neural networks (GNNs) to learn the abstract information among multiple agents, (2) use 1D convolution to extract historical features of each agent, (3) and minimize the MSE loss function between true and predicted states to make expected successor states of multiple agents more accurate. The experiments show training in a small number of agents can be generalized and transferred to the setting in which there is a large number of agents. \\n\\nAlthough the proposed algorithm is practically useful, I believe the submission is premature to be accepted at a conference due to (1) the lack of comparison with existing works on multi-agent (reinforcement and imitation) learning and (2) the lack of novelty (It seems that the proposed method simply combines existing neural networks and applies it to multi-agent behavior prediction.). There\\u2019s a huge recent development on multi-agent RL and IL regarding the scalability of MARL (MF-MARL, https://arxiv.org/pdf/1802.05438.pdf), coordinated multi-agent imitation learning (https://arxiv.org/abs/1703.03121), multi-agent GAIL (https://arxiv.org/abs/1807.09936), etc, which should be considered as related literature. In addition to them, there\\u2019s a paper on arXiv that uses GNN for MARL (https://arxiv.org/abs/1810.09202), which may be deeply related to this work as well. \\n\\nI\\u2019ll definitely increase my score if I underestimate the quality of this work.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Review for \\\"Towards Scalable Imitation Learning For Multi-Agent Systems with Graph Neural Networks\\\".\\n\\nThe paper proposes a new time series model for learning a sequence of graphs. \\n\\nI vote to reject the paper for three reasons.\\n\\n1. Lack of significance. Algorithm 1 is essentially supervised learning of an autoregressive model with a GNN. The GNN definition is expanded out in the pseudocode, but seems to be completely standard. Moreover, Section 2.5 is also completely standard - you are using a sum of standard MSE losses. It should be much more concise. Also, there is typically no point scaling the loss by D if you use Adam (because the scaling they will eventually cancel out).\\n\\n2. Poor awareness of prior work. What you call \\\"scalability issue caused by poor extrapolation\\\" is normally called poor generalization. Improving generalization is an established problem in ML. You should cite some of that work. See [1] and follow the references.\\n\\n3. Poor experimental evaluation. Your experiments show that the competing GNN method (what you call Kipf's GNN) didn't converge in certain settings. Robust GNN implementations exist that converge for a wide range of reasonable graphs. You should have used one of them.\\n\\nAs things stand, because the criticisms concern the core of the submission, I have doubts the quality of the paper can be improved enough within the ICLR revision time to obtain an \\\"accept\\\" score. However, I wanted to encourage you not to give up. The building blocks that you have in the paper are very relevant and can be a basis for impactful work. You also write well (despite some minor issues). I hope these skills will help you make great submissions in the future!\", \"minor_points\": \"1. I would not call what the paper is doing \\\"imitation learning\\\". Imitation learning normally means you have a controllable system and want to learn a policy from expert demonstrations. What this paper does is commonly referred to as learning an autoregressive time series model. \\n2. Avoid colorful language. For example, the sentence: \\\"the apparent resemblance of the corrected distributions in the final output may inspire an inductive bias as part of training objective\\\" is unclear. \\n\\n[1] https://papers.nips.cc/paper/7176-exploring-generalization-in-deep-learning.pdf\"}"
]
} |
BkxREeHKPS | On the Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks | [
"Jakub Świątkowski",
"Kevin Roth",
"Bastiaan S. Veeling",
"Linh Tran",
"Joshua V. Dillon",
"Jasper Snoek",
"Stephan Mandt",
"Tim Salimans",
"Rodolphe Jenatton",
"Sebastian Nowozin"
] | Variational Bayesian Inference is a popular methodology for approximating posterior distributions in Bayesian neural networks. Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance. In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization. For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibit strong low-rank structure after convergence. This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance. What's more, we find that such factorized parameterizations are easier to train since they improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence. | [
"variational Bayes",
"Bayesian neural networks",
"mean field"
] | Reject | https://openreview.net/pdf?id=BkxREeHKPS | https://openreview.net/forum?id=BkxREeHKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fJW8r359dn",
"rylP-SK3jr",
"r1lC94K3iS",
"SJlwMNK2iB",
"S1xrRMiJcr",
"HJxXIP82YB",
"B1xP-y9tFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744755,
1573848318952,
1573848214502,
1573848079185,
1571955404851,
1571739467360,
1571557118899
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2267/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2267/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2267/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2267/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2267/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2267/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to reduce the number of variational parameters for mean-field VI. A low-rank approximation is used for this purpose. Results on a few small problems are reported.\\n\\nAs R3 has pointed out, the main reason to reject this paper is the lack of comparison of uncertainty estimates. I also agree that, recent Adam-like optimizers do use preconditioning that can be interpreted as variances, so it is not clear why reducing this will give better results.\\n\\nI agree with R2's comments about missing the \\\"point estimate\\\" baseline. Also the reason for rank 1,2,3 giving better accuracies is unclear and I think the reasons provided by the authors is speculative.\\n\\nI do believe that reducing the parameterization is a reasonable idea and could be useful. But it is not clear if the proposal of this paper is the right one. Due to this reason, I recommend to reject this paper. However, I highly encourage the authors to improve their paper taking these points into account.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"[R3.1]\\nWhile the observation is somewhat interesting, currently it is only verified in a narrow range of network architectures, and it's unclear if the observation and the proposed method will still be useful on network architectures used in real-world applications. As such, I believe this work would be more suitable as a workshop presentation. \\n\\nMore specifically, the models considered are MLP on MNIST, LeNet on CIFAR-100 and LSTM on IMDB. These choices are not practical, as the reported performance indicates (e.g. 45% accuracy on CIFAR-100); as such these results cannot support the claim that the proposed low-rank parameterization could be useful in practice: while MFVI can be useful on some model architectures, it lead to pathologies on others, especially on smaller networks. See Fig.1 in [1] for an example and [2] for a possible explanation. Also note that the reported accuracy on MNIST is ~2% worse than the typical values in BNN papers using a comparable setting, e.g. [3]. These facts unfortunately lead to the doubt that the proposed low-rank parameterization could only match the performance of MFVI when MFVI is not that useful.\\n\\n[R3.1 our response]\\nEncouraged by the reviewer, we investigated the behaviour on a larger ResNet-18 model (provided by https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/cifar10_bnn.py) and demonstrate that the approximation quality claimed in our paper holds also for convolution layers and for the deeper ResNet-18 model (see Figure 3 in Section 2 of the updated paper). We are therefore confident the finding is of broad applicability and plan to include further experiments in the final version of the paper. \\n\\nThe 2% gap in the MNIST accuracies is due to the difference in the training procedure we used and the procedures which are commonly used in other MFVI papers. We train the models until full ELBO convergence, without early stopping (which is commonly used in the MFVI literature). Early stopping can increase the validation accuracy by e.g. 2% compared to the accuracy at full convergence. The reason for this is that the MFVI models tend to start under-fitting as training progresses when using the full contribution of the KL term. This is due to the fact that the posterior variances increase to reach the prior variance and introduce large amount of noise which reduces how well the model can fit the training data. This is a limitation of the MFVI approach more generally and not specific to our method, but we agree that it is important to understand and address this shortcoming of the MFVI approach in future research. \\n\\n[R3.2]\\nAnother major concern is that I'm not sure if the proposed low-rank variational would actually save parameters in practice, since the variance parameter in MFVI could already be stored as the preconditioners in Adam-like optimizers [4-5]. \\n\\n[R3.2 our response]\\nWe agree that if preconditioning is feasible then the training time memory savings are irrelevant. However, after training the preconditioner is typically discarded and for test-time inference our method does approximately halve the required model size as shown in Tables 2 and 5.\\n\\n[R3.3]\", \"suggestions_for_future_improvement\": [\"Re-do the experiments using more complex network architectures, and optionally, on larger datasets / more complex tasks (e.g. 
image segmentation as in [6]).\", \"Also, consider setups more commonly used in previous BNN papers, e.g. VGG/ResNet on CIFAR-10 has been used in [4,7,8]. CIFAR-100 could be sufficiently complex as a BNN benchmark, but few papers reported results on it.\", \"Report the quality of the learned uncertainty, either directly as in [9] or using performance of downstream tasks, e.g. RL and adversarial robustness as in [4,8].\", \"[R3.3 our response]\", \"As the reviewer suggested, we extended the experiments using a more complex network architecture (ResNet) trained on a commonly used CIFAR-10 benchmark. In the future, we also plan to extend the analysis to report the quality of the learned uncertainty as the reviewer suggested.\"]}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"[R2.1]\\n- The authors restrict their analysis to dense layers only. Moreover, it remains conceptually unclear, why and when the proposed model should be useful and represent a good approximation of the full rank model. \\n\\n- The experimental study is restricted to small models, e.g. in the case of CIFAR a quite shallow version of LeNet which achieves only ~45% validation accuracy. It remains unclear, whether the observed low rank structure of the variance matrices (of the dense layers) will scale to deeper models that could achieve competitive validation accuracies.\\n\\n[R2.1 our response]\\nEncouraged by the reviewer, we investigated the behaviour on a larger ResNet-18 model (provided by https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/cifar10_bnn.py) and demonstrate that the approximation quality claimed in our paper for the dense layers of the MLP, CNN and LSTM models holds also for the convolution layers and of the deeper ResNet-18 model (see Figure 3 in Section 2 and Appendix C of the updated paper). We are therefore confident the finding is of broad applicability and plan to include further experiments in the final version of the paper. \\n\\nExploring the theoretical justification for the observed phenomena remains an interesting future work. In particular, we are currently considering simpler settings, e.g., shallow Bayesian neural networks with linear activation functions, where some derivations can be obtained in closed forms, which constitutes a good starting point for a theoretical analysis.\\n\\n[R2.2] \\nWhen measuring the impact of the low rank tying of the posterior variances, the authors compare to the full rank model only. I am missing the \\\"point estimate\\\" baseline for these models. For if the the positive impact of the Bayesian inference approach with the full rank model is small when compared to the point estimate, then, as a consequence, the impact of the low rank tying must be small when compared to the full rank model.\\n\\n[R2.2 our response]\\nThe goal of our paper is not to prove the positive impact of the Bayesian inference or the low rank tying over the point estimate models. In contrast, we start from the observation that scaling up mean-field and making it more robust is still a challenging problem ([0]). We try to tackle this problem by reducing the number of parameters to optimize---while maintaining a comparable predictive behavior---and obtaining less noisy gradient estimates.\\n\\n[R2.3] \\nThe numbers reported in Table 3 (second group of experiments) raise some questions not addressed by the authors. It remains unclear, why the low rank model with ranks 1,2,3 gives better accuracies on CIFAR than the full rank model. The same holds for the LSTM experiment. This may indicate some kind of \\\"overfitting\\\" and should have been analysed by the authors in order to \\\"disentangle\\\" possible overfitting issues from the question of validity of the proposed low rank model.\\n\\n[R2.3 our response]\\nWe are confident that the results are not influenced by the \\\"overfitting\\\" issues. By varying k of the k-tied Normal, we are varying the parameterization of the posterior distribution, not the original model. Restricting the posterior approximation will not lessen overfitting. In fact, the point estimate (zero variance) will overfit most of all posterior approximations. 
We hypothesise that the improvement in the CNN model from using the low-rank model can be attributed to the improved learning dynamics (higher gradient signal-to-noise ratio).\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"[R1.1]\\nThe paper proposes a low-rank approximation to the diagonal of Gaussian mean field posterior which reduces the number of parameters to fit. They show that the predictive performance doesn't drop much compared with the full covariance but the number of parameters is significantly reduced. \\n\\n[R1.1 our response] \\nTo be precise we are not comparing our method to the full covariance, which would not be tractable in our settings (the full covariance scales quadratically with respect to the total number of network parameters). Instead, we are comparing our method to the full parametrization of the diagonal covariance, i.e., the standard mean-field approximation. Our method is a further factorization of the diagonal covariance in the parameter space. See Figure 6 for a visual representation of this explanation.\\n\\n[R1.2] \\nWhy Matrix normal distribution is related to k-tied Normal distribution when k=1. When k=1, the rank of UV^\\\\top is 1. The covariance is matrix normal is U\\\\otimes V, whose rank is rank(U)rank(V). Also MN has a full covariance for the Gaussian approximation, not mean field. MN is only equal to 1-tied when only diagonal row and column covariances are considered. If that's what the paper means, k-tied is only compared to MN with diagonal row and column covariances. Better make this point clear. \\n\\n[R1.2 our response] \\nWe thank the reviewer for the comment. We changed the sentence below from our paper as suggested by the reviewer to make the comparison with the Matrix normal distribution clearer:\\n\\\"<MN distribution> is related to our k-tied Normal distribution when k = 1\\\" -> \\\"<MN distribution> is related to our k-tied Normal distribution with $k=1$ when MN uses diagonal row and column covariances.\\\"\\nWe have also added a figure (see Figure 6) to clarify this explanation.\\n\\n[R1.3] \\nFigure 4 should show running time instead of training step. I don't think low-rank approximation is gonna influence the convergence that much. It should affect the evaluation speed of each step. \\n\\n[R1.3 our response] \\nWe measured the evaluation speed of each step for a simple model with 2 dense layers [0]. For this model, the measurements show that the k-tied Normal posterior does not decrease the evaluation speed when compared to the standard parameterization of the GMFVI. This experiment can be replicated using the Colab notebook in [1]. We updated the end of section 3.2 and Appendix D to include these results. \\n\\nMore generally, we do not expect a significant change in the evaluation speed of each step when using the k-tied Normal posterior. The biggest additional operation per step when using the k-tied Normal posterior compared to the standard parameterization is the UV^T multiplication to materialize the posterior standard deviations matrix A, where U \\\\in R^{m \\\\times k}, V \\\\in R^{m \\\\times k} and A \\\\in R^{m \\\\times n}. The time complexity of this operations is O(kmn). This time complexity is usually negligible when compared to the complexity of data-weight matrix multiplication with time complexity of O(bmn), where b is the batch size. \\n\\n[0] https://www.tensorflow.org/tutorials/keras/classification\\n[1] https://colab.research.google.com/drive/14pqe_VG5s49xlcXB-Jf8S9GoTFyjv4OF\\n\\n[R1.4]\\nI think the trick the paper uses is a practical one but not significantly novel enough for the ICLR community. 
It feels like a standard trick people would do when fitting parameters for large matrices, i.e. exploring the low-rank structure and fitting the factorized matrices. Matrix normal is more significant since it reduces the number of parameters as well as maintaining a full covariance matrix with structures. If just focusing on the diagonal covariance, it already throws away the full covariance. Low-rank won't help much with improving the posterior distribution.\\n\\n[R1.4 our response]\\nWe acknowledge that MN covers more expressive posterior distribution. However, the goal of our paper is not to investigate richer posterior distributions, beyond mean-field. Instead, we start from the observation that scaling up mean-field and making it more robust is still a challenging problem ([0]). We try to tackle this problem by reducing the number of parameters to optimize---while maintaining a comparable predictive behavior---and obtaining less noisy gradient estimates.\\n\\n[0] Practical Deep Learning with Bayesian Principles\"}",
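To make the k-tied parameterization discussed above concrete, the following is a minimal sketch of sampling a weight matrix whose posterior standard deviations are tied through a rank-k factorization. The shapes and the use of softplus for positivity are illustrative assumptions; the paper's exact transform and initialization may differ.

```python
import numpy as np

def softplus(x):
    # Simple positivity transform (an illustrative choice, not necessarily the paper's).
    return np.log1p(np.exp(x))

m, n, k = 512, 256, 2                   # layer shape and tied rank (assumed values)
mu = np.random.randn(m, n) * 0.05       # posterior means, one per weight
u = np.random.randn(m, k) * 0.01        # row factors U in R^{m x k}
v = np.random.randn(n, k) * 0.01        # column factors V in R^{n x k}

sigma = softplus(u @ v.T)               # rank-k std-dev matrix A = UV^T, shape (m, n)
w = mu + sigma * np.random.randn(m, n)  # reparameterized weight sample

# k * (m + n) variance parameters instead of m * n for standard mean field;
# materializing sigma costs O(k * m * n) per step, as noted in [R1.3] above.
```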
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a low-rank approximation to the diagonal of Gaussian mean field posterior which reduces the number of parameters to fit. They show that the predictive performance doesn't drop much compared with the full covariance but the number of parameters is significantly reduced.\\n\\n1. Why Matrix normal distribution is related to k-tied Normal distribution when k=1. When k=1, the rank of UV^\\\\top is 1. The covariance is matrix normal is U\\\\otimes V, whose rank is rank(U)rank(V). Also MN has a full covariance for the Gaussian approximation, not mean field. MN is only equal to 1-tied when only diagonal row and column covariances are considered. If that's what the paper means, k-tied is only compared to MN with diagonal row and column covariances. Better make this point clear. \\n\\n2. Figure 4 should show running time instead of training step. I don't think low-rank approximation is gonna influence the convergence that much. It should affect the evaluation speed of each step. \\n\\nI think the trick the paper uses is a practical one but not significantly novel enough for the ICLR community. It feels like a standard trick people would do when fitting parameters for large matrices, i.e. exploring the low-rank structure and fitting the factorized matrices. Matrix normal is more significant since it reduces the number of parameters as well as maintaining a full covariance matrix with structures. If just focusing on the diagonal covariance, it already throws away the full covariance. Low-rank won't help much with improving the posterior distribution.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The paper considers variational Bayesian inference (learning) for neural networks assuming that both the prior distributions and the posterior distributions of the network weights are factorising Gaussians. It is well known that the respective optimisation task for the parameters of the posterior weight distributions (ELBO) is tractable by stochastic gradient ascent. The authors propose to simplify (restrict) the model even further, by assuming that the posterior variances for fully connected layers, (when seen as a matrix) have low rank. It is pretty obvious that the ELBO optimisation task remains tractable in this case.\", \"The authors perform two types of experiments to show that the proposed simpler model does not decrease the performance of the network, when compared to the full rank factorising model. They first show that the learned variances of dense layers indeed exhibit a low rank structure for three models learned on the corresponding data (MLP on MNIST, LeNet on CIFAR and LSTM on IMDB). Then, in a second step, they consider the proposed rank constraint during learning and show that with rank k >= 2 the models are able to achieve performance competitive with the full rank model and, moreover, exhibit better signal to noise ratio for the gradients during learning.\", \"The paper is well written, clearly structured and technically correct. However, in my opinion, its novelty and new insights are restricted, which is why I suggest to reject the paper. The reasons for this are the following.\", \"The authors restrict their analysis to dense layers only. Moreover, it remains conceptually unclear, why and when the proposed model should be useful and represent a good approximation of the full rank model.\", \"The experimental study is restricted to small models, e.g. in the case of CIFAR a quite shallow version of LeNet which achieves only ~45% validation accuracy. It remains unclear, whether the observed low rank structure of the variance matrices (of the dense layers) will scale to deeper models that could achieve competitive validation accuracies.\", \"When measuring the impact of the low rank tying of the posterior variances, the authors compare to the full rank model only. I am missing the \\\"point estimate\\\" baseline for these models. For if the the positive impact of the Bayesian inference approach with the full rank model is small when compared to the point estimate, then, as a consequence, the impact of the low rank tying must be small when compared to the full rank model.\", \"The numbers reported in Table 3 (second group of experiments) raise some questions not addressed by the authors. It remains unclear, why the low rank model with ranks 1,2,3 gives better accuracies on CIFAR than the full rank model. The same holds for the LSTM experiment. This may indicate some kind of \\\"overfitting\\\" and should have been analysed by the authors in order to \\\"disentangle\\\" possible overfitting issues from the question of validity of the proposed low rank model.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper showed the (diagonal) variance parameters in mean-field VI for BNNs exhibit a low-rank structure, and that training from scratch using such a low-rank parameterization lead to comparable performance as well as increased SNR of the gradient.\\n\\nWhile the observation is somewhat interesting, currently it is only verified in a narrow range of network architectures, and it's unclear if the observation and the proposed method will still be useful on network architectures used in real-world applications. As such, I believe this work would be more suitable as a workshop presentation. \\n\\nMore specifically, the models considered are MLP on MNIST, LeNet on CIFAR-100 and LSTM on IMDB. These choices are not practical, as the reported performance indicates (e.g. 45% accuracy on CIFAR-100); as such these results cannot support the claim that the proposed low-rank parameterization could be useful in practice: while MFVI can be useful on some model architectures, it lead to pathologies on others, especially on smaller networks. See Fig.1 in [1] for an example and [2] for a possible explanation. Also note that the reported accuracy on MNIST is ~2% worse than the typical values in BNN papers using a comparable setting, e.g. [3]. These facts unfortunately lead to the doubt that the proposed low-rank parameterization could only match the performance of MFVI when MFVI is not that useful.\\n\\nAnother major concern is that I'm not sure if the proposed low-rank variational would actually save parameters in practice, since the variance parameter in MFVI could already be stored as the preconditioners in Adam-like optimizers [4-5].\", \"suggestions_for_future_improvement\": \"* Re-do the experiments using more complex network architectures, and optionally, on larger datasets / more complex tasks (e.g. image segmentation as in [6]).\\n* Also, consider setups more commonly used in previous BNN papers, e.g. VGG/ResNet on CIFAR-10 has been used in [4,7,8]. CIFAR-100 could be sufficiently complex as a BNN benchmark, but few papers reported results on it.\\n* Report the quality of the learned uncertainty, either directly as in [9] or using performance of downstream tasks, e.g. RL and adversarial robustness as in [4,8].\\n\\n# References\\n[1] Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors\\n[2] Overpruning in Variational Bayesian Neural Networks\\n[3] A Unified Particle-Optimization Framework for Scalable Bayesian Sampling\\n[4] Noisy Natural Gradient as Variational Inference\\n[5] Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam\\n[6] Bayesian Uncertainty Estimation for Batch Normalized Deep Networks\\n[7] Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification\\n[8] Function Space Particle Optimization for Bayesian Neural Networks\\n[9] Can You Trust Your Model\\u2019s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift\"}"
]
} |
r1laNeBYPB | Memory-Based Graph Networks | [
"Amir Hosein Khasahmadi",
"Kaveh Hassani",
"Parsa Moradi",
"Leo Lee",
"Quaid Morris"
] | Graph neural networks (GNNs) are a class of deep models that operate on data with arbitrary topology represented as graphs. We introduce an efficient memory layer for GNNs that can jointly learn node representations and coarsen the graph. We also introduce two new networks based on this layer: memory-based GNN (MemGNN) and graph memory network (GMN) that can learn hierarchical graph representations. The experimental results show that the proposed models achieve state-of-the-art results in eight out of nine graph classification and regression benchmarks. We also show that the learned representations could correspond to chemical features in the molecule data.
| [
"Graph Neural Networks",
"Memory Networks",
"Hierarchial Graph Representation Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=r1laNeBYPB | https://openreview.net/forum?id=r1laNeBYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9r9v4rlI3Z",
"SkgoeVO2jS",
"Hkgk-7wqoH",
"BJlnBpQtir",
"Byxl0jmYoS",
"HkgA4sXKjS",
"H1gxH9XKiS",
"BJleWuQFjB",
"BygMPvXtjH",
"BJeuzS4ucB",
"B1xaBdyN9S",
"rJgM2hZeqB",
"B1xbJPHpYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744726,
1573843954941,
1573708535452,
1573629252316,
1573628872242,
1573628725650,
1573628471604,
1573627895833,
1573627737845,
1572517136147,
1572235332803,
1571982506075,
1571800793352
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2266/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2266/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2266/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2266/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2266/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2266/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2266/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2266/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2266/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2266/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2266/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2266/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Four reviewers have assessed this paper and they have scored it as 6/6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks for the insightful feedback.\"}",
"{\"title\": \"Updated review based on rebuttal and other reviews\", \"comment\": \"Thanks for the clarification to my questions. Based on the authors' reply and other reviews, I have changed my rating to \\\"Weak accept\\\". Also updated the review above.\"}",
"{\"title\": \"Paper revised to address the reviews\", \"comment\": \"First of all, we would like to thank the reviewers for their insightful and detailed comments.\", \"we_revised_the_paper_to_address_these_feedbacks_as_follows\": \"1- We throughly proof-read the paper.\\n2- We added more details to Section 3 (i.e., method).\\n3- We added two more datasets and showed that we achieve state-of-the-art results on them too.\\n4- We cited the suggested papers.\\n5- We revised the paper to address the missing details in all sections including Appendix.\\n\\nFinally, We removed baseline results of [1] from Table 1 because we noticed they are using an augmented version of the initial features for all datasets. We and the other baselines report the results based on the following initial features:\\nIf dataset has initial node attributes, use them as the initial node features.\\nIf dataset does not have initial node attributes but contains node labels, use the one-hot representation of the node labels as the initial node features.\\nOtherwise, use a fixed length vector of ones as the initial node features.\\nWhereas in [1], they concatenate initial node attributes, node labels, node degree, and deterministic clustering coefficients to construct the initial node features.\\n\\nReferences\\n[1] Bianchi et al. Mincut pooling in Graph Neural Networks. Arxiv 2019.\"}",
"{\"title\": \"Response to Review #3 - part 1\", \"comment\": [\"We thank the reviewer for the insightful review and comments on our work! We address the questions and comments as follows:\", \"We revised section 3 to add the requested details. In short: the keys are initialized randomly and then are updated epoch-wise w.r.t the unsupervised loss (i.e., KL divergence). For the initialization, we also tried preloading the memory (i.e., similar to [1]) using centroids computed by K-Means over the initial node embeddings to warm-start them but we did not observe any significant improvement over the randomly selected keys. We optimize all model parameters except the keys in each batch w.r.t Eq. (12). That means parameters are updated by the gradients of the cross-entropy loss in each batch and also are updated by the gradient of the KL divergence at the end of each epoch. Keys, on the other hand, are only updated at the end of each epoch by the gradient of the KL divergence. Updating the centroids with the same frequency as the network parameters can destabilize the training [2, 3, 4]. This is why we update them epoch-wise.\", \"We stack the matrices to form a tensor of size $h \\\\times n_l \\\\times n_{l+1}$ where $h$ is the number of heads (i.e., depth in standard convolution analogy), and $n_l \\\\times n_{l+1}$ is the size of the cluster assignment matrix $C$ (i.e., height and width in standard convolution analogy). In other words, we treat each head as a separate channel. As you mentioned, because there is no spatial structure, we use $[1 \\\\times 1]$ convolution to aggregate the $C_{j,j}$s across channels and therefore the convolution behaves as a weighted pooling that reduces the heads to one head. We then pass the aggregated matrix to a softmax function that is applied in a row-wise fashion. We updated the paper to address this by changing Eq. (2) and explaining the convolution in more details.\", \"We added the results on Reddit-binary graph classification benchmark (i.e., predicting community type in Reddit discussions) to the revised paper. The GMN model archives state-of-the-art accuracy of 86.39% on this dataset. Per request from Reviewer #2, we also added the results on Tox21 graph classification benchmark. The results of these two datasets are reported in Appendix 3, Tables 6 and 7, respectively.\", \"We speculate the DD dataset is unique among the studied benchmarks as it relies more on local information. This is explicitly shown in [11] where the authors conclude that the most important features to train an SVM on this dataset are surface features which have local behavior. This is why the GMN model which pays more attention to global information is outperformed by our MemGNN model which captures local interactions through message passing. We added this to the revised paper (conclusion section).\", \"We agree that in work such as Neural Turing Machines [5], Memory Networks [6], and Sparse Access Memory [7], a memory component is explicitly defined as a decoupled unit from the model parameters with soft read/write access. However, in most recent work such as Key-Value Memory Layers [8], the memory layer is treated as another layer interleaved and stacked with other standard layers such as self-attention or FFN layers. 
Following this line of inquiry, we also call the layer as a memory layer because although it is a feed-forward layer but it introduces key-value like memory to the network.\", \"Regarding [9], we can also formulate our memory layer with matrix representation as it is explicitly mentioned in the paper: \\u201cIndeed, several memory augmented networks such as Neural Turing Machine \\u2026. can be formulated in similar ways with P being the external memory and U is the collection of read heads.\\u201d This work is suggesting a general improvement to deep models (feed-forward, attention, memory, recurrent, etc.) by using matrix representation instead of vector representation. We, on the other hand, are specifically introducing a memory layer that can be stacked with other standard layers and emulates a key-value memory. That said, we welcome any naming suggestions for this layer.\"]}",
"{\"title\": \"Response to Review #3 - part 2\", \"comment\": \"- We tried to address this in Section 1. Specifically, the initial GNNs were mostly based on RNNs that suffered from high computational overhead. The GCNs came next and were more successful. However, they use: (1) structure-dependent aggregation, and (2) use a predetermined normalization constant (e.g., it is proportional to node degrees in original GCN model). Attention-based GNNs on the other hand learn the contribution of neighbors and achieve best performance on node representation learning. These models, however, are inherently flat and hence do not exploit the hierarchical structures within the graphs. And because of this reason they perform poorly in graph classification task in which they use simple arithmetic pooling to directly compute the graph embedding from all node embeddings. To address this, recent work introduced end-to-end pooling layers by defining the pooling layer as another neural network. All this work places focuses only on local information in which information is propagated via message passing. We, on the other hand, let the attention mechanism decide each node should attend to which nodes and we do not constraint the node selection to any explicit neighborhood definition. We speculate this results in stronger graph embedding compared to previous work (i.e., similar effect is observed when comparing Transformers to RNNs such as LSTMs). Please note that a good review paper explaining the evolution of GNNs can be found in [10].\\n\\n\\nReferences\\n[1] Felix Hill et al. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. arXiv 2015.\\n[2] Kaveh Hassani et al. Unsupervised multi-task feature learning on point clouds. ICCV 2019.\\n[3] Junyuan Xie et al. Unsupervised deep embedding for clustering analysis. ICML 2016.\\n[4] Mathilde Caron et al. Deep Clustering for Unsupervised Learning of Visual Features. ECCV 2018.\\n[5] Alex Graves et al. Neural turing machines. arXiv 2014.\\n[6] Jason Weston et al. Memory networks. ICLR 2015.\\n[7] Jack Rae et al. Scaling memory-augmented neural networks with sparse reads and writes. NeurIPS 2016.\\n[8] Guillaume Lample et al. Large Memory Layers with Product Keys. arXiv 2019.\\n[9] Kien Do et al. Learning deep matrix representations. arXiv 2017.\\n[10] Zonghan Wu et al. A Comprehensive Survey on Graph Neural Networks. arXiv 2019.\\n[11] Paul D. Dobson et al. Distinguishing Enzyme Structures from Non-enzymes Without Alignments. J. Mol. Biol. .2003\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We would like to thank the reviewer for insightful and detailed comments. Our responses are as follows:\\n\\n- To be clear, there is a difference between decoupling the training and training jointly but with different update frequencies. We optimize the keys w.r.t the unsupervised loss and optimize the queries w.r.t both supervised and unsupervised losses. We simply do this to stabilize the training. Updating the cluster centroids with the features at the same time will lead to trivial solutions [2, 3]. Also, note that there is a strong interaction between keys (updated by unsupervised gradients in each epoch) and queries (updated by unsupervised gradients in each epoch and by supervised gradients in each batch). Every time that keys are updated, it affects the update of the queries and decoupling will ignore this interaction. \\n\\nIf one were to train in two stages, as suggested by the reviewer: (1) the results would be sub-optimal as features and centroids are not jointly learned (i.e., see [1,2,3,4]) and (2) doing so would not allow hierarchical representation learning. For example, say there were two levels of hierarchy. If we decouple the training, we will need four stages of training: first learn node embeddings, then apply the clustering, learn the node embeddings on the new graph, cluster again. And because we limit the interaction between parameters by layer-wise pre-training, the learned features will be sub-optimal. Our integrated approach avoids this.\\n\\nIn regards to novelty, we respectfully disagree. We introduce a memory-layer that learns representation and coarsen the graph at the same time. As far as our knowledge is concerned, ours is the first algorithm to do that. We also show that message passing is not required for learning good representation. More importantly, we show that allowing the network to decide the importance of the nodes and the neighborhood achieves better results compared to confining it to explicit neighborhoods.\\n\\n- Eq. (1) and (3) are correct. The C_{i,j} soft assignment matrix is normalized row-wise to represent the probability of a query belonging to each centroid or key. These probabilities should add to one. However, there is no need for column-wise normalization since some important keys might gather large portion of information from a few queries (i.e., summation greater than one) but weak keys might collect near zero information from the queries (i.e., summation smaller than one). Similar formulation is used in baselines [5]. \\n\\nThanks for noticing this. It is an oversight on our end. We corrected Eq. (4) as follows: \\n$$\\\\textbf{V}^{(l)}= \\\\textbf{C}^{(l)\\\\top}\\\\textbf{Q}^{(l)} \\\\in \\\\mathbb{R}^{n_{l+1} \\\\times d_{l}}$$\\nWhere $\\\\textbf{W} \\\\in \\\\mathbb{R}^{d_{l} \\\\times d_{l+1}}$.\\nWe also found the same mistake with Eq. (6) and corrected it as follows:\\n$$\\\\textbf{Q}^{(0)}=\\\\text{LeakyReLU} \\\\left(\\\\left[\\\\text{LeakyReLU}(\\\\textbf{AW}_0) \\\\parallel \\\\textbf{X}\\\\right]\\\\textbf{W}_1\\\\right)$$\\nwhere $\\\\textbf{W}_0 \\\\in \\\\mathbb{R}^{n \\\\times d_{in}}$ and $\\\\textbf{W}_1 \\\\in \\\\mathbb{R}^{2d_{in} \\\\times d_0}$.\\n\\n- We revised section 3 and added the requested details. The keys are initialized randomly and are updated epoch-wise w.r.t the unsupervised loss (i.e., KL divergence). 
For the initialization, we also tried preloading the memory (i.e., similar to [1]) using centroids computed by K-Means over the initial node embeddings to warm-start them but did not observe any improvement over the randomly selected keys. We optimize all model parameters except the keys in each batch w.r.t Eq. (12). That means parameters are updated by the gradients of the cross-entropy loss in each batch and also are updated by the gradients of the KL divergence at the end of each epoch. Keys, on the other hand, are only updated at the end of each epoch by the gradients of the KL divergence. Updating the centroids with the same frequency as the network parameters can destabilize the training [2, 3, 4]. That is the reason why we update them epoch-wise.\\n\\n- Yes, and it has been shown by others as well that it can be optimized end-to-end using mini-batch SGD [1].\\n\\n- We added a hyper-parameter subsection to the revised paper. The hyper-parameters for number of keys, the number of heads, number of layers, the size of the hidden dimension, and the batch size for each and every benchmark is reported in Appendix 2, Table 3. Also, we investigate the importance of the parameters in section 4.3.\\n\\n- We cited the suggested paper in the revised paper.\\n\\nReferences\\n[1] Junyuan Xie et al. Unsupervised deep embedding for clustering analysis. ICML 2016.\\n[2] Kaveh Hassani et al. Unsupervised multi-task feature learning on point clouds. ICCV 2019.\\n[3] Mathilde Caron et al. Deep Clustering for Unsupervised Learning of Visual Features. ECCV 2018.\\n[4] Elie Aljalbout et al. Clustering with Deep Learning: Taxonomy and New Methods. arXiv 2018.\\n[5] Zhitao Ying et al. Hierarchical graph representation learning with differentiable pooling. NeurIPS 2018.\"}",
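The corrected Eq. (4) and the row-wise normalization of C discussed above can be summarized in a short sketch. The dot-product query-key scores here are an assumption for illustration; the paper's similarity kernel and any per-layer projections may differ.

```python
import torch

n_l, n_next, d = 100, 10, 64     # nodes, keys (cluster centroids), feature dim
Q = torch.randn(n_l, d)          # node queries
K = torch.randn(n_next, d)       # learned memory keys

# Row-wise softmax: each query's soft assignment over keys sums to one, while
# columns are deliberately left unnormalized, as argued in the response above.
C = torch.softmax(Q @ K.t(), dim=-1)   # (n_l, n_next)
V = C.t() @ Q                          # corrected Eq. (4): (n_next, d)
```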
"{\"title\": \"Response to Review #2\", \"comment\": \"We would like to thank the reviewer for the thoughtful comments. We address them as follows:\\n\\n- We added the results on Tox21 graph classification benchmark from [1] to the revised paper. We achieve a state-of-the-art AUC-ROC of 0.828 on this dataset. Per request from Reviewer #3, we also added the results on Reddit-binary graph classification benchmark (i.e., predicting community type in Reddit discussions). The GMN \\nmodel archives state-of-the-art accuracy of 86.39% on this dataset.The results of these two datasets are reported in Appendix 3, Tables 6 and 7, respectively. \\n\\n- We added the error bars in Tables 2, 5, and 7 in the revised paper. These include baselines reported in [1] which contain well-documented error bars. For the other three classification baselines we did not add the error bars because they are not reported in the baseline papers.\\n\\n- Thanks for noticing the typo. We proof-read the paper in the revised version.\\n\\nReferences\\n[1] Wu, et al.. Moleculenet: a benchmark for molecular machine learning. Chemical science 2018.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thank you for your supportive comments. We would like to address the points as follows:\\n\\n1- We revised section 3 to add the requested details. In short: the input query is shared among the key heads. Therefore applying an input query to a layer with for example 10 key heads results in 10 assignment matrices (i.e., C). We stack the matrices to form a tensor of size $h \\\\times n_l \\\\times n_{l+1}$ where $h$ is the number of heads (i.e., depth in standard convolution analogy and 10 in this case), and $n_l \\\\times n_{l+1}$ is the size of the cluster assignment matrix $C$ (i.e., height and width in standard convolution analogy). In other words, we treat each head as a separate channel. Because there is no spatial structure, we use $[1 \\\\times 1]$ convolution to aggregate the $C_{j,j}$s across channels and therefore the convolution behaves as a weighted pooling that reduces the heads to one head. We then pass the aggregated $C$ to a non-linearity.\\n\\n2- We added this work to section 2. Similar to this work, we use \\u201ccontent addressable memory\\u201d and use Student\\u2019s t-distribution as a kernel to measure the similarity. However, because the number of memory words is fairly small in our case (i.e., number of cluster centroids), we do not in practice need to apply the tricks mentioned in this paper for faster retrieval and updates. For example, in a few datasets that we experimented on, the maximum number of keys in each array were 32 which can be updated efficiently without requiring using approximate nearest neighbor to select the top keys to be updated.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a memory layer for Graph Neural Networks (GNNs) and two deep models for hierarchical graph representation learning. The proposed memory layer models the memory as a multi-head array of keys with a soft clustering mechanism and applies a convolution operator over the heads. The proposed models are experimentally evaluated on seven graph classification and regression tasks.\\n\\nGenerally, the paper is technically justified. The proposed technique is well motivated and properly presented. A novel clustering-convolution mechanism is proposed for memory augmentation and graph pooling. However, there are still some rebuttal requests. 1- Some details are insufficient. For the multi-head mechanism, it is not stated clearly whether for each head an independent query is computed or a shared query is used for all heads. \\n2- Additionally, a related work published in NIPS 2016 should be cited and discussed. \\nJack Rae et al. Scaling memory-augmented neural networks with sparse reads and writes.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors introduce a method for adding memory layers to a graph neural network which can be used for representation learning and pooling. Two variants, the MemGNN and GMN are proposed which use the memory layer. The authors evaluate their models over 7 datasets that cover classification and regression tasks. They obtain SOTA on 6/7 datasets; perform ablation analysis and introspect the clusters for chemical significance.\\n\\nOverall this paper is well written and easy to read. The motivation, equations and illustrations are clear and helpful. The model is technically novel, building up from existing approaches in a progressive way. Given the generality of the approach, the impact is also likely to be high.\\n\\nIn order to bolster their results the authors may run their approach on a few other datasets in Wu et. al. 2018.\", \"minor_issues\": [\"Provide error bars for the tables\", \"Sec 4.2 typo : \\u201cdatastes\\u201d\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to use memory network (self-attention) to summarize information from graphs. The proposed approach \\\"soft clusters\\\" the node embeddings into coarser representation and eventually fed to a final MLP for classification. The empirical results on some standard datasets show promising gains of the algorithm.\\n\\nThe proposed approach stacks a few layers of self-attention on top of either some features of the nodes (including projected edge connections, parametrized by W_0) or the node embeddings of some form of graph neural network. And this stacking seems to be simple combination of existing approaches, without fully integrating them. In fact, the training process is also separated: \\\"task-specific loss are back-propagated batch-wise while the gradients of the unsupervised loss are applied\\nepoch-wise\\\". It makes me wonder whether we can just separate it into two stages, i.e., first learn a node embedding using graph neural network, then learn this self attention transformation. Due to the above issues, I feel the novelty of the approach is limited and incremental.\\n\\nAnother issue with the paper is that the notations seem to be messed up and some concepts are not explained clearly. For example, the C_{i,j} soft assignment matrix is normalized row-wise, then Eqn (3) seems very suspicious, because it averages the queries using weights along the other direction, thus not normalized. Also, the dimension of the MLP weights do not align well with inputs, for instance in Eqn (4), it should be written as V^(l) W.\\n\\nThere are more questions that are not clearly specified in the current manuscript. For example, where does the keys K come from? From the text description, it seems to be cluster results, and do you do the clustering on every gradient update? Or are they learned from scratch? The distribution P defined in Eqn (11) also seems to be difficult to optimize since it depends on C_{i,j} and is connected to different entries. Is simple SGD sufficient to optimize over P? Moreover, in the experiment section, it is unknown how many layers of self-attention is applied and what are the important parameters. For better comparison, the experiment section should include some estimate of parameter size as well.\\n\\nThe experiment results seem interesting since the approach indeed achieves good performance across many datasets. Also the visualized keys are interesting as well because it captures some meaningful patterns from the data.\", \"there_is_some_related_work_that_you_should_cite\": \"Hanjun Dai, Bo Dai and Le Song. Discriminateive Embeddings of Latent Variable Models for Structured Data. International Conference on Machine Learning (ICML) 2016.\\n\\n\\n============================================================\\nBased on the authors' reply and other reviews, I have changed my rating to \\\"Weak accept\\\".\\n\\nThe authors' reply has clarified the important detail of how the key matrix is learned and now it is clear that the algorithm is not just two-stage separate learning. In light of this clarification, I think the proposed algorithm is novel enough and the jointly training mechanism is also beneficial for the state-of-the-arts results reported in the experiments.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents \\\"memory layer\\\" to simultaneously do graph representation learning and pooling in a hierarchical way. It shares the same spirit with the previous models (DiffPool and Mincut pooling) which cluster nodes and learn representation of the coarsened graph. In DiffPool, Graph convolutional Neural Networks (GCNs) with 2-6 iterations of \\u201cMessage passing\\u201d is used to learn node embedding, followed by graph pooling. By contrast, the proposed model circumvent the inefficiency of using message passing by using their proposed memory layer.\", \"pros\": [\"The paper is generally written well, but some important details are missing.\", \"Clear visualization of the results.\", \"Useful ablation study.\"], \"cons\": \"* Missing information, which can be critical for the success of the model:\\n(a) the estimation of the keys, and \\n(b) how does the convolutional layer in Eq (2) work, given that the input for it is concatenation of matrices, which has no spatial structure?\\n* Experiments on graph classification lack diversity, where CoLLAB is the only non-chemical dataset in the experiment. \\n* The paper argues that interactive message passing is not efficient. But do you have any explanation on why MemGNN with message passing in initial embedding learning performs better than GMN without message passing in D&D dataset?\\n* I have some reservation for calling something \\\"memory\\\", which is meant to store information for later processing. For this work, the network is a feed-forward architecture for processing graphs, where the middle layers (the queries) are matrices, which can be studied on their own right (e.g., see [1]).\\n\\nAt this point, the ideas for graph representation are plentiful, but there have not been a coherent story on how and why new architectures should work better than previous ones. This paper can be made stronger by offering insights along this line.\\n\\n[1] Do, K., Tran, T., & Venkatesh, S. (2017). Learning deep matrix representations. arXiv preprint arXiv:1703.01454.\"}"
]
} |
Hkx3ElHYwS | GQ-Net: Training Quantization-Friendly Deep Networks | [
"Rundong Li",
"Rui Fan"
] | Network quantization is a model compression and acceleration technique that has become essential to neural network deployment. Most quantization methods perform fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network. We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning. Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings. Compared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% (Louizos et al., 2019) and a full precision reference accuracy of 69.76%. We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy. Our codebase and trained models are available on GitHub. | [
"Network quantization",
"Efficient deep learning"
] | Reject | https://openreview.net/pdf?id=Hkx3ElHYwS | https://openreview.net/forum?id=Hkx3ElHYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6asfdkLJ22",
"Skl9Y2InoS",
"BJgS_zI3sr",
"rklRvKlniH",
"BkejVFl2oB",
"H1lrw8ghoH",
"rkx1gIx2ir",
"Bygh9Wl2jB",
"Hkgpu1e2iB",
"HJlVa_8WcB",
"Hye6gdYCYr",
"SkeVhAmjYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744698,
1573837953580,
1573835372932,
1573812582441,
1573812531291,
1573811805107,
1573811686933,
1573810580201,
1573810037111,
1572067515898,
1571882996700,
1571663532142
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2264/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2264/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2264/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2264/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2264/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2264/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2264/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2264/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2264/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2264/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2264/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper propose a new quantization-friendly network training algorithm called GQ (or DQ) net. The paper is well-written, and the proposed idea is interesting. Empirical results are also good. However, the major performance improvement comes from the combination of different incremental improvements. Some of these additional steps do seem orthogonal to the proposed idea. Also, it is not clear how robust the method is to the various hyperparameters / schedules. For example, it seems that some of the suggested training options are conflicting each other. More in-depth discussions and analysis on the setting of the regularization parameter and schedule for the loss term blending parameters will be useful.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for the detailed response\", \"comment\": \"I appreciate the detailed reply of the authors. Overall, the new results make me feel more positive about this work, but I decided to keep my rating the same. This is primarily for two reasons; While I can understand that some of the additional steps can be understood as part of the main method (eg detaching the gradients), other steps do seem orthogonal. The weight scheduling for example can be employed even for STE type of approaches, as one can similarly gradually apply the quantization operator. The method overall seems to have quite a few moving moving parts and I am not entirely sure how robust it is to the hyper parameters / schedules.\\n\\nFurthermore, the authors argued that the gradients are more accurate due to having the extra loss term for the floating point model. In general this is not true, as at the end of the day we care about the quantized model and not the full precision one (as R3 also pointed out). Finally, one should be careful in the interpretation of the results with unquantized batchnorm; absorbing the scales and offsets changes the marginal distribution of the weights and biases, hence can introduce errors to the subsequent quantizer (as the model was optimized under a specific distribution).\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"I appreciate the authors' responses that clarified some of my questions. The responses elaborated the arguments made in the original draft, while they do not fully resolve the fundamental issues. For example, the I wouldn't say the gradient from the first loss term is more accurate, as it's using full precision, which is \\\"different\\\" from the test environment where reduced precision has to be used. They also suggest a few potential solutions, while the revised version doesn't really contain those ideas.\"}",
"{\"title\": \"Response to Review #3 (2/2)\", \"comment\": \"Q4: About the interpretation of ablation study?\", \"a\": \"Thank you for pointing this out. We call our network GQ-Net, for guided quantization. We have fixed the typo throughout the paper.\", \"q5\": \"What\\u2019s the name of the proposed network?\"}",
"{\"title\": \"Response to Review #3 (1/2)\", \"comment\": \"Thank you for your detailed review and comments. Following are our renposes to each of your concerns:\", \"q1\": \"The two loss terms conflict each other. The paper will benefit from a more in-depth discussion and analysis on this regularization issue.\", \"a\": \"We used a standard learning rate schedule for ResNet-18, and also tied the weight schedule for the accuracy and quantizability losses to the learning rate schedule to let the model optimize first for accuracy and then for quantizability, as described above. We experimented with some other learning rate schedules, but found this schedule resulted in the most stable training and also good accuracy.\", \"q2\": \"The schedule for the loss term blending parameters looks drastic. What is the benifit compreing with two-step quantization finetune?\", \"q3\": \"About the learning rate schedule in the experiments?\"}",
"{\"title\": \"Response to Review #1 (2/2)\", \"comment\": \"Q2: The paper could benefit from conducting additional experiments on different datasets and bitwidth configurations.\", \"a\": \"We compare to GQ-Net to two recent binarized networks, DoReFa-Net [3] and HWGQ [4], both in the 1 bit weights / 4 bit activations configuration. DoReFa-Net has an accuracy of 59.2%, and HWGQ has an accuracy of 59.6% using the ResNet-18 architecture on Imagenet. We did not test our network in the 1/4 configuration, and we believe it may not perform as well as DoReFa-Net and HWGQ in this very stringent regime. However, we note that DoReFa-Net performs the first and last layers in floating point, whereas our network is fully quantized and thus runs using only fixed point hardware. Also, HWGQ also performs the first and last layers in floating point, and furthermore uses nonuniform quantization, which requires dedicated hardware to run.\\n\\n[3] DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients, Zhou et al.\\n[4] Deep Learning with Low Precision by Half-wave Gaussian Quantization, Cai et al.\", \"q3\": \"Citations and comparisons to more recent binarized networks other than XNOR-Net.\"}",
"{\"title\": \"Response to Review #1 (1/2)\", \"comment\": \"Thank you for your detailed review and comments. Following are our renposes to each of your concerns:\", \"q1\": \"Most of the performance improvement comes from a combination of add-on improvements, contribution from the main idea is thus weaken?\", \"a\": \"Thank you for your detailed review and comments. Regarding the optimizations in Section 3, we argue that several of these techniques are part of GQ-Net quantization framework itself, and not orthogonal heuristics which are added to improve performance. In particular, we show that weight scheduling, detaching gradients and multidomain batch normalization arise naturally and in a principled way when optimizing GQ-Net. We also justify the use of alternating training and learned quantizers, and describe a setting where they are not needed.\\n\\n- Multidomain BN: GQ-Net essentially optimizes two models at the same time, namely the full precision and quantized models. Since these models have substantially different statistics, we use different BN moving averages for each. This approach parallels traditional fine-tuning based quantization, where the pre-trained full precision model uses one set of BN moving averages, and the fine-tuned quantized model uses a different set of BN values produced on the basis of the first. \\n\\n- Dynamic weight scheduling: GQ-Net tries to find a model that is both accurate and easily quantizable. However, during training it must optimize the first objective before the second, as otherwise it may produce a floating point model which is similar to its quantized version, but where both models are inaccurate. To prioritize initially for accuracy, we can use a simple schedule where both objectives are weighted equally. Since the accuracy loss and gradients are both initially larger than the quantizability loss and gradients, this schedule has the effect of prioritizing for accuracy. We can prioritize accuracy further by removing the quantizability loss for a few epochs at the start of training and also each time we change the learning rate. At these time points, as commonly observed, the full precision model has the opportunity to significantly improve its accuracy, and so the schedule focuses on this objective while temporarily ignoring quantizability.\\nWe can also try to produce a good weight schedule automatically. For example, during training we can dynamically set the weights to equalize the two loss terms in Equation 1. We will study automatic weight scheduling more in future work. \\n\\n- Detached gradients: This helps reduce interference between the accuracy and quantizability losses. If gradients from the quantizability loss directly propagate to the weights (i.e. if there was an orange arrow from $\\\\mathcal{L}_q$ to $x_L$ in Figure 1), this may lead to weight changes which improve quantizability but decrease accuracy. Detaching gradients somewhat reduces this effect, and encourages weights and theta parameters to change to improve quantizability while maintaining accuracy.\\n\\n- Alternating training for W and theta: Since both W and theta affect quantization, training them jointly may cause interference. It may be possible to train W and theta jointly by using different learning rates for each set of parameters (as done in some works on trained quantizers, eg PACT [1] and LIQ [2]), but this requires a careful selection of learning rates. We found alternating training to be simpler and equally or more effective. 
\\n\\n- Learned quantizers: Learned quantizers are orthogonal to the main idea of GQ-Net. We tested its importance to GQ-Net by removing it, while still using multidomain BN, dynamic weight scheduling and detached gradients, which we argued above were core components of GQ-Net. Our accuracy decreased by 3.59% to 63.09%. However, this still exceeds RelaxedQuant\\u2019s accuracy of 61.52% in the 4/4 configuration. Furthermore, we note that RelaxedQuant itself uses learned quantization.\\n\\nIn addition, we performed a 5 bit quantization experiment without learned quantizers (or alternating training), but using multidomain BN, weight scheduling and detached gradients. This achieved 67.6% accuracy, which is higher than Integer-only (64.64%) and RelaxedQuant\\u2019s accuracy (65.1%) in the 5/5 setting.\\n\\n[1] PACT: Parameterized clipping activation for quantized neural networks, Choi et al.\\n[2] Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss, Jung et al.\"}",
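The dual-objective training described in this response (an accuracy loss on the full-precision branch plus a KL-based quantizability loss with detached gradients) can be sketched as follows. This is a minimal illustration rather than the authors' exact implementation: `forward_fp` and `forward_q` are placeholder callables assumed to share the same weights (with `forward_q` applying simulated quantization), and the dynamic weight schedule for `w_f`/`w_q` is omitted.

```python
import torch.nn.functional as F

def gq_loss(forward_fp, forward_q, x, labels, w_f=1.0, w_q=1.0):
    # Accuracy loss L_f on the full-precision branch.
    logits_fp = forward_fp(x)
    loss_f = F.cross_entropy(logits_fp, labels)

    # Quantizability loss L_q: KL(p_fp || p_q). Detaching the
    # full-precision logits stops L_q gradients from flowing back
    # through the full-precision branch, per the "detached gradients"
    # component discussed above.
    logits_q = forward_q(x)
    p_fp = F.softmax(logits_fp.detach(), dim=-1)
    log_p_q = F.log_softmax(logits_q, dim=-1)
    loss_q = F.kl_div(log_p_q, p_fp, reduction="batchmean")

    return w_f * loss_f + w_q * loss_q
```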
"{\"title\": \"Response to Review #2 (2/2)\", \"comment\": \"Q2: Do you employ the straight-through estimator (STE) for the weights in the $\\\\mathcal{L}_q$ objective?\", \"a\": \"Thank you for pointing this out. Figure 2b is correct, but the caption is unclear. The orange values in Figure 2b are the outputs after convolution with 4-bits quantized weights and input in a particular layer (we arbitrarily chose layer layer2.0.downsample.0 in ResNet-18, but all layers behave similarly). These values will then be passed through the batch norm and the nonlinearity function of the layer before finally being quantized, thus the histogram has more than 16 bins. We have revised the caption in Figure 2b to make this clear.\", \"q3\": \"How is batch normalization handled?\", \"q4\": \"How do you ensure and ub > lb when you learn the quantizer? Is this better than learning quantization scale and bias terms?\", \"q5\": \"About the distribution illustrated in Figure 2b?\"}",
"{\"title\": \"Response to Review #2 (1/2)\", \"comment\": \"Thank you for your detailed review and comments. Following are our renposes for each of your concerns:\", \"q1\": \"Whether the boost in performance is due to the several additional steps employed, and not due to the main idea itself?\", \"a\": \"Thank you for your detailed review and comments. As Review #1 made a similar comment, our response here is similar to our response for Review #1.\\n\\nWe argue that several of the techniques described in Section 3 are part of GQ-Net quantization framework itself, and not orthogonal heuristics which are added to improve performance. In particular, we show that weight scheduling, detaching gradients and multidomain batch normalization arise naturally and in a principled way when optimizing GQ-Net. We also justify the use of alternating training and learned quantizers, and describe a setting where they are not needed. \\n\\n- Multidomain BN: GQ-Net essentially optimizes two models at the same time, namely the full precision and quantized models. Since these models have substantially different statistics, we use different BN moving averages for each. This approach parallels traditional fine-tuning based quantization, where the pre-trained full precision model uses one set of BN moving averages, and the fine-tuned quantized model uses a different set of BN values produced on the basis of the first. \\n\\n- Dynamic weight scheduling: GQ-Net tries to find a model that is both accurate and easily quantizable. However, during training it must optimize the first objective before the second, as otherwise it may produce a floating point model which is similar to its quantized version, but where both models are inaccurate. To prioritize initially for accuracy, we can use a simple schedule where both objectives are weighted equally. Since the accuracy loss and gradients are both initially larger than the quantizability loss and gradients, this schedule has the effect of prioritizing for accuracy. We can prioritize accuracy further by removing the quantizability loss for a few epochs at the start of training and also each time we change the learning rate. At these time points, as commonly observed, the full precision model has the opportunity to significantly improve its accuracy, and so the schedule focuses on this objective while temporarily ignoring quantizability. \\nWe can also try to produce a good weight schedule automatically. For example, during training we can dynamically set the weights to equalize the two loss terms in Equation 1. We will study automatic weight scheduling more in future work. \\n\\n- Detached gradients: This helps reduce interference between the accuracy and quantizability losses. If gradients from the quantizability loss directly propagate to the weights (i.e. if there was an orange arrow from $\\\\mathcal{L}_q$ to $x_L$ in Figure 1), this may lead to weight changes which improve quantizability but decrease accuracy. Detaching gradients somewhat reduces this effect, and encourages weights and theta parameters to change to improve quantizability while maintaining accuracy.\\n\\n- Alternating training for W and theta: Since both W and theta affect quantization, training them jointly may cause interference. It may be possible to train W and theta jointly by using different learning rates for each set of parameters (as done in some works on trained quantizers, e.g. PACT [1] and LIQ [2]), but this requires a careful selection of learning rates. 
We found alternating training to be simpler and equally or more effective.\\n\\n- Learned quantizers: Learned quantizers are orthogonal to the main idea of GQ-Net. We tested its importance to GQ-Net by removing it, while still using multidomain BN, dynamic weight scheduling and detached gradients, which we argued above were core components of GQ-Net. Our accuracy decreased by 3.59% to 63.09%. However, this still exceeds RelaxedQuant\\u2019s accuracy of 61.52% in the 4/4 configuration. Furthermore, we note that RelaxedQuant itself uses learned quantization.\\n\\nIn addition, we performed a 5 bit quantization experiment without learned quantizers (or alternating training), but using multidomain BN, weight scheduling and detached gradients. This achieved 67.6% accuracy, which is higher than Integer-only (64.64%) and RelaxedQuant\\u2019s accuracy (65.1%) in the 5/5 setting.\\n\\n[1] PACT: Parameterized clipping activation for quantized neural networks, Choi et al.\\n[2] Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss, Jung et al.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors propose a framework towards 4-bit auantization of CNNs. Specifically, during training, the proposed method contains a full precision branch supervised by classification loss for accurate prediction and representation learning, as well as a parameterized quantization branch to approximate the full precision branch. A quantization loss between the full precision branch and the quantization branch is defined to minimize the difference between activation distributions. The authors proposed a series of improvements, including alternative optimization, dynamic scheduling, detach and batch normalization to help boosting the performance to SOTA under 4-bit quantization.\", \"strengths\": [\"Well-written paper with good clarity and technical correctness.\", \"Proposed method seems light, sweet and technically correct.\", \"Good experimental performance and result on ImageNet.\", \"Good and clear ablation study.\"], \"weaknesses\": [\"Major performance improvement comes from the combination of different incremental improvements.\", \"Lack of evaluations with variety of datasets (CIFAR-10/MNIST)/configurations (other bitwidth)\", \"Lack of the citation and comparison to many most recent works on binarized networks (except XNOR-Net)\"], \"comments\": \"I consider this a well-written paper with great clarity and good empirical performance. I enjoyed reading the paper. The proposed framework seems technically correct and effective. \\n\\nHowever, a major weakness of this work is that most of the performance improvement comes from a combination of add-on improvements, except that the authors put them together into a unified framework and explained elegantly. The vanilla architecture, which is a main contribution and described in Fig. 1, doesn't seem to give that significant improvement. To some extent, the real technical contributions of this work are partly weakened given the add-on combinations and the existence of similar methods. For example, the alternative optimization of W and \\\\theta is similar to alternative re-training in network pruning, although a unified loss/optimization framework is applicable in this case. Others such as dynamic scheduling and gradient detach are also heuristic-driven.\\n\\nThe results on ImageNet under 4-bit quantization are strong and convincing, but the paper could benefit from conducting additional experiments on different datasets and bitwidth configurations. A more comprehensive study similar to Louizos et al., 2019 will be great. Citations and comparisons to more recent binarized networks other than XNOR-Net will be appreciated too.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The paper propose a new quantization-friendly network training algorithm called GQ (or DQ) net. I addresses the existing issues in the common paradigm, where a floating-point network is trained first, followed by a second-phase training step for the quantized version. It is a well-written paper. Concepts were clearly explained and easy to follow. Below I present my comments about some details in the paper that were not entirely clear for me.\", \"The two loss terms conflict each other. If the training algorithm focuses too much on the first term, it will make the network less friendly to the quantization process. On the other hand, the second one is going to enforce too much emphasis on the accuracy from the quantized network. It is natural to involve some hyperparameter search to find the balance between the two blending parameters. The paper suggests a strategy as to how to handle this issue, but it is not comprehensive, and rather controversial. I think the paper will benefit from a more in-depth discussion and analysis on this regularization issue.\", \"The schedule for the loss term blending parameters looks drastic to me. It\\u2019s more like \\u201ctrain the floating point net first, and then train the quantized one, and then revisit the floating point one, and so on.\\u201d I know I simplified, because the floating point network never stops getting updated as it\\u2019s \\\\omega_f is always 1. However, it seems to me that this drastic scheduling strategy sounds like very similar to the traditional approach that trains the floating point network first and then finetune the quantized one, except for the fact that this proposed algorithm repeats this process a few times. Hence, I think the authors\\u2019 argument about the supremacy of the proposed method to the two-step finetuning approach is not clearly supported.\", \"The exponentially decaying learning rate scheduling looks like the one from ResNet. I\\u2019m wondering if it should be the best, especially with the drastic introduction and omission of the second loss.\", \"In the ablation studies, it seems that some of the suggested training options are conflicting each other and the clear winner seems to be the multi-domain BN. I cannot conclude anything from this analysis as to which one is more important than the other one, except for the Alt{W,\\\\theta} case.\"], \"some_minor_things\": [\"What\\u2019s the name of the proposed network? Is it GQ or DQ?\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This work introduces GQ-Net, a novel technique that trains quantization friendly networks that facilitate for 4 bit weights and activations. This is achieved by introducing a loss function that consists of a linear combination of two components: one that aims to minimize the error of the network on the training labels of the dataset and one that aims to minimize the discrepancy of the model output with respect to the output of the model when the weights and activations are quantized. The authors argue that this has the effect of \\u201cguiding\\u201d the optimization procedure in finding networks that can be quantized without loss of performance. For the discrepancy metric the authors use the KL divergence from the predictive distribution of the floating point model to the one of the quantized model. The authors then propose several extra techniques that boost the performance of their method: 1. scheduling the weighting coefficients of the two loss terms (something which reminisces iterative pruning methods), 2. stopping the gradient of the floating point model w.r.t. the second loss term, 3. learning the parameters of the uniform quantizer, 4. alternating optimization between the weights and the parameters of the quantizers and 5. using separate batch normalization statistics for the floating point and quantized models. The authors then evaluate their method on Imagenet classification using ResNet-18 and Mobilenet v1 / v2, while also performing an ablation study about the extra tricks that they propose.\", \"This work is well written and in general conveys the main idea in an effective manner. Quantization friendly neural networks in an important subject in order to make deep learning tractable for real world applications. The idea seems on a high level to be interesting and simple; train floating point models that can fit the data well while also encouraging them to be robust to quantization by enforcing the predictive distributions of the fixed and floating point models to be similar in the KL-divergence sense. Nevertheless, I do have some comments that would hopefully help in improving this work:\", \"It does seem that GQ-Nets need extra tricks in order to perform well, and those tricks come with their own set of hyperparameters that need to be tuned. For example, at section 4.3 you mention that the top-1 accuracy of vanilla GQ-Nets is 60.95, which is lower than the RelaxedQuant baseline (that has 61.52). This raises the question whether the boost in performance is due to the several additional steps employed (which in general can be applied to other quantization techniques as well), and not due to the main idea itself.\", \"Do you employ the straight-through estimator (STE) for the weights in the L_q objective? In the second paragraph of the second page you argue that due to the biased gradients of STE the performance is in general reduced, so I was wondering whether STE posed an issue there or whether you used an alternative estimator.\", \"How is batch normalization handled? Do you absorb the scale and shifts in the weights / biases before you perform quantization or do you quantize the weights and then apply the BN scale and shift in full precision?\", \"How do you ensure and ub > lb when you learn the quantizer? 
In general learning the quantizer can be also done with alternative techniques (e.g. simply learning the scale and offset) so I was wondering whether you noticed benefits from using the ub, lb parametrization compared to others.\", \"Do you show the pre-quantization distributions at Figure 2b? In the caption you mention quantized but the resolution seems to be higher than the 16 values you should get with 4 bits. Furthermore, it should be noted that the discrepancy in BN in quantized models was, as far as I am aware, firstly noticed at [1] (and subsequently at RelaxedQuant) and both of these methods simply re-estimated the moving averages during the inference time.\", \"Overall, I am on the fence about this work and tend to reject. Having said that, I am of course willing to revise my score after the discussions with the authors / other reviewers.\", \"[1] Probabilistic Binary Neural Networks, Jorn W.T. Peters, Max Welling\"]}"
]
} |
B1x3EgHtwB | ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks | [
"Shuxuan Guo",
"Jose M. Alvarez",
"Mathieu Salzmann"
] | In this paper, we introduce a novel approach to training a given compact network. To this end, we build upon over-parameterization, which typically improves both optimization and generalization in neural network training, while being unnecessary at inference time. We propose to expand each linear layer of the compact network into multiple linear layers, without adding any nonlinearity. As such, the resulting expanded network can benefit from over-parameterization during training but can be compressed back to the compact one algebraically at inference. As evidenced by our experiments, this consistently outperforms training the compact network from scratch and knowledge distillation using a teacher. In this context, we introduce several expansion strategies, together with an initialization scheme, and demonstrate the benefits of our ExpandNets on several tasks, including image classification, object detection, and semantic segmentation. | [
"Compact Network Training",
"Linear Expansion",
"Over-parameterization",
"Knowledge Transfer"
] | Reject | https://openreview.net/pdf?id=B1x3EgHtwB | https://openreview.net/forum?id=B1x3EgHtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_DaNnE7JU",
"rkgS998soH",
"Syl-HS8sjS",
"SyxeqmIjiB",
"H1lptzIjjB",
"r1lZzyIoir",
"Hkef7hN4qH",
"SyxThYwDYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798744669,
1573771916781,
1573770552854,
1573770120168,
1573769861360,
1573768969333,
1572256793770,
1571416501218
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2262/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2262/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2262/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2262/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2262/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2262/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2262/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper develops linear over-parameterization methods to improve training of small neural network models. This is compared to training from scratch and other knowledge distillation methods.\\n\\nReviewer 1 found the paper to be clear with good analysis, and raised concerns on generality and extensiveness of experimental work. Reviewer 2 raised concerns about the correctness of the approach and laid out several other possibilities. The authors conducted several other experiments and responded to all the feedback from the reviewers, although there was no final consensus on the scores.\\n\\nThe review process has made this a better paper and it is of interest to the community. The paper demonstrates all the features of a good paper, but due to a large number of strong papers, was not accepted at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3 (Part 2/2)\", \"comment\": \"3) Q: Initialize the compact networks with the linear product of the expanded networks.\\n\\t\\nWe tried this before and found that it was not the reason behind the improvements. Following the experimental setting of Table 1 and Table 2, we initialized the compact networks with different expansion strategies on CIFAR-10, CIFAR-100 and ImageNet, respectively. The results are given in the table below.\\n\\n-------------------------------------------------------------------------------------------------------------------------\\n Network | Initialization \\t | CIFAR-10 | CIFAR-100 \\n-------------------------------------------------------------------------------------------------------------------------\\n SmallNet | Normal | 78.63 \\u00b1 0.41 | 46.63 \\u00b1 0.27\\n SmallNet | ExpandNet-FC | 79.09 \\u00b1 0.56 | 46.52 \\u00b1 0.36\\n SmallNet | ExpandNet-CL | 78.65 \\u00b1 0.36 | 46.65 \\u00b1 0.47\\n SmallNet | ExpandNet-CL+FC | 78.81 \\u00b1 0.52 | 46.43 \\u00b1 0.72\\n SmallNet | ExpandNet-CK | 78.84 \\u00b1 0.30 | 46.56 \\u00b1 0.23\\n SmallNet | ExpandNet-CK+FC | 79.27 \\u00b1 0.29 | 46.62 \\u00b1 0.29\\n ExpandNet-CK+FC(ours) | Normal | 80.31 \\u00b1 0.27 | 48.62 \\u00b1 0.47\\n-------------------------------------------------------------------------------------------------------------------------\\n\\nFrom these results and results in Table 1, we can see that, on CIFAR-10, compact networks initialized by Expand-FC and Expand-CL yield slightly better results than training the ExpandNets by normal initialization. However, the same trend does not occur on CIFAR-100 and ImageNet, where, with ExpandNet-CL initialization, MobileNet (90 epochs) gets 66.44% (vs 66.48% for the baseline and 69.40% for the ExpandNet-CL), MobileNetV2 (90 epochs) gets 63.07% (vs 63.75% for the baseline and 65.62% for the ExpandNet-CL) and ShuffleNet gets 56.91% (vs 56.89% for the baseline and 57.38% for the ExpandNet-CL). Moreover, compact networks initialized by ExpandNet-CK always yield worse results than training ExpandNet-CKs from scratch. \\n\\nTherefore, we believe that the benefits of our expansion strategies cannot be obtained by initialization using ExpandNets.\\n\\n2) and 3) are great points that we have added them to our paper in Appendix E . Thanks again for these suggestions.\"}",
"{\"title\": \"Response to Review #3 (Part 1/2)\", \"comment\": \"Thank you for your insightful and valuable review. We address your comments below and have modified our paper accordingly. We would appreciate any further feedback.\", \"q\": \"Without non-linearity, what is the added value and how we have better results?\\n\\nOver-parameterization has been shown both theoretically and empirically to facilitate neural network training. We believe this to be exactly our contribution: discovering a simple but effective method that takes advantage of over-parameterization during training by expanding layers, while this over-parameterization is not necessary at inference so that we can compress the better trained expanded networks back to original ones without losing any performance. Note that it is not the same as naively adding new layers to the network with nonlinearity, which would give no direct way to do a lossless compression afterwards. This is a major contribution since no one has studied this aspect before.\\n\\nAs suggested, we conducted additional experiments and analysis to reject the proposed hypotheses and further evidence that the improvement is due to our expansion strategies.\\n\\n1) Q: The improvements are possibly from implicit nonlinear operations between expanded layers.\\n\\nWe conducted experiments to compare the representation ability of float (32 bit) and double (64 bit) precision and to simulate the nonlinearity arising from truncation between expanded layers. Based on the experimental setting of Table 1, but with convolutional kernel size = 5, we trained a small network with float and double as baseline, and then trained an ExpandNet-CK with r=4 in float. In addition, we truncated the small network in double precision to float during training, to simulate the nonlinearity arising when multiplying 2 floats into 1 float. To be precise, we used double to compute (W*x + b), and then converted the result to float to truncate the output feature maps. \\n\\n--------------------------------------------------------------------------------------------\\n Network | CIFAR-10 | CIFAR-100 \\n--------------------------------------------------------------------------------------------\\n SmallNet(float)\\t\\t\\t| 78.94 \\u00b1 0.40 | 47.33 \\u00b1 0.46\\n SmallNet(double)\\t\\t| 78.73 \\u00b1 0.48 | 46.28 \\u00b1 0.50\\n SmallNet(double) with\\n truncated non-linearity\\t| 78.98 \\u00b1 0.13 | 46.21 \\u00b1 0.94\\n ExpandNet-CK (float) \\t| 79.90 \\u00b1 0.38 | 48.26 \\u00b1 0.14\\n--------------------------------------------------------------------------------------------\\n\\nAs shown by the results reported in the table above, the networks with double precision do not outperform those with float precision. Furthermore, the nonlinearity obtained by truncation does not help during training. Our ExpandNet-CK consistently outperforms SmallNets with float or double or truncation. At test time, the outputs from an (uncompressed) ExpandNet and one that was compressed back to the SmallNet architecture are equal, up to precision error, which indicates that truncation does not affect the test performance.\\n\\nThus we can conclude that our improvement is not due to higher numerical precision or the implicit truncation nonlinearity arising from taking the products of several floats in the expanded linear layers.\\n\\n2) Q: Initialize the compact networks with the linear product of the well-trained non-linear counterpart of the expanded network.\\n\\nThis is an interesting suggestion. 
Based on the same experimental setting as in Table 1, we used the well-trained nonlinear counterparts to initialize the compact networks by computing the linear product of the expanded layers before training as suggested. The compact networks initialized using nonlinear ExpandNet-CL+FC achieve 77.02 \\u00b1 0.35% (vs 79.98 \\u00b1 0.28% using our initialization) on CIFAR-10 and 39.39 \\u00b1 1.08% (vs 47.98 \\u00b1 0.48% using our initialization) on CIFAR-100. Those initialized using nonlinear ExpandNet-CK+FC achieve 75.81 \\u00b1 0.34% (vs 80.81 \\u00b1 0.27% using our initialization) on CIFAR-10 and 34.88 \\u00b1 1.41% (vs 49.82 \\u00b1 0.25% using our initialization) on CIFAR-100. \\n\\nBoth experiments reveal that, while our initialization helps to improve our ExpandNet, using it to initialize the compact network is not effective.\"}",
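The expand-then-collapse argument in the response above is easy to verify for the fully-connected (Expand-FC) case. The sketch below is only illustrative: the layer sizes and expansion rate are arbitrary choices, not the paper's configuration. Without a nonlinearity between the two linear maps, their product collapses exactly into one compact layer.

```python
import torch
import torch.nn as nn

d_in, d_out, r = 64, 10, 4

# Expanded training-time layer: two linear maps, no nonlinearity between.
expand = nn.Sequential(
    nn.Linear(d_in, r * d_out, bias=False),
    nn.Linear(r * d_out, d_out, bias=True),
)

# Algebraic compression back to a single compact layer: W = W2 @ W1.
compact = nn.Linear(d_in, d_out, bias=True)
with torch.no_grad():
    compact.weight.copy_(expand[1].weight @ expand[0].weight)
    compact.bias.copy_(expand[1].bias)

x = torch.randn(8, d_in)
assert torch.allclose(expand(x), compact(x), atol=1e-5)
```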
"{\"title\": \"Response to Review #2 (Part 2/2)\", \"comment\": \"1. 3) Q: The proposed method is a simple application of over-parameterization. The good results might yield from faster convergence of linear over-parameterization as suggested by Arora et.al. 2018.\\n\\nArora et al. 2018 only worked with linear models or linear layers. By contrast, we focus on practical, nonlinear, compact convolutional networks, and we propose to expand convolutional layers, which has not been studied before. Exploring how to expand convolutional layers is one of our contributions. \\n\\nIn their paper, Arora et al. performed a sanity test on MNIST with a CNN, by only expanding the fully-connected layers. According to our experiments, with Expand-FC ONLY, getting better results than the compact network is difficult. In Appendix D, we perform a more thorough evaluation of the behavior observed by Arora et al. In short, the faster convergence they observed seems to be due to their use of a different regularizer, acting on the product of the parameter matrices of the expanded layers, rather than on the individual parameters. This, in turn, makes their model yield worse test error than the compact network, whereas our ExpandNets, which rely on standard regularization, achieve better results. See Appendix D of the revised paper for the detailed discussion.\\n\\n2) Missing results of 400 epochs and KD on ShuffleNetV2, Table 3 and Table 4.\", \"here_are_the_results_of_kd_on_shufflenet\": \"ShuffleNet (w/KD) achieves 57.59% and ExpandNet-CL (w/KD) achieves 57.68% [ShuffleNet yields 56.89% and ExpandNet-CL 57.38%]. We are waiting for the results of 400 epochs. We will include these results in our paper when we get all of them.\\n\\nWe tend to disagree that knowledge transfer methods should be our main baselines. Our approach is complementary to knowledge transfer, and it can also be used on its own in the absence of teacher networks. In any event, Table 1 and 2 already indicate that, in most cases, baseline < baseline+KD < ExpandNet < ExpandNet+KD in terms of accuracy. The ShuffleNet results above confirm that the performance of our ExpandNets can be further boosted with the help of a teacher network.\\n\\nNote that using KD or knowledge transfer with YOLO and U-Net is not straightforward and has received very little attention so far. Doing so goes beyond the scope of this work.\\n\\n3) Initialize all models and apply the methods widely to big models.\\n\\nIn our experiments, we found that, on some datasets, the ExpandNets\\u2019 nonlinear counterparts do not outperform the original models. Using these as initialization does not provide a good starting point. In other words, nonlinearity does not always help in deep networks and our initialization works much better when the baseline networks are quite small.\\n\\nWe did conduct some experiments on deeper and wider networks, but the improvements are not significant. As shown in Appendix A.4, Table 9, where we investigate the use of our Expand-CK on AlexNet with different number of channels, we found that the benefits decrease as the compact model size increases. This, we believe, further evidences that the benefits of our approach are due to over-parameterization.\"}",
"{\"title\": \"Response to Review #2 (Part 1/2)\", \"comment\": \"Thank you for your comments. We address your concerns below.\\n\\n1) Effectiveness of our approach.\\n\\nExpanding FC, CL or CK are different design choices of our approach, and we show them all in the paper for completeness. We demonstrate their effectiveness on 3 tasks (image classification, object detection, and semantic segmentation), using 5 different datasets (ImageNet, PASCAL VOC, Cityscapes, CIFAR-10, CIFAR-100), and with 7 different compact network architectures (SmallNet3x3, SmallNet7x7, MobileNet, MobileNetV2, ShuffleNet, YOLO-LITE, U-Net). It is clear that our ExpaneNets are effective in all experimental settings. Although one can always run more experiments, this already constitutes strong evidence that our method works.\\n\\nWe nevertheless discuss your individual comments in more detail below.\\n\\n 1.1) Q: Expand-FC and Expand-CL are not significant in Table 1, and Table 2 for 400 epochs.\\n\\nTo be exact, ExpandNet-FC is the only configuration that does not achieve better results. However, it is merely one of our configurations. As shown in Appendix A.3 with multiple expansion rates and networks, ExpandNet-CL consistently outperforms the baselines. In addition, expanding FC and CL together consistently improves the training of small networks, as shown in Table 1. We also demonstrate the effectiveness of ExpandNet-CL on many large-scale image recognition tasks (classification using ImageNet; detection using Pascal VOC, segmentation using Cityscapes). This constitutes strong evidence that our approach is effective. \\n\\nFurthermore, as shown in Figure 2 and 3, ExpandNet-CL produces flatter minima and more zero-centered gradient cosine similarity, which indicates that ExpandNet-CL has a better training behavior and can reach solutions that generalize better. This is an important property of our convolutional expansion. \\n\\n\\n 1.2) Q: Some intrinsic property of Expand-CK is more important than over-parameterization.\\n\\nWe conducted an additional ablation study for ExpandNet-CK to show that both ExpandNet-CK and over-parameterization are important.\\n\\nIn our method, the expansion rate (r) controls the number of parameters of ExpandNet-CK. Following the same experimental setting as for Table 1, where the baseline SmallNet achieves a top-1 accuracy of 78.63% on CIFAR-10 and 46.63% on CIFAR-100, we set the expansion rate in [0.25, 0.5, 0.75, 1.0, 2.0, 4.0], and report the corresponding results in the following table. For r < 1, the performance of ExpandNet-CK drops from 78.70% to 72.32% on CIFAR-10 and from 46.41% to 39.32% on CIFAR-100 as the number of parameters decreases. For r > 1, ExpandNet-CK yields consistently higher accuracy. Interestingly, with r = 1, ExpandNet-CK still yields better performance (79.22% vs 78.63% and 47.25% vs 46.63%) with fewer parameters (54.77K vs 66.19K and 60.62K vs 72.04K). 
\\n\\n--------------------------------------------------------------------------------------------\\n expansion rate | #params (K) | CIFAR-10 | CIFAR-100 \\n--------------------------------------------------------------------------------------------\\n 0.25 | 37.91 / 43.76 | 72.32 \\u00b1 0.62 | 39.23 \\u00b1 0.84\\n 0.50 | 42.81 / 48.66 | 76.77 \\u00b1 0.36 | 43.68 \\u00b1 0.51\\n 0.75 | 48.43 / 54.28 | 78.70 \\u00b1 0.42 | 46.41 \\u00b1 0.52\\n 1.00 | 54.77 / 60.62 | 79.22 \\u00b1 0.52 | 47.25 \\u00b1 0.40\\n SmallNet | 66.19 / 72.04 | 78.63 \\u00b1 0.41 | 46.63 \\u00b1 0.27\\n 2.00 | 87.32 / 93.17 | 79.97 \\u00b1 0.18 | 48.13 \\u00b1 0.42\\n 4.00 | 186.98 / 192.8 | 80.27 \\u00b1 0.24 | 48.55 \\u00b1 0.51\\n--------------------------------------------------------------------------------------------\\n(#params(K) denotes the number of parameters: CIFAR-10 / CIFAR-100)\\n\\nAltogether, these results indicate that, while ExpandNet-CK in itself helps to improve the results, combining it with over-parameterization further boosts the performance.\\n\\nWe also conducted an ablation study using different expansion rates and networks in Appendix A.3, Table 8, to investigate the importance of over-parameterization.\\n\\nNote that all the parameter numbers refer to the training parameters; at test time, our ExpandNets are compressed back, without any loss, so as to have the same number of parameters as the baseline compact network.\"}",
"{\"title\": \"Summary of Changes in New Version\", \"comment\": \"We thank the reviewers for their valuable comments. We have revised our paper in the following way:\\n\\n1. As suggested by reviewer 2, we have added knowledge distillation for ShuffleNet in Table 2.\\n\\n2. As suggested by reviewer 2, we have added Appendix D to discuss the work of Arora et al. (2018) in detail and further highlight the differences with our work. \\n\\n3. As suggested by reviewer 3, we have added the analysis of our expansion strategies to Appendix E.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes linear over-parameterization methods to improve training of small neural network models. The idea is simple -- each linear transformation in a network is overparameterized by a series of linear transformation which is algebraically equivalent to the original linear transformation. Number of experiments are conducted to show the effectiveness of the approach.\\n\\nThe proposed method is a simple application of over-parameterization to improve neural network model training. The motivation is clear and the proposed method is clearly presented. The paper is easy to understand and follow. Great analyses on the training behavior and generalization ability are conducted. Given the simplicity of the method, this could be a standard way of training small neural network model if the effectiveness of the method is observed more widely.\", \"these_are_some_concerns_on_the_paper\": \"1) The effectiveness of the approach is not necessarily significant in all experiments. For example, in Table 1, ExpandNet-FC and ExpandNet-CL were not effective. The same trend is observed in Table 2 for 400 epochs. Given that only ExpandNet-CK improves the performance, we could conclude that some intrinsic property of CK is important than over-parameterization. The good results for 90 epochs in Table 2 may mean linear over-parameterization yields faster convergence as suggested by Arora et al. 2018.\\n\\n2) The comparisons are not extensive. For example, we do not see Init for all models and \\\"w/ KD\\\" for ShuffleNetV2 in Table 2. Table 2 has \\\"N/A\\\". Table 3 and 4 do not have results with \\\"w/ KD\\\". Knowledge transfer methods should be the baseline of the paper.\", \"minor_comments\": \"It will be interesting to see the results of the models used for Init.\\n\\nIt might be interesting to conduct experiments with a big model and see if we do not have any gains.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper is extremely interesting and quite surprising. In fact, the major claim is that using a cascade of linear layers instead of a single layer can lead to better performance in deep neural networks. As the title reports, expanding layers seems to be the key to obtain extremely interesting results. Moreover, the proposed approach is extremely simple and it is well explained in Section 2 with equations (1) and (2). This paper can have a tremendous impact in the research in deep networks if results are well explained.\\n\\nHowever, in its present form, it is hard to understand why the claim is correct. In fact, the model presented in the paper has a major obscure point. Equation (1) and (2) are extremely clear. Without non-linear functions, equations (1) and (2) describe a classical matrix factorization like Principal Component Analysis. Now, if internal matrices have more dimensions of the rank of the original matrix, the product of the internal matrices is exactly the original matrix. Whereas, if internal matrices have a number of dimensions lower than the rank of the original matrix, these matrices act as filters on features or feature combination. Since the authors are using inner matrices with a number of dimensions higher than the number of dimensions of the original matrix, there is no approximation and, then, no selection of features or feature combinations. Hence, without non-linear functions, where is the added value of the method? How the proposed method can have better results. \\nThere are some possibilities, which have not been explored:\\n1) the performance improvement derives from the approximation induced by the representation of float or double in the matrices. The approximation act as the non-linear layers among linear layers.\\n2) the real improvement seems to be given by the initialization which has been obtained by using the non-linear counterpart of the expansion; to investigate whether this is the case, the model should be compared with a compact model where the initialization is obtained by using the linear product of the non-linear counterpart of the expanded network. If this does not lead to the same improvement, there should be a value in the expansion.\\n3) the small improvement of the expanded network can be given by the different initialization. In fact, each composing matrix is initialized randomly. The product of a series of randomly initialized matrices can lead to a matrix that is initialized with a different distribution where, eventually, components are not i.i.d.. To show that this is not relevant, the authors should organize an experiment where the original matrix (in the small network) is initialized with the dot product of the composing matrices. The training should be done by using the small network. If results are significantly different, then the authors can reject the hypothesis.\\nIf the authors can reject (1), (2) and (3), they should find a plausible explaination why performance improves in their experiments.\"}"
]
} |
HkejNgBtPB | Variational Template Machine for Data-to-Text Generation | [
"Rong Ye",
"Wenxian Shi",
"Hao Zhou",
"Zhongyu Wei",
"Lei Li"
] | How to generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from a lack of diversity. We claim that an open set of templates is crucial for enriching the phrase constructions and realizing varied generations. Learning such templates is prohibitive since it often requires a large set of paired <table, description> data, which is seldom available. This paper explores the problem of automatically learning reusable "templates" from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information in the latent spaces, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. Experiments on datasets from a variety of different domains show that VTM is able to generate more diverse descriptions while maintaining good fluency and quality. | [
"templates",
"variational template machine",
"data",
"vtm",
"generation",
"descriptions",
"tables",
"approaches",
"neural"
] | Accept (Poster) | https://openreview.net/pdf?id=HkejNgBtPB | https://openreview.net/forum?id=HkejNgBtPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4RIFJ4R3C7",
"rkxYRkH3sH",
"HJepPJHniB",
"HJg5xRE3jS",
"ryxcKa4hsr",
"SkeetYT-5H",
"HJlIc6pdYH",
"B1x-h8r9dH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744641,
1573830609289,
1573830500859,
1573830130252,
1573830018113,
1572096376113,
1571507597812,
1570555560941
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2261/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2261/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2261/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2261/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2261/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2261/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2261/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper addresses the problem of generating descriptions from structured data. In particular a Variational Template Machine which explicitly disentangles templates from semantic content. They empirically demonstrate that their model performs better than existing methods on different methods.\\n\\nThis paper has received a strong acceptance from two reviewers. In particular, the reviewers have appreciated the novelty and empirical evaluation of the proposed approach. R3 has raised quite a few concerns but I feel they were adequately addressed by the reviewers. Hence, I recommend that the paper be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks very much for your valuable comments.\", \"q\": \"It would be good if the authors could provide an analysis of the computational costs of their methods, as well as of the considered competitors.\", \"a\": \"We compare the training and testing time cost on the WIKI dataset, and with raw data added, VTM spends more time on training but the same time on generation as Table2seq. Here is the detailed time spent ( train and test on single Tesla V100 GPU), for test computational cost, we record how much time to generate 72k sentences.\\n\\n | Table2seq | VTM-noraw | VTM\\n--------------------------------------------------------------------------------------------------------------------\\nTrain | \\uff5e30 mins (6 epochs) | \\uff5e30 mins (6 epochs) | \\uff5e160 mins (15 epochs)\\n--------------------------------------------------------------------------------------------------------------------\\nTest | ~80min | ~80min | ~80min \\n\\nVTM gives the same speed for generating sentences, but it takes more time for training, which are cost to learn the large-scaled unlabeled data, and is acceptable.\\n\\nAdditionally, we've added some new experiments with more sophisticated setups. In the experiments (result see Section 4.3, Figure 3), we control the same decoding strategy under the same temperature, and plot their BLEU and Self-BLEU scores in Figure 3 to analyze the quality-diversity trade-off. Experimental results show that compared to Table2seq, VTM always gives better self-BLEU when they have the same BLEU, and gives better BLEU under the same Self-BLEU. This shows that VTM outperforms Table2text consistently.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thanks a lot for your insightful comments. In the following parts, we will response to your questions one by one.\", \"q\": \"There are also many grammatical errors in the paper (e.g., ... only enable to sample in the latent space ..., and many others), so I think the writing of the paper can be improved.\", \"a\": \"Thanks, we will proof-read carefully and fix typos in the next version.\", \"here_we_list_the_bleu_and_self_bleu_scores\": \"Dataset | Model | BLEU | Self-BLEU\\n-----------------------------------------------------------------------------\\nWIKI | Table2seq-beam | 26.74 | 92.00\\n | Table2seq-pretrain| 25.43 | 99.88 \\n | VTM | 25.22 | 74.86\\n-----------------------------------------------------------------------------\\nSPNLG | Table2seq-beam | 40.61 | 97.14 \\n | Table2seq-pretrain | 40.56 | 100.00 \\n | VTM | 40.04 | 88.77\\n\\n========\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks very much for your valuable comments.\", \"q\": \"How does the method generalize to other languages? How does it scale with (the lack of) resources?\", \"a\": \"Our method could be easily generalized to other languages because no language-specific processing or resources are used. Additionally, our proposed VTM may well fit languages with fewer resources, in which case the VTM model with massive raw data (usually cheap to obtain) may significantly boost the finally performances when labeled data are hard to get.\\n\\nAdditionally, we've added some new experiments with more sophisticated setups. In these experiments (result see Section 4.3, Figure 3), we control the decoding strategy with the same temperature, and plot their BLEU scores and Self-BLEU scores in Figure 3 to analyze the quality-diversity trade-off. Experimental results show that compared to Table2seq, VTM always gives better self-BLEU when they have the same BLEU, and gives better BLEU under the same Self-BLEU. This shows that VTM outperforms Table2text consistently.\"}",
"{\"title\": \"We update a new version!\", \"comment\": [\"Hi, all. Thanks for reviewing my paper. We've uploaded a new version of our draft, adding more experiments:\", \"Experiments on the computational cost of the models. (Table 4, Page 7)\", \"Experiments on quality-diversity trade-off. (Figure 3, Page 6)\", \"Human evaluation on generation accuracy, coherence and diversity. (Table 7, Page 9)\", \"Please take a look\\uff01\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an approach to generate textual descriptions from structured data organized in tables, by using a \\\"variational template machine\\\" (VTM), which is essentially a generative model to separately represent template and content as disentangled latent variables to control the generation.\\n\\nThe contribution is well-written and well-motivated, the model exposition is clear, and the results are convincing. The experiment setup, depth, and breadth are particularly convincing. I see no reason to not accept this paper.\", \"remarks\": [\"It should be clearly stated which languages feature in the paper. From what I gather, it's only English. How does the method generalize to other languages? How does it scale with (the lack of) resources?\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes Variational Template Machine (VTM), a generative model to generate textual descriptions from structured data (i.e., tables). VTM is derived from the variational autoencoder, where the input is a row entry from a table and the output is the text associated with this entry. The authors introduce two latent variables to model contents and templates. The content variable is conditioned on the table entry, and generates the textual output together with the template variable. The model is trained on both paired table-to-text examples as well as unpaired (text only) examples. Experiments on the Wiki and SpNLG datasets show that models generate diverse sentences, and the overall performance in terms of BLEU is only slightly below the best baseline Table2Seq model that does not generate diverse sentences. The results also show that additional losses for preserving contents and templates introduced by the authors play an important role in the overall model performance.\", \"i_have_several_questions_regarding_the_experiments\": [\"For the Table2Seq baseline, how was the beam size chosen? Did it have any effect on the performance of the baseline model?\", \"Did the authors try other sampling methods for Table2Seq? (e.g., top-K or nucleus sampling)\", \"VTM is only able to achieve comparable performance to Table2Seq in terms of BLEU after including the unlabeled corpus, especially on the Wiki dataset. A way to incorporate this unlabeled data to Table2Seq is by first pretraining\\u00a0the LSTM generator on it before training it on pairwise data (or in parallel). How would this baseline model perform in comparison to VTM?\", \"In the conclusion section, the authors mentioned that VTM outperforms VAE both in terms of diversity and generation quality. What does this VAE model refer to? The experiments show that VTM is comparable to Table2Seq in terms of quality and is better in terms of diversity.\", \"Generating text from structured data is an interesting research area. However, I am not convinced that the proposed method is a significant development based on the results presented in the paper. There are also many grammatical errors in the paper (e.g., ... only enable to sample in the latent space ..., and many others), so I think the writing of the paper can be improved.\"]}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is interesting and proposes a novel approach for addressing a currently not largely considered problem.\\nThe proposed model is sound and appropriate, as it relies on state-of-the-art methodological arguments. \\nThe derivations are correct; this concerns both the model definition and the algorithmic derivations of model training and inference.\", \"the_experimental_evaluation_is_adequate\": \"it compares to many popular approaches and on several datasets; the outcomes are convincing.\\nIt would be good if the authors could provide an analysis of the computational costs of their methods, as well as of the considered competitors.\"}"
]
} |
HJloElBYvB | Phase Transitions for the Information Bottleneck in Representation Learning | [
"Tailin Wu",
"Ian Fischer"
] | In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation? In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: IB_β[p(z|x)] = I(X; Z) − βI(Y; Z) defined on the encoding distribution p(z|x) for input X, target Y and representation Z, where sudden jumps of dI(Y; Z)/dβ and prediction accuracy are observed with increasing β. We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10.
| [
"Information Theory",
"Representation Learning",
"Phase Transition"
] | Accept (Poster) | https://openreview.net/pdf?id=HJloElBYvB | https://openreview.net/forum?id=HJloElBYvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KbSY705Mm",
"BylPScBuoB",
"SkxA-tDviH",
"BkeEi_wPjS",
"r1lf7dvPsS",
"SklpmME05H",
"r1gosu5aFB",
"HkgODnIrtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744611,
1573571134627,
1573513478377,
1573513372494,
1573513242161,
1572909604990,
1571821730885,
1571282015916
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2260/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2260/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2260/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2260/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2260/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2260/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2260/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This submission presents a theoretical study of phase transitions in IB: adjusting the IB parameter leads to step-wise behaviour of the prediction. Quoting R3: \\u201cThe core result is given by theorem 1: the phase transition betas necessarily satisfy an equation, where the LHS is expressed in terms of an optimal perturbation of the encoding function X->Z.\\u201d\\nThis paper received a borderline review and two votes for weak accept. The main comment for the borderline review was about the rigor of a proof and the use of << symbols. The authors have updated the proof using limits as requested, addressing this primary concern. On the balance, the paper makes a strong contribution to understanding an important learning setting and a contribution to theoretical understanding of the behavior of information bottleneck predictors.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of the revision\", \"comment\": \"We would like to thank the reviewers for the constructive reviews! We have revised the paper according to the reviewers' comments, and provided detailed responses to each reviewer. A summary of the modification in the revised paper is as follows:\\n\\n(1) We have rewritten the theorems and proofs using limits instead of \\\"$\\\\ll$\\\"\\n(2) We have improved the motivation and algorithm parts, making clear that the main contribution of the paper is its theoretical contribution (Theorem 1 and its analysis), with Algorithm 1 a natural consequence.\\n(3) At the beginning of Section 3, we have stated the overall assumption and settings used throughout the paper.\\n(4) Before introducing the definition for IB phase transitions, we first define relative perturbation function (Definition 1) and second variation (Definition 2), for a better preparation for the definition of IB phase transitions (now Definition 3).\\n(5) For all infima and suprema, we have now specified the domain that the argument is with respect to.\\n(6) For the abstract, we have removed the citations and also explained $X, Y, Z$ and $p(z|x)$, for completeness.\\n(7) Other changes improving the presentation and rigor of theorems and proofs.\\n\\nIn summary, our work provides the first theoretical formula addressing the Information Bottleneck (IB) phase transitions in the most general setting. Through analysis of the formula, we reveal deep connections between the phase transitions, the structure of the dataset, and the learned representation. Numerical experiments show close matches with the theory (and the resulting algorithm). We believe our work provides novel theoretical insights for the compression vs. prediction tradeoff in IB for representation learning, and our technique may also be inspirational and applicable for the understanding of other \\u201ctrade-off\\u201d objectives, where the model\\u2019s ability to predict is balanced against some measure of complexity.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your detailed review! We appreciate that you recognize the significance of our work in giving the first formulation on the phase transition of IB, which was an empirical observation. In the revised submission, we have revamped the theorems and proofs according to your suggestions, improving the paper\\u2019s rigor, as follows.\", \"definition_1_ib_phase_transitions\": \"We have added a \\u201cSection 3.1 Definitions\\u201d at the beginning of Section 3, in which we first state the overall settings and assumption of the paper, then introduce the relative perturbation function (Definition 1) and second variation (Definition 2). Particularly, in introducing the relative perturbation function $r(z|x)$ in Definition 1, we have also introduced $\\\\mathcal{Q}_{\\\\mathcal{Z}|\\\\mathcal{X}}$, which is the set of all valid relative perturbation functions for the probability $p(z|x)$. The definition for IB phase transitions is renamed as Definition 3, in which we have applied the notations in Definition 1, making it more concise. For all the infimum and supremum in the paper, we have made sure that they have well-defined domains.\\n\\nLemma 0.1: \\nWe have now separately stated the definition of $G[p(z|x)]$ and $\\\\mathcal{G}[r(z|x);p(z|x)]$. As regards to your question, although $\\\\forall r(z|x), \\\\mathcal{G}[r(z|x);p(z|x)]\\\\ge\\\\beta$ is more elegant than $\\\\inf\\\\limits_{r(z|x)} \\\\mathcal{G}[r(z|x);p(z|x)] \\\\ge \\\\beta$, the point of Lemma 0.1 is to introduce $G[p(z|x)]:=\\\\inf\\\\limits_{r(z|x)} \\\\mathcal{G}[r(z|x); p(z|x)]$, in preparation for Theorem 1 that gives the condition $G[p^*_\\\\beta(z|x)]=\\\\beta$ for IB phase transitions. Therefore, we use $G[p(z|x)] \\\\ge \\\\beta$ as the condition in Lemma 0.1.\\n\\nLemma 0.2: \\nWe have added the domain for $\\\\Delta\\\\theta$: $\\\\Delta\\\\theta \\\\in \\\\Theta$, where $\\\\Theta$ is the parameter field. There are no requirements for the scale of $\\\\Delta\\\\theta$. We can understand this through an analogy with Taylor expansion in 1D: $f(x_0+h) = f(x_0) + f'(x_0)h + \\\\frac{1}{2}f''(x_0)h^2 + \\u2026$. Our $\\\\delta^2 IB[p_{\\\\theta}(z|x)]$ is a quadratic function of $\\\\Delta\\\\theta$, and corresponds to the $\\\\frac{1}{2}f''(x_0)h^2$ term. The requirement of $\\\\forall \\\\Delta\\\\theta\\\\in \\\\Theta$, $\\\\delta^2 IB[p_\\\\theta(z|x)]\\\\ge 0$ corresponds to $\\\\forall h, \\\\frac{1}{2}f''(x_0)h^2\\\\ge 0$, which essentially states that $f''(x_0)\\\\ge 0$ in the *infinitesimal* neighborhood of $x_0$. Note that since $\\\\frac{1}{2}f''(x_0)h^2$ is a quadratic function of $h$, the scale of $h$ does not change the sign of $\\\\frac{1}{2}f''(x_0)h^2$. Thus, $\\\\frac{1}{2}f''(x_0)h^2>0$ is invariant to the scale of $h$. Similarly, $\\\\delta^2 IB[p_\\\\theta(z|x)]\\\\ge 0$ is invariant to the scale of $\\\\Delta\\\\theta$. Therefore, the condition of \\\"$\\\\forall \\\\Delta\\\\theta\\\\in \\\\Theta$, $\\\\delta^2 IB[p_\\\\theta(z|x)]\\\\ge 0$\\\" is expressing the \\\"curvature\\\" in the *infinitesimal* neighborhood of $\\\\theta$, independent of the scale of $\\\\Delta\\\\theta$. We have also clarified this in the proof of the lemma, under Eq. (21). This invariance to the scale of $\\\\Delta\\\\theta$ is carried to the result of the lemma, in that the ratio inside $\\\\inf\\\\limits_{\\\\Delta\\\\theta\\\\in\\\\Theta}$ for $G_{\\\\Theta}[p(z|x)]$ is invariant to the scale of $\\\\Delta\\\\theta$. 
\\n\\nThe point of the Fisher Information work is to give another way of understanding the phase transitions. The Fisher Information is used extensively to understand machine learning models, so readers who like to think in terms of the Fisher Information should find this result helpful. The reviewer is correct that it is not a required lemma to generate our core results, and we have clarified this in the text.\", \"theorem_1\": \"We have now stated the overall assumption of the paper at the beginning of Section 3, and also in Theorem 1, we have now pointed to Definition 3 for the definition of IB phase transitions, to make it more clear. For the remarks after Theorem 1, in fact, the whole Section 4 is dedicated to the understanding of Theorem 1, which we have now pointed out. Furthermore, at the end of Section 4.1 (Jensen\\u2019s inequality), we have added a discussion and conjecture about the number of phase transitions in Theorem 1.\", \"theorem_2\": \"We have improved the statement of the condition in the \\u201cif\\u2026\\u201d, which is a statement about the property of $\\\\mathcal{Q}_{\\\\mathcal{Z}|\\\\mathcal{X}}$, the set of relative perturbation functions, and an expanded set $\\\\mathcal{Q}^{(0)}_{\\\\mathcal{Z}|\\\\mathcal{X}}$, which is $\\\\mathcal{Q}_{\\\\mathcal{Z}|\\\\mathcal{X}}$ without the requirement of $\\\\mathbb{E}_{p(z|x)}[r(z|x)]=0$. We have now made it explicit.\", \"paper_length\": \"With all of the clarifications that we have made, the paper has become slightly longer, although it is still under the strict upper limit of 10 pages. We are open to suggestions on what more we could move to the appendices if the current length does not seem to merit a strong accept.\", \"abstract\": \"For the abstract, we have removed the citations and also explained $X, Y, Z$ and $p(z|x)$, for completeness.\\n\\nThank you again for your detailed suggestions!\"}",
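The scale-invariance argument in the Lemma 0.2 discussion can be checked numerically in its finite-dimensional analogue: the sign of a quadratic form does not depend on the scale of the perturbation. The toy values below are arbitrary; this is an illustration, not the paper's computation.

```python
# The sign of h^T H h (the quadratic-form analogue of the second variation)
# is invariant to rescaling the perturbation h.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 5))
H = (H + H.T) / 2            # symmetrized, Hessian-like matrix
h = rng.standard_normal(5)   # perturbation direction (stand-in for Delta-theta)

for scale in (1e-3, 1.0, 1e3):
    q = (scale * h) @ H @ (scale * h)
    print(scale, np.sign(q))  # same sign at every scale
```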
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your review. Your perspective has helped us clarify our core theoretical results in particular. We respond to your two major concerns in turn below.\\n\\n1. Motivation and importance of Algorithm 1\\nWe do not consider Algorithm 1 to be the primary contribution of the work. As Reviewer 3 excellently summarizes it, our work contributes theoretically to the information bottleneck (IB) principle, and gives the first formulation on the phase transition of IB, which was an empirical observation. Specifically, our core result is Theorem 1 that gives a theoretical formula for IB phase transitions, which answers the question raised in the introduction \\u201chow do they (the phase transitions) depend on the structure of the dataset?\\u201d, through our in-depth theoretical analysis of Theorem 1 in Section 4. This question cannot be answered by simply empirically scanning $\\\\beta$. We have improved the \\\"contributions\\\" part in the introduction to make our core contribution more clear.\\n\\nAlgorithm 1, on the other hand, is a natural consequence of our core result Theorem 1, and it allows us to empirically confirm that the theory holds even for non-trivial datasets like MNIST and CIFAR10. We have clarified this in Section 5 where we describe Algorithm 1. However, it is worth comparing the efficiency of Algorithm 1 to other baselines for detecting phase transitions. The naive algorithm of sweeping $\\\\beta$ to identify the phase transitions requires training $K = (\\\\beta_{\\\\text{max}}-\\\\beta_{\\\\text{min}}) / \\\\Delta\\\\beta$ full models to detect phase transitions separated by at least $\\\\Delta\\\\beta$. E.g., for CIFAR10, we trained 251 different VIB models from scratch to sweep out the Pareto-optimal frontier and find likely phase transitions at $\\\\Delta\\\\beta=0.02$ for $\\\\beta$ in $[1.0, 6.0]$. Each of those models took about 36 hours to train on a single GPU. In contrast, with Algorithm 1, we are able to estimate the phase transitions by training exactly 1 full maximum likelihood model (36 hours on a single GPU), and then a handful of small IB models that can all be trained on CPU in a few hours, and the algorithm doesn\\u2019t rely on selecting a $\\\\Delta\\\\beta$ step size that would limit the resolution of our search. The difference in computation cost between Algorithm 1 and the naive algorithm of sweeping $\\\\beta$ is massive for even moderately-sized problems like MNIST and CIFAR10. Of course, it is not difficult to do better than the naive algorithm if the goal is to efficiently find phase transitions. For example, iterative refinement algorithms should be able to empirically estimate the phase transitions by training logarithmic numbers of models, rather than the linear number of models in the naive algorithm. If we were trying to write a paper on the best algorithm to empirically find the phase transitions, we would have compared against such approaches. We emphasize again, though, that Algorithm 1 is just one practical consequence of the core theoretical results, rather than the focus of the paper.\\n\\n2. Use of limits\\nWe have revamped the mathematical statements throughout the paper according to your suggestions, which we agree makes a substantial improvement to the core theoretical results. In particular, we have rewritten the theorems and proofs using limits instead of \\\"$\\\\ll$\\\". We have also made other substantial improvements according to Reviewer 3's comments. 
Please see our responses to Reviewer 3 for more detail.\"}",
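As a quick check of the sweep-cost arithmetic in this response: at resolution 0.02 over [1.0, 6.0], with both endpoints included, the naive sweep needs 251 models, matching the number of VIB models mentioned. A one-line verification:

```python
beta_min, beta_max, delta_beta = 1.0, 6.0, 0.02
n_models = round((beta_max - beta_min) / delta_beta) + 1  # endpoints inclusive
print(n_models)  # 251 full models for the naive sweep
```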
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your review. Your comments made us realize that we had put our conjecture about the number of phase transitions in the experimental section. We have moved it into Section 4.1 on the analysis of Theorem 1 to help make the theoretical basis for the conjecture more clear. We have also made a number of other improvements to the presentation of the core theoretical results, which we describe in our responses to the other two reviewers. We hope that you will find those improvements beneficial as well.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies the phase transition problem in the information bottleneck (IB) objective and derives a formula for IB phase transitions. Based on the theory developed in the paper, an algorithm is developed to find phase transition points. The interesting observation is phase transition can correspond to learning new class and this paper conjectures in IB for classification, the number of phase transitions is at most C-1, C the number of classes. This observation deserves to be further explored and may be a key to a deeper understanding of neural networks.\\n\\nThe theory developed connects the IB objective, dataset and the representation, and thorough proofs are given. The experiments matches the theoretical findings.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors studied the phase transition in the information bottleneck, which is defined as the point where the IB loss landscape changes. The authors give a theorem showing the practical condition for IB phase transitions which is related to Fisher information. Then an algorithm finding IB phase transitions are proposed and applied to MNIST and CIFAR10 dataset.\\n\\nThe paper is well-organized. The problem formulation, definition, theorem, algorithm and application form a whole story about finding IB phase transition. However, I found that the motivation of this paper is not very clear. Why do we want to find the IB phase transition accurately by running such a complicated algorithm (Algorithm 1). Usually, the scalar \\\\beta is considered as a hyperparameter. By tuning \\\\beta, one can easily produce Figure 1 and see some phase transition points. One can also tune \\\\beta to learn different classed as shown in Figure 3(b). So designing a complicated coordinate-descent based algorithm to find a parameter seems overkilling the problem.\\n\\nMoreover, the mathematical part of this paper can be made more accurate. For example, in definition 1, the authors write \\\"\\\\eta << \\\\beta\\\" and \\\"|\\\\epsilon| << 1\\\", where the quantities \\\\eta and \\\\epsilon are used many times in the proof. However, \\\"<<\\\" is not a rigorous mathematical symbol. The authors should consider rewrite their proof by using limit.\\n\\nTherefore, I think the motivation and the mathematical quality of this paper can be further improved before getting accepted.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper contributes theoretically to the information bottleneck (IB) principle. In particular, the author(s) provided theoretical reasoning on the phase transition phenomenon: when the beta parameter of IB varies, the generalization performance changes in a stepwise manner rather than continuously. The core result is given by theorem 1: the phase transition betas necessarily satisfy an equation, where the LHS is expressed in terms of an optimal perturbation of the encoding function X->Z.\\n\\nOverall, the reviewer believes that this work is a solid contribution to the IB principle, and should be accepted. Remarkably, this work gives the first formulation on the phase transition of IB, which was an empirical observation. Furthermore, the author(s) give an algorithm to find the transition betas, which agrees with empirical studies on CIFAR and MNIST datasets.\\n\\nThe reasons for the weak acceptance instead of a strong acceptance are explained by the following weaknesses.\\n\\nAs a theoretical contribution, it is important to have good formulations and clear statements. The quality of the mathematical statements is not satisfactory and can be largely improved.\", \"definition_1\": \"introduce the concept of relative perturbation first. As a definition, it must be self-contained. Therefore introduce the second-order variation first (don't point to the appendix).\\n\\nLemma 0.1: inf_r(z|x) \\\\mathcal{G} \\\\ge \\\\beta, is equivalent to \\\\forall r(z|x), \\\\mathcal{G}\\\\ge\\\\beta, the latter is more elegant. What is \\\\mathcal{G}, anyway?\\n\\nLemma 0.2: State the condition on the scale of the $\\\\Delta\\\\theta$. The reviewer suspects that if $\\\\Delta\\\\theta$ is large enough, the statement can be violated. This lemma feels like a digress. Why the Fisher information is needed for explaining the phase transition?\", \"theorem_1\": \"the statement is fine. It is better to have all conditions/assumptions listed as this is the core theorem. To be more complete, the authors can discuss the number of roots after the theorem and what happens if there is no root. In general, there have to be a few remarks after theorem 1.\", \"definition_2\": \"\\\\rho_r(X,Y;Z) is defined with respect to what? For example, state \\\"Given a joined distribution p(X,Y), the representation maximum correlation....\\\". Use a different equivalent symbol for definition, such as \\\":=\\\".\", \"theorem_2\": \"Again, \\\\forall{f(x,z)} is with respect to what? In the \\\"if\\\" part of the \\\"if-then\\\" statement, \\\"\\\\forall{f(x,z}}, it can be decomposed as ..\\\" is a false statement, change to \\\"If f{x,z} can be decomposed\\\"\\n\\nIn the abstract, avoid citations, and explain X, Y, Z, p(z|x), for completeness.\\n\\nAs the paper is over the recommended length, the reviewer is asked to be harsher in the assessment.\"}"
]
} |
BkgqExrYvS | PopSGD: Decentralized Stochastic Gradient Descent in the Population Model | [
"Giorgi Nadiradze",
"Amirmojtaba Sabour",
"Aditya Sharma",
"Ilia Markov",
"Vitaly Aksenov",
"Dan Alistarh."
] | The population model is a standard way to represent large-scale decentralized distributed systems, in which agents with limited computational power interact in randomly chosen pairs, in order to collectively solve global computational tasks. In contrast with synchronous gossip models, nodes are anonymous, lack a common notion of time, and have no control over their scheduling. In this paper, we examine whether large-scale distributed optimization can be performed in this extremely restrictive setting. We introduce and analyze a natural decentralized variant of stochastic gradient descent (SGD), called PopSGD, in which every node maintains a local parameter, and is able to compute stochastic gradients with respect to this parameter. Every pair-wise node interaction performs a stochastic gradient step at each agent, followed by averaging of the two models. We prove that, under standard assumptions, SGD can converge even in this extremely loose, decentralized setting, for both convex and non-convex objectives. Moreover, surprisingly, in the former case, the algorithm can achieve linear speedup in the number of nodes n. Our analysis leverages a new technical connection between decentralized SGD and randomized load balancing, which enables us to tightly bound the concentration of node parameters. We validate our analysis through experiments, showing that PopSGD can achieve convergence and speedup for large-scale distributed learning tasks in a supercomputing environment. | [
"Distributed machine learning",
"distributed optimization",
"decentralized parallel SGD",
"population protocols"
] | Reject | https://openreview.net/pdf?id=BkgqExrYvS | https://openreview.net/forum?id=BkgqExrYvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mw-VeufuXB",
"rklkIa9njr",
"Syxg2t52sH",
"r1xiS8nisB",
"SyesdPGIir",
"BJxfsLM8or",
"ByloJUGLiH",
"SJla4SGIsB",
"BkeHxzS0Kr",
"SklpaUAaKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798744577,
1573854534576,
1573853607763,
1573795395207,
1573427059211,
1573426842406,
1573426658706,
1573426484880,
1571865068550,
1571837637222
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2259/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2259/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2259/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2259/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2259/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2259/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2259/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2259/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2259/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This manuscript studies scaling distributed stochastic gradient descent to a large number of nodes. Specifically, it proposes to use algorithms based on population analysis (relevant for large numbers of distributed nodes) to implement distributed training of deep neural networks.\\n\\nIn reviews and discussions, the reviewers and AC note missing or inadequate comparisons to previous work on asynchronous SGD, and possible lack of novelty compared to previous work. The reviewers also mentioned the incomplete empirical comparison to closely related work. On the writing, reviewers mentioned that the conciseness of the manuscript could be improved.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision Summary\", \"comment\": [\"We again thank the reviewers for their feedback. We summarize our revision, and our claimed contributions.\", \"We have significantly re-written the introduction and related work, specifically for clarity with respect to the work of (Lian et al.) and (Assran et al.) We have added Table 1 (Appendix), which summarizes the assumptions and guarantees for both our work and previous algorithms.\", \"We maintain our claim that we are the first to prove linear speedup for convex objectives in the population protocol model.\", \"In addition, we proved linear speedup for PopSGD in the non-convex case under the PL assumption (a first for any decentralized algorithm), and $\\\\sqrt n$ speedup in the classic non-convex case. In this latter case, our bounds are (significantly) stronger than those implied by previous work when applied to the population protocol model, under similar assumptions.\", \"Our results are based on a new fine-grained analysis technique, which enables us to provide better concentration for the individual models relative to the mean.\", \"We have added an extension to arbitrary graph topologies via the spectral gap (please see Appendix).\", \"We have added experimental results showing similar performance with respect to DA-SGD and SGP, and almost linear practical scaling. Comparisons to data-parallel SGD and Local SGD were also provided.\", \"We have performed a major overhaul of the paper to address all the reviewer comments.\"]}",
"{\"title\": \"Reviewer Response\", \"comment\": \"We sincerely thank the reviewer for engaging with us, and for the insightful questions.\\nWe address them below. \\n\\nWe begin by addressing the question regarding the relationship to the paper of (Boyd et al.).\", \"this_paper_considers_averaging_via_gossip_in_two_models\": \"1) synchronous gossip, structured in global rounds, where each node interacts with a randomly chosen neighbor, and \\n2) asynchronous gossip, where each node wakes up at times given by a local Poisson clock, and picks a random neighbor to interact with. \\nThe population model is functionally equivalent to the asynchronous gossip model, since the interaction times in the latter model can be \\\"discretized\\\" to lead to pairwise uniform interactions. The key difference between our work and averaging in the gossip model is that their input model is static (node inputs are fixed, and node estimates must converge to the true mean), whereas we study the a dynamic setting, where the models are updated in each round by SGD, and should remain concentrated around the parameter mean as it converges towards the optimum. \\nWe have added this discussion to the revision. \\n\\nWith this in mind, we address the following question:\\n\\n> In (Lian et al.), reading the discussion of implementation details in Sec 3.3 and the description of wait-free continuous training and communication in the appendix, it is clear that the algorithm does not require to be implemented in lock-step rounds. Similarly, Assran et al. describe synchronous and asynchronous versions of their method. Specifically, an asynchronous \\\"overlap\\\" version of stochastic gradient push is described in Sec 3 of that paper, and the analysis in Sec 4 covers this asynchronous version of the method, facilitated by allowing a time-varying matrix P(k).\\n\\nBoth papers describe their baseline algorithms in terms of the synchronous gossip model (see previous answer): interactions are round-based, following a random or deterministic matching in every \\\"round.\\\" \\nFor (Lian et al.), this is described in sections 3.2 and 3.3.1; for (Assran et al.) the details can be found in Section 3, and in Appendix A. \\nNext, to avoid deadlocks/slowdown, both papers proceed to *relax* these synchrony requirements by allowing nodes to perform updates based on stale information: this is described in Appendix A of (Lian et al.), and in the Overlap-SGP section for (Assran et al.). \\n\\nOur key point here is that the resulting relaxed models are not identical to---and do not subsume---the population model/asynchronous gossip model. \\nIn the population model, each step is a uniform random pairwise interaction: in particular, due to randomness, it is possible for a node A to interact several times before some other node B interacts at all. In the synchronous gossip model, even with the consistency relaxation, nodes would have to interact once every \\\"round,\\\" even if they may see stale information. \\n\\nFinally, a reasonable question is whether their analyses can be adapted to analyze the population model. \\nAs detailed in the previous response, this is possible for (Lian et al.), but would yield weaker bounds in the non-convex case compared to our analysis. (For instance, their bounds would only apply after $T \\\\geq \\\\Theta( n^6 )$ steps.) \\nOne could also try to analyze the PP model using the techniques of (Assran et al.), by building the interaction graph via the sequence of random interactions in the PP model. 
We believe this is similar to what the reviewer is suggesting. \\nUnfortunately, in this case, the resulting bound would have no speedup relative to the sequential case (or even negative speedup): this is because the numerator in their convergence bound (the parameter C in Thm1) depends linearly on the diameter of the interaction graph (which must be connected). It is easy to see (e.g. by standard Erdoes-Renyi) that in the PP model this diameter would be linear in $n$. We note that this parameter C is also linear in $\\\\sqrt d$, the model dimension. \\n\\nWe therefore maintain the claim that our analysis yields the best bounds for the PP model, even in the non-convex case. \\n \\n> Regarding what assumption(s) enable linear scaling, the weaker assumption of Lian et al. still ensures that the objectives at different nodes are similar. If no such assumption is made ([...]), would PopSGD still obtain a linear speedup?\\n\\nYes, SGD would still obtain linear speedup, but the assumptions and objective would have to be adjusted. More precisely, in this case, as in (Lian et al.), we would have to assume that the objective $f_i$ for each process $i$, is $L$-smooth and $\\\\ell$-strongly convex. Additionally, we would need to assume bounded gradients for each objective.\\n\\nLet $f(x)=\\\\sum_{i=1}^n f_i(x) / n$ be our global objective function. In this setting, we will be able to achieve the same speedup, but not for the objective $f(\\\\mu_T)-f(x^*)$. Instead, we would get the covergence bound for $\\\\sum_{i=1}^n ( f_i(X_T^i)-f(x*))/n$, where $X_T^i$ is the value of model $i$ at step $T$.\"}",
"{\"title\": \"Not convinced about relationship to previous work\", \"comment\": \"Thank you for your responses.\\n\\nI still do not agree with your assessment of the relationship to previous work. In (Lian et al.), reading the discussion of implementation details in Sec 3.3 and the description of wait-free continuous training and communication in the appendix, it is clear that the algorithm does not require to be implemented in lock-step rounds. Similarly, Assran et al. describe synchronous and asynchronous versions of their method. Specifically, an asynchronous \\\"overlap\\\" version of stochastic gradient push is described in Sec 3 of that paper, and the analysis in Sec 4 covers this asynchronous version of the method, facilitated by allowing a time-varying matrix P(k). I'm also still curious to hear if/how the population protocol model adopted in this paper relates to the asynchronous time model described in Boyd, Ghosh, Prabhakar, and Shah, \\\"Randomized gossip algorithms\\\". \\n\\nRegarding what assumption(s) enable linear scaling, the weaker assumption of Lian et al. still ensures that the objectives at different nodes are similar. If no such assumption is made (so that the objectives/data distributions at different nodes can be arbitrarily different), would PopSGD still obtain a linear speedup?\"}",
"{\"title\": \"Response to individual questions\", \"comment\": \"Please see the main rebuttal text for answers to the main points raised. Below, we address the remaining questions.\\n\\n[R2] Extension to the non-convex case: there is no linear speedup in the number of nodes n.\\n\\nWe can obtain 1 / sqrt n speedup in the general non-convex case, and *linear* speedup under the PL condition. These extensions will be added to the next version.\\n\\n[R2] The paper is too long. \\n\\nWe will do our best to follow the reviewer\\u2019s comments in shortening the paper. \\n\\n[R2] Sampling procedure and parallelism. \\n\\nIn practice, the averaging step does not need to be necessarily performed synchronously. We will clarify this in the paper.\\n\\n[R2] Why are local learning rates required?\\n\\nBecause nodes lack a common notion of time. There is no central scheduler which chooses them to interact: whenever some node completes its minibatch computation, it chooses another node uniformly at random and initiates an averaging step. This other node\\u2019s time step may not be the same as that of the initiator. \\n\\n[R2] In the description of data distribution (page 2, paragraph 4-5, page 4, last paragraph): what if there is more samples than available nodes? do the nodes exchange samples or only gradients? Is it possible to have part of the full dataset on every node without sharing it with anyone? \\n\\nThe nodes can exchange either samples or gradients (the argument would work in either case). The dataset can also be partitioned, as long as the partitions satisfy the technical assumptions. We will clarify this point. \\n\\n[R2] AD-SGD and SGP do not require global synchronization.\\n\\nPlease see the discussion in the main rebuttal.\\n\\n[R2] The AD-PSGD rate does have a linear speedup in the number of workers n, so the claim should be corrected. \\n\\nPlease see the discussion in the main rebuttal. \\n\\n[R2] All lemmas and theorems hold only for the global stepsize \\\\eta_t. Would it also hold with local stepsizes \\\\eta_t^i? \\n \\nYes, the results also hold with respect to local stepsizes--we omitted the additional indexing for brevity. We will clarify this point. \\n\\n[R2] Extensions for arbitrary graphs: would it be possible to have one theorem for all possible graphs and see how graph parameters (e.g. spectral gap or others) influence the convergence rate?\\nYes, we can provide general bounds in terms of the spectral gap. We will add the statement to the next version. \\n\\n[R2] I didn\\u2019t understand the definition of mult constant and what does it control. Re-phrase this paragraph.\\n\\nWe apologize if this was not clear. We will add an example in the next version along the lines of the one provided in the general rebuttal above. \\n\\nWe also thank the reviewer for their detailed comments, which we will implement in the next version.\"}",
"{\"title\": \"Response to individual questions\", \"comment\": \"Please see the main rebuttal text for answers to the main points raised. Below, we address the remaining questions.\\n\\n[R1] The technique (the connection to load-balacing processes) is not what enables linear scaling, but the (strong) assumption that the gradient oracles are identical.\\n\\nThis is not exactly the case. In fact, our analysis in the non-convex case would continue to work under the weaker assumptions of (Lian et al.), although the bounds would be slightly different. We will clarify the use of this assumption in the next version. \\n\\n[R1] Why are decreasing LR schedules necessary?\\n\\nIt is indeed the case that the decreasing LR schedule we describe is necessary (and common) for the convex analysis, but can be generalized in the non-convex case.\"}",
"{\"title\": \"Rebuttal regarding linear speedup, hyperparameters, and experiments\", \"comment\": \"(This text is the second part of the main rebuttal text, which can be found below.)\", \"issue_2\": \"linear speedup, the learning rate regime and hyperparameters.\\n\\nWe first clarify that by linear convergence speedup we mean that the time to convergence is divided by n. (This is what we guarantee in the convex case--please see Thm. 4.1 and its discussion.) \\nThe bounds provided by (Lian et al.) and (Assran et al.) divide convergence time by sqrt{n}, assuming the rest of the parameters are constant. (We believe this is the best possible in the non-convex case barring additional assumptions.)\\n \\nWe are in fact able to obtain the same sqrt dependency on n in the non-convex case, as in the work of (Lian et al.). This is a minor modification to the current argument; we will update the draft to reflect this. \\n\\nIn fact, under the Polyak-\\u0141ojasiewicz condition, we can obtain *linear* speedup in the non-convex case as well. This new extension result will be added to the current submission. \\n\\nThe hyperparameter recipes we use for training (learning rate regime, momentum, batch size, etc.) are exactly the same as that of the standard sequential, but scaled by mult / num_nodes. This was stated in Section 6. \\nFor example, if sequential SGD trains ResNet18 in 90 epochs, decreasing the learning rate at 30 and 60 epochs, then PopSGD with 20 nodes and multiplier 2 would use 90 * 2 / 20 = 9 epochs per node, decreasing the learning rate at 3 and 6 epochs. Thus, the total number of gradient evaluations is 2x larger than sequential, but the end-to-end time could be 10x less. We note that the addition of the multiplier parameter is justified by the theory, since the averaging procedure induces a \\u201cdelay\\u201d which has to be compensated by additional iterations. \\nWe note that this multiplier procedure is similar to that used by (Assran et al.), where they use the speedup of their algorithm relative to data-parallel as the multiplier value (see their section 6.2).\", \"issue_3\": \"Comparisons with other algorithms, e.g. data-parallel SGD, D-SGD, AD-SGD, and SGP [R1, R2].\", \"we_did_provide_comparisons_with_data_parallel_sgd_and_local_sgd\": \"please see Section 6 and Appendix Section A. In particular, we showed that PopSGD converges faster in the convex case, and that it provides significant end-to-end speedup in the non-convex case as well versus both local SGD and data-parallel SGD (~2x end-to-end convergence speedup for ResNet 50). Please see Figure 4 in the original submission for the comparisons.\\nRegarding practical scaling, we recall that Figure 2 (3rd panel) exhibits almost linear scaling for PopSGD. \\n\\nWe are working on adding experiments for D-SGD and SGP.\"}",
"{\"title\": \"Rebuttal regarding relation to previous work\", \"comment\": \"Dear reviewers,\\n\\nThank you for your reviews. We summarize your feedback, discuss it, and outline our planned changes below. We are working on implementing these changes now, and will provide an updated version within the next few days.\", \"this_rebuttal_has_two_parts\": \"the first addresses the relation previous work, whereas the second addresses questions regarding linear scaling and experiments. We also respond to individual comments separately.\\n\\nIt would be extremely useful to us if the reviewers would signal whether they agree with our comments and update plan for the draft.\", \"issue_1\": \"our model and results are subsumed by previous work by (Lian et al.) and (Assran et al.).\\n\\nThis is not the case. As stated in the submission, the analytical models presented in the above papers are round-based: every node is assumed to interact exactly once in each communication round, in particular forming a perfect matching in every round. \\n\\nThis is clear in the model and algorithm descriptions in the papers. Please see e.g. line 5 of Algorithm 1 in (Lian et al.), version https://arxiv.org/pdf/1710.06952.pdf, which describes global averaging in each step; further, (Assran et al.) provide a detailed explanation of why they chose a deterministic perfect matching model, and not random edge sampling. We quote from (Assran et al.): \\n\\\"[...] One could consider designing the matrices P(k) in a stochastic manner, where each node randomly samples one neighbor to send to at every iteration. [...But] random schemes are still not as effective at quickly averaging as deterministically cycling through neighbors in the directed exponential graph. Moreover, with randomized schemes, we are no longer guaranteed that each node receives the same number of messages at every iteration, so the communication load will not be balanced as in the deterministic scheme.\\u201d\\n\\nWe hope this establishes the fact that these papers consider global round-based matching models, and not uniform edge-sampling methods. This fact was stated in our original submission. This distinction is important, since it simplifies the algorithm, connects to a fundamental model in distributed computing, and allows for faster implementation. \\n\\nThe remaining question is whether the techniques of (Lian et al.) and (Assran et al.) could be *modified* to analyze PopSGD. We spent a considerable amount of time looking into this, given the reviewer comments. \\nOur conclusion is the techniques of (Lian et al.) could be adapted to analyze PopSGD in the *non-convex* case, but that the resulting bounds would be weaker, by a polynomial factor in n, the number of nodes. This polynomial difference is linked to the fact that a matrix characterization is used in their analysis. \\nMore precisely, one can instantiate the matrix Wk to be the interaction matrix between only a random pair of nodes, and can relax Assumption 1.2 in the paper to state the Wk only needs to be doubly stochastic *in expectation*. On can then carefully follow through the rest of their argument; the resulting bound would be off by at least a n^2 factor from the bound we obtain on Gamma. \\nWe were not able to apply the analysis technique of (Assran et al.) to obtain better bounds in our setting. \\n\\nWe note however that the above discussion has little bearing on the convex case, which is the main result of our submission. 
The convex case is not considered in these previous papers; in this case, we are the first to provide linear convergence speedup (see speedup discussion below). \\nWe will provide a detailed explanation of these connections in the updated version of our draft.\"}",
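The relaxed matrix condition mentioned in this rebuttal is easy to verify for a single pairwise-averaging step: the interaction matrix is symmetric and doubly stochastic, and hence so is its expectation over uniformly random pairs. An illustrative check (not the paper's analysis):

```python
# The matrix for one pairwise averaging step: coordinates i and j are
# averaged, everything else is untouched; rows and columns all sum to 1.
import numpy as np

def pair_averaging_matrix(n, i, j):
    W = np.eye(n)
    W[i, i] = W[j, j] = 0.5
    W[i, j] = W[j, i] = 0.5
    return W

W = pair_averaging_matrix(5, 1, 3)
print(np.allclose(W.sum(axis=0), 1.0), np.allclose(W.sum(axis=1), 1.0))
```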
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to use population algorithms as a mechanism for implementing distributed training of deep neural networks. The paper makes some claims about the relationship to previous work on (asynchronous) gossip algorithms that appear to be incorrect. In fact, the proposed PopSGD algorithm is very closely related to other methods in the literature, including AD-PSGD (Lian et al. 2017b) and SGP (Assran et al. 2018). I recommend it be rejected due to lack of novelty and missing connections to much related work.\\n\\nThe introduction (page 3) mentions that the \\\"matrix characterization is not possible in the population model.\\\" Here the \\\"matrix characterization\\\" refers to the typical approach in which gossip algorithms (synchronous or asynchronous) are formulated and analyzed. I'd appreciate if the authors could elaborate on this claim. In the study of gossip algorithms, the organization of time into \\\"global rounds\\\" is purely for the sake of analysis; a global, synchronized clock is not required to implement these methods. In fact, the description of the setup appears to be very similar to the asynchronous time model described used to analyze \\\"randomized gossip algorithms\\\" (see the well-cited paper by Boyd, Ghosh, Prabhakar, and Shah). In the PopSGD case, the choice is simply to allow the complete graph (i.e., any agent can interact with any other agent) rather than restricting interactions of a given agent to be among a subset of the other agents (i.e., its neighbors).\\n\\nLet me elaborate on the ways in which PopSGD is similar to AD-PSGD and SGP. PopSGD involves interactions between randomly drawn pairs of agents. The AD-PSGD algorithm of Lian et al. (2017b) also performs updates between pairs of agents drawn randomly at every step. The definition of the PopSGD interaction in (1.1) (or equivalently Alg 1) implies that when agents i and j interact, neither i nor j can interact with another agent until the current interaction completes. The main difference appears to be that in Lian et al. (2017b) agents are organized into a bipartite graph where $n/2$ nodes are \\\"active\\\" and initiate interactions with one of the other $n/2$ \\\"passive\\\" nodes (drawn randomly). This is done for practical reasons - to avoid deadlocks.\\n\\nI also believe that PopSGD can be viewed as a particular instance of the overlap-SGP algorithm proposed in Assran et al. (2018). Overlap-SGP, the way it is described, makes use of one-directional interactions (agent i may receive and incorporate information from agent j without the reverse happening simultaneously). This was also introduced for practical reasons. It is possible for multiple interactions to happen simultaneously, and the pattern of iteractions may vary over time. There is nothing in the analysis, however, that prevents one from restricting to symmetric interactions, in which case one could recover the symmetric updates of PopSGD. To compensate for one-directional interactions, Overlap-SGP tracks an additional variable (the weight, or denominator). 
However, in the case where interactions are always symmetric as in PopSGD, the corresponding update matrices will always be doubly-stochastic, and in this case the weights are always equal to 1. Thus PopSGD really is identical to Overlap-SGP in this special restricted case where interactions are always pair-wise and symmetric. Moreover, Assran et al. (2018) prove that Overlap-SGP achieves a linear speedup in the smooth non-convex setting.\\n\\nThe experiments don't provide any comparison with other related methods, and the discussion in the introduction isn't sufficient to convince me that there are significant differences between these methods. In the experiments, I also wanted to ask about the mult constant. If it is really possible to achieve linear scaling, wouldn't one hope to be able to get away with mult=1?\\n\\nThe decreasing learning rate schedule used in the description and analysis of PopSGD seems very restrictive. Specifically, in the training of deep neural networks it is common to use much different learning rate schedules. Is it fundamentally not possible to do so with PopSGD-type models, or is it just a limitation of the current analysis approach (specifically for convex functions)? What learning rate scheme was used in the experiments?\\n\\nFinally, the introduction (p3) emphasizes that it is the population gradient perspective, and the connection to load-balancing processes, which enable one to achieve linear scaling. I disagree with this statement. While I do agree that convexity alone is not sufficient, the key assumption made here (as well as in other work, such as that of Lian et al.), is that all agents draw gradients from the same distribution; i.e., that all agents have access to independent and identically distributed stochastic gradient oracles. In fact, this is stronger than the assumptions made in Lian et al. (2017a and 2017b), and Assran et al. (2018), where it is only assumed that the gradient oracles at each agent are similar, but not necessarily identical.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper considers scaling distributed stochastic gradient descent to large number of nodes. Paper proposes novel asynchronous variant to decentralized SGD, called PopSGD. It models asynchrony with the population model. Paper theoretically analyzes the proposed method and shows that in the convex case PopSGD has a linear speedup in the number of nodes compared to the sequential training on one node. However, in the non-convex setting, the PopSGD rate doesn\\u2019t have a linear speedup. The paper also provides experimental evaluation of PopSGD where they scale PopSGD up to 1000 of nodes in the convex optimization; and also apply PopSGD to neural network training on ImageNet.\\n\\nMy score is weak reject. The major reason is that it is not clear how does this work theoretically and experimentally compares to the previous asynchronous variants of decentralized SGD (Lian et al. (2017b) AD-PSGD and Assran et al. (2018) SGP) or to the centralized SGD (parallel mini-batch SGD) baseline; and what are the benefits of the proposed method.\", \"concerns_that_should_be_addressed\": \"1. No theoretical and experimental comparison with the baselines (see above).\\n\\n2. Extension to the non-convex case: there is no linear speedup in the number of nodes n. Does this mean that it is better to use Centralized SGD (which has speedup in the number of nodes)? The comparison should be made explicit. \\n\\n\\n3. The paper is a bit too long (10 pages) and contains some repetitions. Consider to shorten a bit. (e.g. procedure of splitting data between nodes was described twice on page 2 and 4; proof overview on page 6 could be merged with two steps on page 7)\\n\\n\\n4. The procedure how to sample nodes uniformly was not discussed in the paper. Moreover, it is also not clear why \\\\Theta(n) updates could be done in parallel. When intersection happens, many nodes would have to wait for the previously selected pairs to finish computation. \\n\\n\\n5. Why are the local learning rates required? When scheduler samples the nodes uniformly it can also transmit them the global time count.\\n\\n\\n6. In the description of data distribution (page 2, paragraph 4-5, page 4, last paragraph): what if there is more samples than available nodes? do the nodes exchange samples or only gradients? Is it possible to have part of the full dataset on every node without sharing it with anyone? \\n\\n\\n7. Page 3, line 3: Lian et al. (2017b) and Assran et al. (SGP) also showed that they don\\u2019t require global synchronization. \\n\\n\\n8. Page 4, line 2: The AD-PSGD rate does have a linear speedup in the number of workers n, so the claim should be corrected. \\n\\n\\n9. Page 7: all lemmas and theorems hold only for the global stepsize \\\\eta_t. Would it also hold with local stepsizes \\\\eta_t^i? \\n\\n\\n10. Extensions for arbitrary graphs: would it be possible to have one theorem for all possible graphs and see how graph parameters (e.g. spectral gap or others) influence the convergence rate? \\n\\n\\n11. Experiments: I didn\\u2019t understand the definition of mult constant and what does it control. 
Re-phrase this paragraph.\", \"minor_comments\": [\"page 1, last line of the paragraph 2: \\u201cparameter obtained by node i at time t\\u201c -> \\u201cstoch. gradient obtained by node i at time t\\u201c?\", \"page 2, line 4: \\u201cvariants(e.g.\\u201d -> \\u201cvariants (e.g.\\u201d\", \"Usually \\\\mu is used for strong convexity parameter.\", \"page 3, paragraph 2, \\u201cwe emphasize that convexity is not enough\\u2026\\u201d the purpose of this sentence is unclear, what is enough then or why is that important to know?\", \"page 3, related work: Nedic at al. Nedic et al. (2017) -> Nedic et al. (2017). The same for the other citation.\", \"page 3, related work: PP model is not defined.\", \"How can PP model result in a multigraph? If two samples pairs have the same nodes, then they need to be processed sequentially, so it can be modeled with two graphs for different time steps.\", \"Population protocol model (page 4): \\u201cstates store real numbers\\u201d -> can they store vectors instead?\", \"page 5, estimating time and the learning rate section: what happens if the V^i is equal to V_j? Who updates its value?\", \"Figure 1(a) was not discussed at all in the text.\", \"Page 6, Notation and preliminaries: why it is required that T = O(poly n) is not explained.\"]}"
]
} |
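As a reading aid for the PopSGD record above: the review repeatedly refers to the population-protocol interaction pattern in which a scheduler samples a pair of nodes uniformly at random, each node takes a local SGD step, and the pair then averages parameters. The following is a minimal sketch of that pattern on a toy convex problem; the function names, the quadratic objectives, and all constants here are illustrative assumptions, not taken from the paper under review.

```python
import numpy as np

def pop_sgd_interaction(models, grads, lr, rng):
    """One population-protocol interaction, as described in the review:
    sample a pair of nodes uniformly at random, let each take a local
    SGD step on its own objective, then average the pair's parameters.
    A sketch only, not the paper's reference implementation."""
    i, j = rng.choice(len(models), size=2, replace=False)
    for k in (i, j):
        models[k] = models[k] - lr * grads[k](models[k])
    avg = 0.5 * (models[i] + models[j])
    models[i], models[j] = avg.copy(), avg.copy()

# Toy usage: n nodes, node k holds the local objective f_k(x) = ||x - c_k||^2.
# The average objective is minimized at mean(c_k), which the node models
# should approach (up to O(lr) noise) under repeated interactions.
rng = np.random.default_rng(0)
n, d = 8, 4
centers = rng.normal(size=(n, d))
models = [rng.normal(size=d) for _ in range(n)]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
for _ in range(5000):
    pop_sgd_interaction(models, grads, lr=0.02, rng=rng)
print(np.linalg.norm(np.mean(models, axis=0) - centers.mean(axis=0)))
```

Note that this sketch runs interactions sequentially with a single global counter; the review's concerns 4 and 5 (how pairs are sampled without conflicts, and why local learning rates are needed) are precisely about the parts this toy loop glosses over.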
B1ecVlrtDr | Symmetric-APL Activations: Training Insights and Robustness to Adversarial Attacks | [
"Mohammadamin Tavakoli",
"Forest Agostinelli",
"Pierre Baldi"
] | Deep neural networks with learnable activation functions have shown superior performance over deep neural networks with fixed activation functions for many different problems. The adaptability of learnable activation functions adds expressive power to the model, which results in better performance. Here, we propose a new learnable activation function based on Adaptive Piecewise Linear units (APL), which 1) gives equal expressive power to both the positive and negative halves of the input space and 2) is able to approximate any zero-centered continuous non-linearity in a closed interval. We investigate how the shape of the Symmetric-APL function changes during training and perform ablation studies to gain insight into the reason behind these changes. We hypothesize that these activation functions go through two distinct stages: 1) adding gradient information and 2) adding expressive power. Finally, we show that the use of Symmetric-APL activations can significantly increase the robustness of deep neural networks to adversarial attacks. Our experiments on both black-box and open-box adversarial attacks show that commonly-used architectures, namely Lenet, Network-in-Network, and ResNet-18, can be up to 51% more resistant to adversarial fooling by only using the proposed activation functions instead of ReLUs. | [
"Activation function",
"Adaptive",
"Training",
"Robustness",
"Adversarial attack"
] | Reject | https://openreview.net/pdf?id=B1ecVlrtDr | https://openreview.net/forum?id=B1ecVlrtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"aqAKOavYDJ",
"ryeYA91hjB",
"HygGNcyhjS",
"BJliPOk2sS",
"rklJl_k3oB",
"Hye1Aka8qB",
"H1xhh7JecS",
"SyxadDtqFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744544,
1573808848985,
1573808682120,
1573808227059,
1573808103308,
1572421575391,
1571972019722,
1571620725486
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2258/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2258/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2258/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2258/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2258/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2258/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2258/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This work presents a learnable activation function based on adaptive piecewise linear (APL) units. Specifically, it extends APL to the symmetric form. The authors argue that S-APL activations can lead networks that are more robust to adversarial attacks. They present an empirical evaluation to prove the latter claim. However, the significance of these empirical results were not clear due to non-standard threat models used in black-box setting and the weak attacks used in open-box setting. The authors revised the submission and addressed some of the concerns the reviewers had. This effort was greatly appreciated by the reviewers. However, the issues related to the significance of robustness results remained unclear even after the revision. In particular, as pointed by R4, some of the revisions seem to be incomplete (Table 4). Also, the concern R4 had initially raised about non-standard black-box attacks was not addressed. Finally, some experimental details are still missing. While the revision indeed a great step, the adversarial experiments more clear and use more standard setup be convincing.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Authors' Responses to Review #4\", \"comment\": \"We thank the reviewer for the effective review and constructive feedback! We are glad that the reviewer found the idea promising.\\n\\n1. Reviewer 4 makes a great point that in section 3.1, it is hard to find the disadvantages of APL which results in ambiguity in the motivation. We agree with this point so we revised the writing of section 3.1 to shed light on the disadvantages of the normal APL. Then, in section 3.2 we enumerate the advantages of S-APL over APL so the reader can understand the motivation behind proposing S-APL.\\nThe added exposition in section 3.1 explaining the shortcomings of APL can be summarized as follows:\\nAPL units are not zero-centered as they can have a non-zero output for an input of zero. This behavior provides no apparent beneficial purpose and is not present for S-APLs. Furthermore, APLs can only represent piecewise linear functions whose output g(x) = x for x > u, for some scalar u. This significantly restricts the class of piecewise linear functions that APLs can express. S-APLs, on the other hand, do not have this restriction, broadening the scope of functions they can approximate. We have included this in the updated text.\\n\\n2. We appreciate reviewer 4 suggested that performing a statistical significance test makes the results more interpretable. In the general revisions, we addressed this issue by adding a new section in the appendix, under which we performed statistical tests to provide t-values and p-values. We hope that the provided analysis had brought more interpretability for the results of table 1. As one can see almost all the numbers are statistically significant.\\n\\n3. Due to the comment from R4 on adversarial attack experiments, we have added CW-L2 as another open-box attack which is generally much more powerful than FGSM. As the experiments show, although the S-APL activated network is less robust to CW-L2 attack than FGSM attack, it is still showing more robustness in comparison to ReLU, APL, and Swish activated networks. \\nOther than that, we have added APL and Swish activated networks to all the experiments in section 6. \\nR #4 also brought that since failed adversarial attacks contribute positively to the score, avg(|Z_{true} - Z_{adv}|) is useless for judging adversarial robustness. We think it is necessary to mention that as we stated in section 6, the metric avg(|Z_{true} - Z_{adv}|) is the average over adversarial images for which the network is fooled. Considering this clarification, we think the metric can reasonably represent the robustness to attacks.\"}",
"{\"title\": \"Authors' Responses to Review #1\", \"comment\": \"Thank you for the detailed reviews. In the general post, we have addressed your chief concern regarding the use of more up to date architecture. We sincerely hope R1 can revisit the rating in light of our revision and response.\\n\\n1. On the complexity of the S-PAL and marginal gain:\\nWe thank R1 for this comment so we can provide more clarification on the benefits of S-APL.\\nFirst, the value of S and the hinges\\u2019 positions do not need to be tuned for different applications. We use the same value of S and the same hinge positions for all experiments. Since we use batch-normalization before the activation function, the hinge position can be set at intuitive locations that correspond to standard deviations. We set the hinge locations at 0, 1, 2, and 2.5 for both the positive and negative sides of the activation function. We add details for how we choose these values in the appendix. In terms of the complexity related to the number of parameters, the S-APL only adds 8 parameters per layer.\\nIn terms of the benefits of using S-APL, we aimed to show that the S-APL unit is the only learned activation function that both improves classification accuracy as well as makes the network more robust to adversarial attacks. Although the improvement in classification tasks seems marginal, the competition in this area is seeking 0.01 improvements.\\n\\n2. On the use of more updated networks, we agree with the reviewer and we have added ResNet-18 and EffectiveNet to Table 1. The results show that S-APL improves classification performance in those cases as well. Both of these networks are proposed recently are they are commonly used networks in the community.\\n\\n3. R1 also brought the important question, \\u201cis there any intuition why a complicated activation function is more robust to adversarial attack?\\u201d\\nThere are a considerable number of research focusing on improving activation functions as a defense mechanism against adversarial attacks such as [1] and [2]. The authors in [3] also provided a theoretical justification for the important role of activation function as a reason for the vulnerability of DNNs against adversarial foolings. They showed that vulnerability is caused by a failure to suppress unusual signals within network layers. As a remedy, they propose the use of Symmetric Activation Functions (i.e. even functions) in non-linear signal transducer units. These units suppress signals of exceptional magnitude. THey mathematically proved that symmetric networks can also perform classification tasks to arbitrary precision. On the other hand, the learnable activation has shown great superiority over fixed activation in the past few years. Our paper is taking advantage of both ideas of learning the activation and symmetric shape so it can be beneficial in both aspects.\\n\\n[1] Rakin, Adnan Siraj, et al. \\\"Defend deep neural networks against adversarial examples via fixed anddynamic quantized activation functions.\\\" arXiv preprint arXiv:1807.06714 (2018).\\n[2] Wang, Bao, et al. \\\"Adversarial defense via data dependent activation function and total variation minimization.\\\" arXiv preprint arXiv:1809.08516 (2018).\\n[3] Zhao, Qiyang, and Lewis D. Griffin. \\\"Suppressing the unusual: towards robust cnns using symmetric activation functions.\\\" arXiv preprint arXiv:1603.05145 (2016).\"}",
"{\"title\": \"Authors' Responses to Review #2\", \"comment\": \"We thank the reviewer for the detailed reviews and constructive feedback! We apologize for the somewhat delayed response; it took us time to run additional experiments and add more careful analysis so that we can present an improved and more polished paper to everyone. We appreciate your understanding.\\n\\nIn the following, we address the main concerns of the reviewer in the order we received.\\n\\n\\n1. The reviewer makes a great observation that the S-APL is not clearly proposed and we acknowledge that the proposed formulation for S-APL and its following restrictions were not aligned with the assumption of symmetry. For that purpose, both a^i_+ = a^i_- and b^i_+ = b^i_- have to be satisfied. \\nThe restrictions mentioned are to demonstrate that S-APLs can hypothetically take on a symmetric shape. In our experiments, we see that the final shapes are approximately symmetric (i.e. in Figure 2 and Figure 3). We have updated the language used in the paper to make this clear.\\nThe reviewer also brought the point that \\u201cparameters are shared across layers\\u201d is not a clear statement. We have updated that paragraph\\u2019s language by \\u201cS-APL shares the variables a_{+}^s, b_{+}^s, a_{-}^s, and b_{-}^s among all the neurons of a layer (e.i. h_i(x, S) does not depend on i)\\u201d\\n\\n2. We appreciate the useful comment \\u201cTheorem 3.1 does not seem to prove the approximation ability of S-APL\\u201d from the reviewer. We agree that we needed to prove that h(x, S) can approximate arbitrary piecewise linear function (i.e., g(x, S)). This could be done by setting the a^i and b^i s to mimic g(x, S), however, we found it more beneficial to provide a more straightforward proof. We fixed this issue by furnishing a new proof which directly shows that S-APL can approximate any M-Lipschitz continuous functions in an interval of real numbers. The new proof is provided as Theorem 3.1 is in the updated version.\\n\\n3. In regards to another comment \\u201csensitivity of optimization on the initial value\\u201d, we have added the loss trajectory of the S-APL initialized with the final shape of a trained S-APL. As one can see, this new initial state ends up with a lower loss than fixed trained S-APL. However, the ReLU init S-APL is still outperforming other initial states. This observation is strengthening our hypothesis of the two required stages of accelerating gradient and improving expressibility. \\n\\n4. Due to a great suggestion from the reviewer about comparing the robustness of networks equipped with activations other than ReLU and S-APL, we added Swish and plain APL to all the experiments of section 6. All the additional experiments show that S-APL has higher robustness to other activations.\"}",
"{\"title\": \"Paper update overview\", \"comment\": \"We sincerely appreciate all the reviews, they give high-quality comments on our paper with a lot of constructive feedback. In the revised paper, we did our best to address the concerns and suggestions to strengthen our paper. We sincerely hope reviewers revisit the rating in light of our revision and response. The following summarizes our major revisions. Please see our rebuttal for the detailed discussion.\", \"general_revisions\": \"1. Based on the helpful comment of Reviewer 2 on the ambiguity of the experimental conditions, we have added a new subsection \\u201cExperiments' Details and Statistical Significance\\u201d in the appendix where we specified all the details, conditions, and hyper-parameters of the experiments. Adding this section can be super useful for the readers and enable them to reproduce the experiments\\u2019 results.\\n\\n2. Due to another insightful comment from Reviewer 2 on comparing the robustness of S-APL with more activation functions, we have added two of the recent and successful activations namely, APL and Swish to all the experiments of section 6. These additional experiments further demonstrate the superiority of the S-APL activated networks on the robustness to adversarial attacks.\\n\\n3. R1 makes a great point that it is not clear how the main hyper-parameter of S-APL, \\u201cS\\u201d is chosen. To address this, we have attached a subsection in the appendix titled \\u201cNumber of Hinges and the Symmetry of S-APL\\u201d. Within this section, we empirically showed how to choose the parameter \\u201cS\\u201d and how to reduce the complexity of the activation by using a shared S-APL for all the neurons of one layer.\\n\\n4. A great comment by R1, stated that the architectures used in our experiments are not the most updated ones and adding more recent networks would make it more convincing for the community to adopt S-APL. We appreciate this helpful comment. In regards to that, we have added ResNet-18 and EfficientNet as two of the up to date and highly used network in section 4.\\n\\n5. Reviewer 4 made a clever point about the interpretability of the experiments in section 4. We have performed several statistical tests to provide p-values and show the statistical significance of the numbers presented in section 4. Within the newly added section in the appendix, we have calculated the statistical significance of the results for each of the networks in Table 1.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a learnable piece-wise linear activation unit whose hinges are placed symmetrically. It gives a proof on the universality of the proposed unit on a certain condition. The superiority of the method is empirically shown. The change of the activation during training is analyzed and insight on the behavior is provided. The robustness to adversarial attacks is also empirically examined.\", \"this_paper_discusses_a_very_basic_component_of_neural_network_models\": \"activation function. Thus, it should be of interest to many researchers. The proposed method is simple and seems easy to use in real settings. A number of experiments are conducted to validate the method and the results look promising. The experiments in Section 5 is particularly interesting. It might give some hints for the following studies.\\n\\nHowever, there are several things to be addressed for acceptance.\\n\\n1) What is actually proposed is not very clear. \\n\\nS-APL is formulated in Equation 2. However, there are some discussion after that which changes or restricts the equation. For example, it seems that b_i^s^+ = b_i^s^- is assumed throughout the paper. In that case, it should be just reflected in Equation 2. In the third paragraph of Section 3.2, it is mentioned that h_i^s(x) = h_i^s(-x) with b_i^s^+ = b_i^s^-. However, it should also assume that a^s^+ = a^s^-. From the experiments. apparently, a^s^+ = a^s^- is not assumed. It seems that the method has symmetry only for the hinge locations. \\n\\nIn the first paragraph of Section 3.2, it is implied that parameters are shared across layers. It is not very clear what is shared and what is not. Please make that part clear. It will make it easier to understand the experimental settings, too.\\n\\n2) Theorem 3.1 does not seem to prove the approximation ability of S-APL.\\n\\nIt is clear that g(x, S) can represent arbitrary h(x, S), but I am not sure if it is clear that h(x, S) can represent arbitrary g(x, S). It should also depend on the conditions on a^s^+, a^s^-, b_i^s^+, b_i^s^-. I think it needs to prove that h(x, S) can approximate arbitrary piecewise linear function (i.e., g(x, S)) if you want to prove the approximation ability of h(x, S).\\n\\nEquation 4 seems to assume that all intervals are the same (i.e., \\u2200i, B_i - A_i = (B-A) / S). It should be stated explicitly. This relates to the problem 1).\\n\\nI may not understand some important aspect. I am happy to be corrected.\\n\\n3) Experimental conditions are not clear.\\n\\nPlease cite the papers which describe the architecture of the models used in the experiments. The effectiveness of the proposed method should depend on the network architecture and it is importable to be able to see the details of the models.\\n\\n4) On the sensitivity of optimization on the initial value.\\n\\nIt is interesting to see that \\\"fixed trained S-APL\\\" is not comparable with \\\"S-APL positive\\\". If the hypothesis in the paper is correct, it is natural to assume that \\\"fixed trained S-APL\\\" also has some issue on training. It would be interesting to see experimental results with \\\"initialized with trained-S-APL\\\" and \\\"S-APL positive with non-zero initial value\\\". 
It is a bit weird to observe that \\\"S-APL positive\\\" never becomes non-zero for x < 0.\\n\\n5) Comparison results with other activation units in Section 6.\\n\\nThe proposed method is compared only with ReLU. It is important to see comparisons with other activations such as the plain APL.\", \"some_other_minor_comments\": \"It is quite interesting that objects are actually modified for adversarial attack for the proposed method in Figure 5. It would be interesting to have some consideration on it.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, a new activation function, i.e. S-APL is proposed for deep neural networks. It is an extension of the APL activation, but is symmetric w.r.t. x-axis. It also has more linear pieces (actually S pieces, where S can be arbitrarily large) than the existing activation functions like ReLU. Experimental results show that S-APL can be on par with or slightly better than the existing activation functions on MNIST/CIFAR-10/CIFAR-100 datasets with various networks. The authors also show that neural networks with the proposed activation can be more robust to adversarial attacks.\\n\\nFirst of all, the activation function is much more complicated than the existing ones, as it has to determine the parameter S and the hinge positions. However, the gain is marginal as shown in Table 1. Besides, the authors never tell how to choose S and the hinge positions.\\n\\nSecondly, the neural networks used in the experiments are quite outdated. And the error rates shown in Table 1 are far away from state-of-the-art. Why don't you choose a latest network such as ResNet/DenseNet/EfficientNet and replace the activation with S-APL? The results could be more convincing.\\n\\nI am not an expert in adversarial attack. But is there any intuition why a complicated activation function is more robust to adversarial attack? Again, most of the models used in Table 2 are quite old (Lenet5, Net in Net, CNN).\\n\\nIn a word, the proposed activation function is unnecessarily complicated and the gain is not justified with the latest models and not significant enough to convince people to adopt it.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes the S-APL (Symmetric Adaptive Piecewise Linear) activation function, based on the APL activation function proposed by (Agostinelli et al, 2014). This activation function is constructed as a piecewise linear function that is learned concurrently with training, and, in the case of S-APL, the activation function is forced to be symmetric. S-APL is claimed to help both with trainability and robustness of neural networks to adversarial examples.\\n\\nOverall, the idea is clearly presented, but appears to have many critical flaws, enumerated below:\\n1. It is unclear what the motivation for the symmetry is upon first reading of the paper, i.e., Section 3.1 starts by saying \\\"To overcome the shortcomings of APL,\\\" but up to this point \\u0010I cannot find any exposition that explains what these shortcomings are---the beginning of section 3 just presents the formulation of APL and does not discuss its advantages/disadvantages.\\n\\n2. The experimental results concerning network training are very hard to interpret, as they lack error bars, confidence intervals, and many critical experimental details (e.g. how many networks were trained, the hyper parameters for training, etc.) For these results to be interpretable, the authors should include a detailed description of the environment under which the experiments were performed, and present confidence intervals which demonstrate the significance of the improvement attained by S-APL. (As it stands, it is hard to tell whether, e.g., a 0.1% improvement on CIFAR-10 should be considered significant.)\\n\\n3. The adversarial evaluation section should be substantially revised to address several important flaws:\\n(a) The evaluation for black-box attacks is done in very non-standard threat models (e.g. label-only 3-pixel black-box attacks) that do not seem to be relevant to the black-box robustness of a system. Even under these threat models, the authors should also use more powerful label-based black-box attacks, such as the BoundaryAttack [1] or the label-only attacks in [2] or [3].\\n(b) The avg(|Z_{true} - Z_{adv}|) has an absolute value around Z_{true} - Z_{adv}, which means that failed adversarial attacks actually contribute positively to the score, which severely limits its usefulness for judging adversarial robustness. \\n(c) The open-box (white-box) setting only uses FGSM to evaluate the robustness, which is known to be a weak attack [4] and is recommended against for evaluating adversarial robustness [5]. The methods should be evaluated with PGD or CW-attacks to better judge robustness.\\n(d) Overall, the results presented are not sufficient to evaluate the adversarial robustness of the S-APL activation. A first step towards remedying this would be to follow the evaluation protocol suggestions outlined in [5].\\n\\nThe idea does seem promising, however, and the paper could be substantially improved by including the necessary introduction, evaluation protocols, and experimental details. 
However, this would require a substantial change to the paper, and thus my recommendation for now is to reject.\\n\\n[1] https://arxiv.org/abs/1712.04248\\n[2] https://arxiv.org/abs/1804.08598\\n[3] https://arxiv.org/abs/1807.04457\\n[4] https://arxiv.org/abs/1802.00420\\n[5] https://arxiv.org/abs/1902.06705\"}"
]
} |
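For readers trying to picture the S-APL unit discussed throughout the record above: the rebuttals pin down mirrored hinge locations at 0, 1, 2, and 2.5 on both half-axes, 8 slope parameters per layer shared across that layer's neurons, and a ReLU-like initialization. The module below is a speculative reconstruction from those details, for intuition only; it is not the authors' implementation, and the exact parameterization in the paper's Equation 2 (the a/b variables) may differ.

```python
import torch

class SAPLSketch(torch.nn.Module):
    """Symmetric piecewise-linear activation: a sum of hinge functions
    with mirrored hinge locations on the positive and negative half-axes.
    Slope parameters are shared across all neurons of the layer, giving
    2 * len(hinges) = 8 learnable parameters per layer, matching the
    count stated in the authors' response to R1."""

    def __init__(self, hinges=(0.0, 1.0, 2.0, 2.5)):
        super().__init__()
        self.register_buffer("h", torch.tensor(hinges))
        s = len(hinges)
        self.a_pos = torch.nn.Parameter(torch.zeros(s))  # slopes for x > +h_i
        self.a_neg = torch.nn.Parameter(torch.zeros(s))  # slopes for x < -h_i
        self.a_pos.data[0] = 1.0  # start as ReLU, per the "ReLU init" runs

    def forward(self, x):
        pos = torch.clamp(x.unsqueeze(-1) - self.h, min=0.0)   # hinges at +h_i
        neg = torch.clamp(-x.unsqueeze(-1) - self.h, min=0.0)  # hinges at -h_i
        return pos @ self.a_pos + neg @ self.a_neg

# Usage: act = SAPLSketch(); y = act(torch.randn(32, 128))
```

Because batch normalization precedes the activation (per the rebuttal), fixed hinge locations correspond to standard deviations of the pre-activation, which is why they would not need per-task tuning in a sketch like this.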
SJeFNlHtPS | Hidden incentives for self-induced distributional shift | [
"David Scott Krueger",
"Tegan Maharaj",
"Shane Legg",
"Jan Leike"
] | Decisions made by machine learning systems have increasing influence on the world. Yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in online learning for applications such as content recommendation, where the (choice of) content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users. Generally speaking, it is possible for an algorithm to change the distribution of its own inputs. We introduce the term self-induced distributional shift (SIDS) to describe this phenomenon. A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline. Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift (HIDS), and aim to diagnose and prevent problems associated with hidden incentives. We design a simple environment as a "unit test" for HIDS, as well as a content recommendation environment which allows us to disentangle different types of SIDS. We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy. | [
"distributional shift",
"safety",
"incentives",
"specification",
"content recommendation",
"reinforcement learning",
"online learning",
"ethics"
] | Reject | https://openreview.net/pdf?id=SJeFNlHtPS | https://openreview.net/forum?id=SJeFNlHtPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"IvA7JNBQZF",
"SkgTmWnooB",
"HkgaesWFsr",
"SJx3qIZtoB",
"HJg5_UWFsB",
"rkxvzVbKoS",
"S1xlW-zf9S",
"H1e8RBdx9r",
"rJxhKoSaFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744516,
1573794085036,
1573620469184,
1573619348145,
1573619313557,
1573618702890,
1572114680115,
1572009421575,
1571801988393
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2256/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2256/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2256/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2256/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2256/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2256/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2256/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2256/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper shows how meta-learning contains hidden incentives for distributional shift and how a technique called context swapping can help deal with this. Overall, distributional shift is an important problem, but the contributions made by this paper to deal with this, such as the introduction of unit-tests and context-swapping, is not sufficiently clear. Therefore, my recommendation is a reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for your reply to my questions.\\n\\nMy main points are that\\n\\n1) In interactive systems, it is well known that there are some distributional shift with simple reasoning. We don't need a \\\"unit-test\\\" to verify that.\\n\\n2) Practically, what matters is how we can reduce negative consequences from distributional shift. It is also helpful if we can leverage the distributional shift to achieve desirable outcomes.\\n\\n3) To promote beneficial effects from distributional shift, we should focus on the design of reward function or metrics that align well with our desired goals. Just preventing an algorithm to induce distributional shift is less relevant.\\n\\nIn particular, with context swapping, do we get better performing learning algorithms compared with not doing context swapping? If not, what is the benefit of context swapping? There are many dumb models, e.g. just predicting constants, that do not have incentives for distributional shifts, but they are not what we want.\\n\\nFor example, in recommendation system, it is actually useful if we can account for and leverage the distributional shift and in the end achieve better rewards. One such case is the recent online deployment of reinforcement learning algorithms in Youtube [1].\\n\\n[1] Chen, M., Beutel, A., Covington, P., Jain, S., Belletti, F. and Chi, E.H., 2019, January. Top-k off-policy correction for a REINFORCE recommender system. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (pp. 456-464). ACM.\"}",
"{\"title\": \"Authors' response: requesting more feedback, and explaining why we included control experiments\", \"comment\": \"We appreciate the feedback and would love to hear more detailed suggestions for improvements along the lines of your final paragraph. Highlighting any parts of the experimental set-up that you found unclear, would be especially useful, since we believe we\\u2019ve already described it in sufficient detail. We\\u2019ll be happy to make all of the specific changes you\\u2019ve suggested.\\n\\nRegarding \\u201crestrict[ing] the claims to PBT and PBT-like methods\\u201d, we\\u2019re sorry this wasn\\u2019t clear. I assume you\\u2019re referring to the bottom half of page 6, where we discuss Q-learning and REINFORCE. I\\u2019ll attempt to clarify this now, and request that you please let us know:\\n1) what if anything is still unclear to you,\\n2) whether you still think we shouldn\\u2019t have made such claims\\n3) what you think could be done to make this easier to follow.\\nWe included these additional experiments at the request of a previous reviewer, and we believe they shed some light on our results using PBT. We chose these experiments precisely because we view these other algorithms as similar to PBT in important (but different) ways, and we believe these experiments serve as controls to support our hypotheses as to why PBT has this effect. \\nWe certainly believe that future work should aim for a more conclusive and general understanding of how choice of learning algorithms influence which incentives are pursued. The connection we draw with meta-learning is just one example; including what we have already observed and hypothesized in these control experiments seems likely to help future researchers develop such understanding.\"}",
"{\"title\": \"Authors' response: Answering questions and explaining relevance and relation to alternative approaches (...continued)\", \"comment\": \"\", \"regarding_the_final_paragraph_of_your_review_on_the_relevance_of_our_work\": \"There are already well-known issues related to SIDS, and works addressing them. If I understand correctly, we are on the same page about SIDS, but you are skeptical that:\\n1) HIDS presents new challenges that require new approaches.\\nand/or\\n2) The challenges HIDS raised have (or will have) significant practical relevance.\\nand/or\\n3) Our work makes useful progress in addressing challenges of HIDS. \\nIt would help us to know which of the above describes your position! Our primary aim is to explain HIDS, the kinds of problems we imagine it might lead to, and some ideas for how they could be addressed. We hope that a clear exposition of the problems related to HIDS will motivate other researchers to come up with their own ideas for how to diagnose and address these problems in practice.\\n\\nWhile we agree that it\\u2019s unclear how significant our work is for current issues with real-world systems, we believe we provide important insights! Two in particular are:\\n1) Viewing the learning algorithm (and not just the loss/reward function) as an important aspect of specification.\\n2) Learners trained with supervised learning + meta-learning can pursue incentives for SIDS (similarly to RL agents).\\nIn general, we believe that understanding when and why incentives are hidden or revealed is an important and interesting scientific question that deserves attention, and hope to clearly communicate this question and our motivation for studying it.\"}",
"{\"title\": \"Authors' response: Answering questions and explaining relevance and relation to alternative approaches\", \"comment\": \"Thank you for your thoughtful review.\\nWe\\u2019re glad that you find the topic important, and hope to clarify what we see as the relevance of our work.\\nWe\\u2019re striving to make our paper as clear as possible, and would greatly appreciate any further help in doing so, or identifying other ways to address the challenges of SIDS and especially HIDS.\", \"addressing_your_questions\": \"\", \"q1\": \"I am not very familiar with this series of research but I am wondering why the paper focuses on meta-learning and its connection to HIDS. Does it use meta-learning as a tool to identify HIDS?\", \"a1\": \"We believe we are the first to study HIDS. We think meta-learning provides a clear illustration of why HIDS might lead to easily-overlooked problems; meta-learning is often framed as a method of finding a better solution to a given problem, but in fact it can also change what counts as a good solution, and our impression is that many researchers find that surprising (sentence 1 of final paragraph of section 1).\", \"q2\": \"Does [we] use meta-learning as a tool to identify HIDS?\", \"a2\": \"Yes, you can view it this way. But the point is that one should be aware of which incentives are hidden/visible, and also be aware that seemingly innocuous changes in the learning algorithm can change that (sentence 3 of section 3.2).\", \"q3\": \"It is well known normally an interactive system that can change its inputs have distributional shift. What other information does this \\\"unit-test\\\" inform us?\", \"a3\": \"We agree this is well known. The unit test tells us whether a given learner is indifferent to such changes, or will actively seek to induce them. Consider our example of content recommendation. Content recommendation can change user interests whether or not the learner is seeking to induce such a shift, but we should be more concerned about algorithms that view changing user interests as a legitimate strategy to improve performance than those that are indifferent to such changes.\", \"q4\": \"Similarly, how does \\\"context swapping\\\" mitigate distributional shift? From the experiments, it dumbs the meta-learning algorithms and make it pass the \\\"unit-test\\\", but I am not sure what other practical benefits it can bring to improve real systems.\", \"a4\": \"TODO: an example\\nContext swapping doesn\\u2019t make the meta-learning algorithms less smart, it merely changes their incentives, aiming to remove incentives for distributional shift. However, as we noticed in the content recommendation experiments, it doesn\\u2019t work well in situations where learners are unable to track any distributional shift which does occur. In other words, it\\u2019s only a starting point for managing learners\\u2019 incentives, not a complete solution.\", \"regarding_your_2nd_to_last_paragraph\": \"First, I'm not precisely sure what you are suggesting as an alternative. Can you be more concrete, or provide an example?\\nIt seems like you are suggesting that RL algorithms can learn to model distributional shift in the environment and account for it.\\nWhile this is true, this does not address the issue of whether/when an RL agent should view SIDS as a legitimate part of a solution strategy. 
By default, RL algorithms aim to maximize returns by any means, viewing any form of SIDS as something which should be leveraged to drive up performance.\\nWe could try to address this by providing a reward function that penalizes only those SIDS we think are undesirable. For example, in the content recommendation setting, instead of seeking learners indifferent to changes in user preferences, we could attempt to provide the learner with a specification that would distinguish between good changes (e.g. based on informing users) and bad changes (e.g. based on manipulating or misinforming users). \\nHowever, we have concerns about the scalability and tractability of this as a fully general approach, since it seems to rely on the learner having thorough knowledge of human preferences over different outcomes. Such an approach may be impractical and error-prone, since it may require the reward to be a function of the entire history of interaction. It may also be undesirably value-laden, since different users may have different ideas about what forms of influence are (il)legitimate.\\nThis is discussed briefly in paragraph 3 of the introduction.\"}",
"{\"title\": \"Authors' response: clarifying a few points and requesting elaboration\", \"comment\": \"Thanks for you encouragement and detailed comments.\\nWe\\u2019d greatly appreciate further input on how to improve our submission!\\n\\nFirst, to clarify: context swapping is meant to remove *incentives* to induce (or prevent) distributional shift, but not to prevent or reduce SIDS, which may occur regardless of the learner\\u2019s incentives. For example, Pennycook et al.[1] find evidence that the \\u201cillusory truth effect\\u201d can lead users to believe in \\u201cfake news\\u201d; this would happen regardless of whether an intelligent content recommendation system was trying to induce such an effect, or merely showed a user fake news articles because that\\u2019s what it predicted they would click on.\", \"addressing_your_bullet_points\": [\"We really want our paper to be as clear as possible, and would love to know more specifically which sentences you found awkward or difficult to parse.\", \"The distribution of \\u201cwhen the owner will wake up today\\u201d is shifted; the robot wakes them up in order to ensure that they will want coffee. We can make this more explicit. To be clear, our point is not that the robot will have difficulty learning in the presence of such a distributional shift; our point is that the robot having an incentive to produce such a shift is an alignment problem.\", \"We agree this is overstated, and will soften the claim. We\\u2019ve demonstrated this for PBT and REINFORCE (when considered as a meta-learning algorithm), only. We believe it will hold true for a wide variety of meta-learning algorithms, but probably not all of them.\", \"In fact, it is not the tuning of the learning rate that results in non-myopic behavior. Rather, the EXPLOIT step of PBT is the main mechanism by which non-myopia is incentivized. Appendix 3.1.2 walks through the mechanism.\", \"Indeed, the choice of epsilon is important. We use epsilon=0.1, and for much larger values of epsilon, non-myopic strategies are unstable and do not persist. We will include more discussion and exploration of this choice.\", \"While there's certainly an element of chance (as our experiments demonstrate) as to whether the learner learns a stable non-myopic policy, we don\\u2019t think that invalidates the result; we think it is significant and surprising that Q-learning can yield a sub-optimal policy in 10/30 experiments. Can you please explain why this is a concern for you? Or elaborate on what you mean by \\u201cbased on chance\\u201d?\", \"[1] Gordon Pennycook, Tyrone D Cannon, and David G. Rand. Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology (forthcoming), 2019.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper discusses concepts of self-induced distributional shift (SIDS) and the hidden incentives when using meta-learning algorithms. It then prescribes a unit-test to check whether there is hidden incentive for distributional shift (HIDS) in the algorithm and proposes to use context swapping to mitigate such phenomenon.\\n\\nI am not very familiar with this series of research but I am wondering why the paper focuses on meta-learning and its connection to HIDS. Does it use meta-learning as a tool to identify HIDS? But from the description and experiments, it seems the paper is talking about how meta-learning itself leads to HIDS, for example, by comparing different hyper-parameter setting for meta-learning and PBT, it shows the unit-test is failing. So it seems like meta-learning itself leads to distributional shift?\\n\\nAlso I cannot fully appreciate the utility of this \\\"unit-test\\\". It is well known normally an interactive system that can change its inputs have distributional shift. What other information does this \\\"unit-test\\\" inform us? Similarly, how does \\\"context swapping\\\" mitigate distributional shift? From the experiments, it dumbs the meta-learning algorithms and make it pass the \\\"unit-test\\\", but I am not sure what other practical benefits it can bring to improve real systems.\\n\\nUsually, a reinforcement learning algorithm can meaningfully mitigates the adverse effects of distributional shift by explicitly modeling this interactive process and evaluating rewards with considerations to distributional shift caused by different policies. It is difficult to see how the concepts discussed in the paper provide meaningful approaches to address the issue.\\n\\nOverall, the paper touches the important question of distributional shift for machine learning systems but I find the concepts discussed in the paper, such as the focus on meta-learning, the \\\"unit-test\\\", and \\\"context-swapping\\\", less relevant to how we can really mitigate the issues in real systems or how it can provide additional insights about the problem.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The main idea of the paper: When using meta-learning there is an inherent incentive for the learner to win by making the task easier. The authors generalise this effect to a larger class of problems where the learning framework induces a set of Hidden Incentive for Distributional Shift (HIDS) and introduce Context Swapping, a HIDS mitigation technique. In the experimental section, the authors propos a HIDS unit test which then they employ to show that PBT (Population Based-Trainng), a popular meta-learning algorithm exhibits HIDS ant that context swapping helps fixing it.\\n\\nOverall, I found the idea of the paper interesting, but the attempt to generalise the effect from meta-learning to general learning setups hard to follow and detracting from the overall value. I think the authors should have restricted their claims to PBT and PBT-like methods and follow-up with something more general in future work. \\n\\nFurthermore, the notation and formaliation of the problem are incomplete:\\n * the concept of \\u2018trajectory\\u2019 is introduced without being properly defined, though its crucial in the definition of the proposed HIDS mitigation approach\\n * the context swapping algorithm description is not clearly motivated and explained, a diagram showing the learner shuffling would be quite helpful\\n * in the HIDS unit-test section, the game theoretical setup is only partially explained, the defection and cooperation actions are not clearly linked to the HIDS \\n * In Section 4.1.1 HIDS UNIT TEST EXPERIMENTAL RESULTS AND DISCUSSION Figure2 is refered for results without ever stating the task and the Figure itself does not mention it\\n\\n In terms of suggestions, I think the paper needs to go through a careful refactoring with an attention to technical details (careful concept definition, introduction of notation, clarity on experimental setup)\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The authors study the phenomena of self-introduced distributional shift. They define the term along with the term hidden incentives for distributional shift. The latter describes factors that motivate the learner to change the distribution in order to achieve a higher performance. The authors study both phenomena in two domains (one being a prisoner dilemma and the other a recommender system) and show how meta-learning reveals the hidden incentives for distributional shift. They then propose an approach based on swapping learners between environments to reduce self introduced distributional shift.\", \"In my opinion this paper should be (weak) rejected for several reasons.\", \"The paper is poorly written. Some sentences are hard to comprehend, even after repeated reading. It feels hastily written and should be carefully proofread with the help of a proficient English speaker.\", \"The very first paragraph is not a helpful example. While the authors provide an example that that illustrates a distributional shift, they don\\u2019t describe what the shift is and why that shift in a negative consequence for learning. Therefore, the example is not helpful, but rather confusing.\", \"In point 3 of the contributions, the authors state \\u2018that meta-learning reveals HIDS in these environments\\u2019. Is this true for all meta-learning approaches? The current statement overclaims their findings. If the authors decide to keep it, they should provide a description of the types of meta-learning approaches that reveal HIDS and which don\\u2019t or evaluate all different meta-learning approaches.\", \"Section 4.1 is hard to follow. The authors state that the meta-learner is used to tune the learning rate \\u2013 they fail to clearly explain how exactly the learning rate tuning results in a non-myopic behavior for a myopic learner.\", \"In the Q-learning example in Section 4.1, is the effect of \\\\epsilon considered? The discovery of non-myopic strategies might simply be based on chance. I would like the authors to include an investigation into the effect of this parameter.\", \"Generally speaking, I appreciate the author\\u2019s thoughts on SIDS and how they approach revealing hidden incentives. Although I vote to reject this paper, I strongly encourage the authors to rewrite the paper, address all other issues that are noted during this peer review and resubmit.\"]}"
]
} |
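The context-swapping mechanism in the record above is only described verbally (learners are shuffled among environment copies so that no learner is consistently exposed to, and hence cannot reliably profit from, the shift it induced earlier). A minimal sketch of that training loop follows; the learner/environment interface (act, update, step, state) is hypothetical and chosen only to make the shuffling step concrete.

```python
import random

def train_with_context_swapping(learners, envs, steps, rng=None):
    """Sketch of context swapping as described in the rebuttals: keep one
    environment copy per learner, and at every step randomly re-assign
    which learner acts in which environment. This targets the *incentive*
    to induce distributional shift; it does not prevent shift itself."""
    rng = rng or random.Random(0)
    n = len(learners)
    assert len(envs) == n, "one environment copy per learner"
    for _ in range(steps):
        assignment = rng.sample(range(n), n)  # random permutation of learners
        for env_idx, learner_idx in enumerate(assignment):
            env, learner = envs[env_idx], learners[learner_idx]
            state = env.state
            action = learner.act(state)
            reward, next_state = env.step(action)
            learner.update(state, action, reward, next_state)
```

As the authors note in their replies, this removes the hidden incentive in their unit test but is not a complete solution: a learner swapped into an already-shifted environment may be unable to track that shift, which is what they report for the content recommendation experiments.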
BygY4grYDr | The divergences minimized by non-saturating GAN training | [
"Matt Shannon"
] | Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has led to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs. For both classic GANs and f-GANs, there is an original variant of training and a "non-saturating" variant which uses an alternative form of generator gradient. The original variant is theoretically easier to study, but for GANs the alternative variant performs better in practice. The non-saturating scheme is often regarded as a simple modification to deal with optimization issues, but we show that in fact the non-saturating scheme for GANs is effectively optimizing a reverse KL-like f-divergence. We also develop a number of theoretical tools to help compare and classify f-divergences. We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training. | [
"GAN"
] | Reject | https://openreview.net/pdf?id=BygY4grYDr | https://openreview.net/forum?id=BygY4grYDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fu6heF04gq",
"SkeyqBiynS",
"Bkx1JiIFjB",
"r1lFVP8tiB",
"Syg7S7Utjr",
"BJezD3zRtH",
"SJgXA_V2FB"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798744487,
1574053254636,
1573640918922,
1573639984571,
1573638971011,
1571855449943,
1571731658847
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2255/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2255/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2255/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2255/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2255/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2255/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"As the reviewers point out, the core contribution might be potentially important but the current execution of the paper makes it difficult to gauge this importance. In the light of this, this paper does not seem ready for appearance in a conference like ICLR.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors study the \\u2018non-saturating\\u2019 variant of training for GANs and show that it is equivalent to a regular training procedure minimizing a \\u201csoftened\\u201d reverse KL divergence as opposed to Jensen-Shannon divergence. They show a connection between the two training procedures for more general f-divergence losses. For instance, they show that\\n1. non-saturating training on original GAN loss minimizes KL(p/2+q/2 || p) \\n2. non-saturating training on KL(p||q) loss minimizes KL(q||p)\\n\\nThe authors start by arguing about previous analyses of the non-saturating training scheme and present why they do not arrive at the complete picture. Then they go on to introduce f-divergences and f-GAN training before explaining in their notation what precisely non-saturating training means. Then they show that it corresponds to minimization of not the original f-divergence but a hybrid (f,g) divergence. \\n\\nOverall the paper presents most of its insights as speculative statements and does not do a good enough job of attempting to concretely formalize them.\", \"questions\": \"1. Why is argument of Nowozin et al wrong when q is parametric?\\n2. What are tail weights of an f-divergence?\\n3. Unclear why one needs the variational lower bound in f-GAN training.\\n4. Why does hybrid training (f,g) converge to the minimization of f? Can the discussion in Appendix F be formalized into a Theorem or a Lemma?\\n5. One of the main takeaways of the paper appears to be that initial phases of JS divergence minimization leads to flat gradients which can be problematic - question: is this an artifact of JS being bounded? Could unbounded divergences avoid running into this issue?\\n\\nThe content organization and highlighting of the main result in the paper can be significantly improved. Since the paper is exposing a theoretical connection as its primary result, I would also recommend a higher level of formalism overall.\\n1. Figure 2 is references much earlier than it appears\\n2. Tail weights are referenced before they are defined\\n3. Formal statement of non-saturating gradient based training captured within a subsection or box. Hard to locate in current draft. Need to read 5 sections to get to it.\\n4. Appendix C should be in main body.\\n5. Some definitions need to be defined more formally. For instance,\\n a. Hybrid optimization\\n b. Non-saturating training\\n\\nIt is unclear how significant the contribution of the paper is. It is a clean mathematical observation but the consequences of the connection are not explored and fleshed out. For instance, can by realizing that the non-saturating gradient training is optimizing a different f-divergence can we explain why non-saturating gradient training is more useful? Any insights backed by some simple example settings of synthesized probability distributions would also be useful. The paper also does not propose any new training methods based on the insights uncovered and it is not clear how significant the connection uncovered is with the information presented in the current draft. 
I believe it is an interesting direction that the paper probes and with a more deeper look into the phenomenon and related directions can be ready for publishing in a venue such as ICLR. In it\\u2019s current form I unfortunately don\\u2019t think the contributions of the paper are significant enough for acceptance.\"}",
"{\"title\": \"Updates in light of reviewer #2's comments\", \"comment\": \"> The expressions in Section 3 can be a useful tool to investigate the statistical properties of estimators with f-divergences. However, I think that the usefulness of alternative expressions in Section 3 is not very clear, though an intuitive interpretation is presented.\\n\\nWe completely agree that the present paper does not immediately suggest ways to improve current GAN training practice. As mentioned when responding to reviewer #3, we hope that progress over time comes from both better theoretical understanding and better experimental results. In this case, we would argue that our result is simple to state and understand (\\\"non-saturating GAN training doesn't actually optimize Jensen-Shannon\\\") and provides a better understanding of a state-of-the-art method for GAN training (used for example for StyleGAN).\\n\\n> Moreover, the numerical experiment is not sufficient to support some new expressions. Detailed theoretical analysis of the non-saturating training based on the proposed expression would be required for publication from ICLR. \\n\\nThe paper outlines a reinterpretation of what current practice is already doing. Implementing optimization of the softened reverse KL using our approach is (intentionally) identical to the standard implementation of current non-saturating GAN training. To try to understand your concern better, is it that the paper does not provide a practically improved way to train GANs, or that you don't feel the paper empirically justifies its theoretical reformulation of current GAN training sufficiently?\\n\\n> In addition, the authors could shorten the paper within 8 pages that is the standard length of ICLR paper.\\n\\nApologies, it's now of the standard length.\\n\\n> In the paper \\\"On topological properties of f-divergences\\\" (1967), Csiszar intensively studied non-saturating properties of f-divergences\\nin the paper. It would be helpful for readers to add some comments on the relation between the theoretical development in this paper and Csiszar's paper.\\n\\nWe were unfortunately unable to locate a copy of this paper. We tried searching the back issues of Studia Scientiarum Mathematicarum Hungarica and the usual aggregators. If you have any suggestions we would be very grateful.\"}",
"{\"title\": \"Updates in light of reviewer #3's comments (continued)\", \"comment\": \"> I totally agree with the author(s) on the point that this is a note (sec 7, second paragraph), rather than an academic paper. There is a lot of derivations in the paper, but insightful discussions are limited in the sense that it is not clear how to connect these results to improve practice. The section titles also read like bullet points in a note.\\n\\nThe references to a note are now fixed. We completely agree that the paper does not immediately suggest ways to improve current best practice. However we would argue that progress over time comes from both better theoretical understanding and better experimental results. In this case, the result is simple to state and understand (\\\"non-saturating GAN training doesn't actually optimize Jensen-Shannon\\\") and provides a better understanding of a state-of-the-art method for GAN training (used for example for StyleGAN).\\n\\n> Not certain about the significance of this paper. The derivations are fairly standard and I can not find any useful proposals that might benefit the practice of f-GAN training. There is a number of papers discussing non-saturating GAN training (see sec 8), and I do not think this paper adds too much value to this discussion. \\n\\nWe acknowledge that our derivation of f-GANs takes up quite a bit of space in the paper while being similar in substance to that of the original f-GAN paper. The main differences are a more elementary derivation of the variational lower bound, without using Legendre transforms / Fenchel conjugates (but it's completely trivially equivalent), and our use of a standardized form for the output of the critic (in our case the optimal critic is always log p - log q).\\n\\nIn terms of contributions to the general discussion, the present paper addresses a number of deficiencies in the previous literature in this area (this has been made more prominent in the new draft by moving the related work section straight after the introduction), and extends the current literature by showing for the first time that non-saturating GAN training can be viewed as divergence minimization, but for a divergence that has very different properties from Jensen-Shannon. While a relatively straightforward result and pleasingly straightforward to state, we hope / believe it's one that deserves to be well-known by GAN practitioners and theoreticians.\"}",
"{\"title\": \"Updates in light of reviewer #3's comments\", \"comment\": \"> This paper discusses alternative training strategies for f-GANs. While the discussion has some interesting points, the presentation needs to be much improved. It is not easy to follow this paper in its current form, and the main results are not properly emphasized. As such, I am not certain of its real contribution. f-GANs are not routinely used in practice (except for the vanilla JSD and RKL), and as far as I can tell the saturating gradient issue is no longer a central concern (it has been well addressed years ago, with, e.g. WGANs). I am voting to reject this submission, but I am willing to re-evaluate this paper if the author(s) significantly improves their writing.\\n\\nWe view the theoretical result that conventional non-saturating GAN training can be viewed as divergence minimization as the main result. We've tried to make that main contribution more prominent in the abstract and moved the section containing the main result earlier in the paper.\\n\\nThe saturation issue is well-addressed practically by both non-saturating training and WGANs, and we would argue both are prominently used in practice. For example, the ground-breaking StyleGAN paper used non-saturating training, finding it performed better than WGANs. This makes the current paper relevant by extending our best theoretical understanding of current practice.\\n\\nWe agree that f-GANs are not routinely used in practice, and consider our results on \\\"non-saturating\\\" variants of f-GANs are secondary. We cover f-GANs in some detail mainly because our main result is framed in those terms.\\n\\n> In Section 2, what does it mean by \\\"the Fisher metric of the family\\\"? The concept of Fisher metric is defined anywhere in the text, and there is no reference to it.\\n\\nClarified the wording in the main text, removing the reference to the Fisher metric, and added detail in an appendix.\\n\\n> It is not is intuitive why the second derivative of f_R takes the form (given above Eqn (2)). Please elaborate.\\n\\nWe added an intermediate step in the derivation.\\n\\n> Please avoid the use of subjective phrases such as \\\"imagining\\\", \\\"get a feel\\\", etc. I am guessing the author(s) are trying to suggest taking a visual inspection of the discrepancies projected on the log-likelihood ratio axis (which is 1D) and figure out which f-div might be more appropriate.\\n\\nWe will address this before camera ready publication.\\n\\n> Fig. 3 needs legends. There are two solid (dotted, resp) lines in the Figure, and I am guessing one of them is for the saturating and the other for the non-saturating gradient. This needs to be specified because the line specs are identical.\\n\\nApologies, we will address this before camera ready publication.\\n\\n> After going through the entire paper, I would highly recommend the author(s) to take a course in academic writing. The main contributions are not highlighted and some of the key concepts are not even properly defined. For example, analysis of the non-saturating gradient, which is supposedly the main result of this submission, appeared in pp. 7, by which time most readers have exhausted their patience. The vanilla version of the non-saturating scheme never appeared in the main text. The notation system is also non-standard, where the notation , normally reserved for average/expectation, has been used to denote the gradient wrt . 
The writing can be very unprofessional at times, for example, \\\"the answers to these questions are yes, yes and no respectively\\\".\\n\\nAddressing specifics first, we moved the main result several sections earlier in the paper. Overlines are used for a lot of different mathematical concepts, and the use for a gradient is fairly standard when talking about adjoints and derivatives. Removed the \\\"yes, yes, no\\\" line specifically (that was indeed a slightly strange choice of register). In terms of the broader point, we're sincerely sorry it was hard to follow, and thank you for helping us try to improve the manuscript.\\n\\n> I do not know why the author(s) inserted one toy experiment in the paper, as it serves no purpose. Each model converges to its respective optimum, as expected. There is no discussion of how to choose an appropriate f-div or (f,g) hybrid in practice. \\n\\nMoved the toy experiment to the appendix. We included it for two reasons: people are often suspicious of whether theoretical results can really be applied in practice; and this subject area in particular has had several incorrect or questionably correct attempted derivations previously, and we wanted to provide some empirical evidence that we weren't also making a similar sort of error.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"######## Updated Review #########\", \"I would like to thank the author(s) for their rebuttal, which I have carefully read. I also appreciate the effort made to improve the paper. My overall evaluation of the paper stands unchanged.\", \"#############################\", \"#### My review is based on the updated paper downloaded from the anonymous link. ####\", \"This paper discusses alternative training strategies for f-GANs. While the discussion has some interesting points, the presentation needs to be much improved. It is not easy to follow this paper in its current form, and the main results are not properly emphasized. As such, I am not certain of its real contribution. f-GANs are not routinely used in practice (except for the vanilla JSD and RKL), and as far as I can tell the saturating gradient issue is no longer a central concern (it has been well addressed years ago, with, e.g. WGANs). I am voting to reject this submission, but I am willing to re-evaluate this paper if the author(s) significantly improves their writing.\", \"In Section 2, what does it mean by \\\"the Fisher metric of the family\\\"? The concept of Fisher metric is defined anywhere in the text, and there is no reference to it.\", \"It is not is intuitive why the second derivative of f_R takes the form $u^{-3} f''(u^{-1})$ (given above Eqn (2)). Please elaborate.\", \"Please avoid the use of subjective phrases such as \\\"imagining\\\", \\\"get a feel\\\", etc. I am guessing the author(s) are trying to suggest taking a visual inspection of the discrepancies projected on the log-likelihood ratio axis (which is 1D) and figure out which f-div might be more appropriate.\", \"Fig. 3 needs legends. There are two solid (dotted, resp) lines in the Figure, and I am guessing one of them is for the saturating and the other for the non-saturating gradient. This needs to be specified because the line specs are identical.\", \"After going through the entire paper, I would highly recommend the author(s) to take a course in academic writing. The main contributions are not highlighted and some of the key concepts are not even properly defined. For example, analysis of the non-saturating gradient, which is supposedly the main result of this submission, appeared in pp. 7, by which time most readers have exhausted their patience. The vanilla version of the non-saturating scheme never appeared in the main text. The notation system is also non-standard, where the notation $\\\\bar{\\\\lambda}$, normally reserved for average/expectation, has been used to denote the gradient wrt $\\\\lambda$. The writing can be very unprofessional at times, for example, \\\"the answers to these questions are yes, yes and no respectively\\\".\", \"I do not know why the author(s) inserted one toy experiment in the paper, as it serves no purpose. Each model converges to their respectively optimal, as expected. There is no discussion of how to choose an appropriate f-div or (f,g) hybrid in practice.\", \"I totally agree with the author(s) on the point that this is a note (sec 7, second paragraph), rather than an academic paper. 
There is a lot of derivations in the paper, but insightful discussions are limited in the sense that it is not clear how to connect these results to improve practice. The section titles also read like bullet points in a note.\", \"Not certain about the significance of this paper. The derivations are fairly standard and I can not find any useful proposals that might benefit the practice of f-GAN training. There is a number of papers discussing non-saturating GAN training (see sec 8), and I do not think this paper adds too much value to this discussion.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, firstly, a useful expression of the class of f-divergences is proposed. The authors investigate theoretical properties of some popular f-divergences from newly developed tools. Then, the expression is used to investigate GANs with the non-saturating training scheme.\\n\\nThe expressions in Section 3 can be a useful tool to investigate the statistical properties of estimators with f-divergences. However, I think that the usefulness of alternative expressions in Section 3 is not very clear, though an intuitive interpretation is presented. Moreover, the numerical experiment is not sufficient to support some new expressions. Detailed theoretical analysis of the non-saturating training based on the proposed expression would be required for publication from ICLR. In addition, the authors could shorten the paper within 8 pages that is the standard length of ICLR paper. In the paper \\\"On topological properties of f-divergences\\\" (1967), Csiszar intensively studied non-saturating properties of f-divergences\\nin the paper. It would be helpful for readers to add some comments on the relation between the theoretical development in this paper and Csiszar's paper.\"}"
]
} |
HJluEeHKwH | The Differentiable Cross-Entropy Method | [
"Brandon Amos",
"Denis Yarats"
] | We study the Cross-Entropy Method (CEM) for the non-convex optimization of a continuous and parameterized objective function and introduce a differentiable variant (DCEM) that enables us to differentiate the output of CEM with respect to the objective function's parameters. In the machine learning setting this brings CEM inside the end-to-end learning pipeline in cases where this has otherwise been impossible. We show applications in a synthetic energy-based structured prediction task and in non-convex continuous control. In the control setting we show on the simulated cheetah and walker tasks that we can embed their optimal action sequences with DCEM and then use policy optimization to fine-tune components of the controller as a step towards combining model-based and model-free RL. | [
"machine learning",
"differentiable optimization",
"control",
"reinforcement learning"
] | Reject | https://openreview.net/pdf?id=HJluEeHKwH | https://openreview.net/forum?id=HJluEeHKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vwetc35iBu",
"HJlhapg5sS",
"HylCoagqjH",
"rkgcuTgqjS",
"S1xI86l9oB",
"SJeQE6lqir",
"B1ea-6e5jS",
"HyxhrAGRtB",
"B1xaWlkCFS",
"rJgo2xHotB",
"rJeB1qzxdB",
"Sylm1GWeuS",
"SkxansyeOB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798744457,
1573682627886,
1573682598153,
1573682545784,
1573682510443,
1573682475249,
1573682437214,
1571855939949,
1571840004590,
1571668146986,
1569888733433,
1569882587269,
1569876916574
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2254/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2254/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2254/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2254/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2254/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2254/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2254/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2254/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2254/AnonReviewer3"
],
[
"~Zhiao_Huang1"
],
[
"ICLR.cc/2020/Conference/Paper2254/Authors"
],
[
"~Zhiao_Huang1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a differentiable version of CEM, allowing CEM to be used as an operator within end-to-end training settings. The reviewers all like the idea -- it is simple and should be of interest to the community. Unfortunately, the reviewers also are in consensus that the experiments are not sufficiently convincing. We encourage the authors to expand the empirical analysis, based on the reviewer's specific comments, and resubmit the paper to a future venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Specific Response to R3 Part 2/2\", \"comment\": \"> Also, will your relaxed top-k perform sensibly when there are many ties in the observed f(x) values?\\n\\nYes, the soft top-k operations can nicely handle ties like this and will do something reasonable. To give a quick code example here using the LML code from https://github.com/locuslab/lml, we can look at some example inputs and outputs here for the soft top-3 operation:\\n\\n```\\nimport lml\\nimport torch\\n\\nfor i in range(4):\\n x = torch.ones(5)\\n if i > 0 :\\n x[-i:] = -1.\\n y = lml.LML(3)(x)\\n print(f'x = {x.numpy()}\\\\ny = {y.numpy()}\\\\n==========')\\n\\nx = [1. 1. 1. 1. 1.]\\ny = [0.599997 0.599997 0.599997 0.599997 0.599997]\\n==========\\nx = [ 1. 1. 1. 1. -1.]\\ny = [0.6917577 0.6917577 0.6917577 0.6917577 0.23296393]\\n==========\\nx = [ 1. 1. 1. -1. -1.]\\ny = [0.78206927 0.78206927 0.78206927 0.3269012 0.3269012 ]\\n==========\\nx = [ 1. 1. -1. -1. -1.]\\ny = [0.8497297 0.8497297 0.43351656 0.43351656 0.43351656]\\n```\\n\\nOne numerical edge case of vanilla CEM is when all of the top-k values are the same, the variance becomes near-zero and no further updates are necessary. If this happens with DCEM we just return the current iterate and hope that it is either optimal for the task at hand, or that the parameters of the objective can be updated to continue doing parameter learning. In practice, we have not noticed our checks/warnings being thrown for this edge case in any of our experiments.\\n\\n> Can you speak more to why you don't think it is providing better performance as well?\\n\\nThe baselines for the task we are considering are already near-SOTA and there are a number of additional ablations that we plan on doing that we\\u2019ve put in the shared response portion. We also needed to significantly reduce the number of trajectories the controller can sample to differentiate through it.\\n\\n> Perhaps the latent space is useful, for example, for transfer learning + adaptation?\\n\\nYes, the latent control sequence space is likely to pick up on some shared task-agnostic structures although it may not be directly useful for transfer. For example, the structure of the optimal control sequence space for making the humanoid run is likely very different that the structure of the optimal control sequence space for making the humanoid jump or perform other tasks, although the two spaces may contain some similarities such as smoothness over time or correlations behind how the actuators can work together.\"}",
"{\"title\": \"Specific Response to R3 Part 1/1\", \"comment\": \"Thanks very much for reading through our paper and giving us your thoughts on it. We are in agreement that our paper will be of interest to many readers, even in the current form. We have included a shared response in a separate thread here and below are some more specific responses to your review:\\n\\n> [Experiments] and \\\"They only compare within the design space of DCEM.\\\"\\n\\nWe have a more complete response to this in our shared response and would also like to clarify that our experiments are not just within the design space of DCEM. Our baselines on the cheetah and walker tasks match the SOTA performance of other control/RL architectures in this space.\\n\\n> The empirical advantage of DCEM vs. unrolled GD is clear, but it's not\\nclear to me what the intuition behind this is. \\n\\nThere are quite a few other interesting properties that we are investigating in followup work. For example, it seems useful to consider a distribution over large parts of the space that is being optimized over that is continually refined so that DCEM can consider a much broader range of possible solutions than gradient descent does, and it isn\\u2019t as susceptible to immediately focusing on local solutions to the problem.\\n\\nAnother direction of followup work we are pursuing is that the comparison of DCEM vs GD does not have to be so binary and one could consider unrolled optimizers that use both zeroth- and first-order information and thus could capture either DCEM or GD as special cases.\\n\\n> Why would GD not want the output to be near a local minimum?\\n\\nThere may be too many degrees of freedom, especially in the energy-based setting when random restarts or other additional modifications are not used. As our Figure 1 shows, one case is when the GD iterates always start at a high-energy location and the energy surface learns to make the iterate descend to the regression target.\\n\\n> Also, why is DCEM not also sensitive to the number of steps?\\n\\nDCEM can still be sensitive to the number of steps -- in the synthetic task we\\u2019re considering, it may not be as sensitive to the number of steps and results in a local optimum around the data because we initialize the sampling distributions to cover the entire range of the output space. Thus the energy function does not have as many degrees of freedom as with vanilla GD, which one cannot start with a distribution over the entire output space that\\u2019s refined around the optimum.\\n\\n> Alternatives to top-k / \\u201cDesign by Adaptive Sampling\\u201d (https://arxiv.org/abs/1810.03714)\\n\\nThanks for the reference! We were not aware of that paper and have added a reference to it. Upon a quick read-through it is not immediately clear to us how to apply it in our setting. In their section (2) they discuss how they relax the likelihood problem in their eq (12) with eq (13) by relaxing the set S to S^(t) by using some stochastic oracle to estimate p(y \\\\geq \\\\gamma^(t) | x). It\\u2019s not immediately obvious to us how to create an oracle like this in our setting, as in their Appendix S5, they are learning neural networks for the oracles in their setting. Would we also have to learn an oracle in our setting too? 
\\n\\n> Can you comment on the suitability of other weighting schemes besides top-k?\\n\\nIn most cases where we are interested in using CEM, the top-k operation is used as the weighting scheme (the weighting function in the sketch above) and we are not very familiar with any reasonable alternatives that are used in the optimization setting we consider here.\"}",
"{\"title\": \"Specific Response to R2\", \"comment\": \"Thanks for the encouraging review! We agree this paper can lead to many interesting followups have written some more thoughts on this in our shared response above. Here are some more specific responses to your review:\\n\\n> the authors should at least discuss similarities to LML (Amos 2019) in a clear manner in related work. Not including it in the related work is somewhat surprising to me.\\n\\nWe considered adding the csoftmax/csparsemax/LML paper to the related work and are still open to having a discussion of them in there, but we see these methods as a tool for making the top-k operation differentiable here rather than an area of literature that we are building on.\\n\\n> First of all, Proposition 1 is an existing result, hence authors should give a proper citation in its definition. Second of all, Proposition 3 does not include anything about asymptotic (tau -> 0) whereas the stated one-line proof is using asymptotic arguments.\\n\\nThanks, we have added some standard references for these well-known results and have clarified that the asymptotic is a corollary to Prop 3.\\n\\n> Finally, there are other minor issues like Lemma1 not having a proof, proposition 1 has no statement about its proof etc. The manuscript would significantly benefit from a thorough proof reading for mathematical completeness and correctness.\\n\\nWe will add a proof of Lemma 1 using a standard epsilon-delta argument although we do not feel that this will significantly add to the paper as it is a trivial lemma with (as far as we can see) an uninsightful and generic proof. Do you suspect that there are any other mathematical completeness or correctness issues with our paper?\\n\\nWe very strongly feel that the proof of this trivial lemma is not important to our paper, but if you feel otherwise and if your review of the paper would increase if we added this proof, please let us know and we will add it immediately rather than waiting until later in the review period.\\n\\n> 1) The only additional algorithmic element introduced by the manuscript is the tau and it is not experimented. Is it crucial to use the temperature parameter? If yes, what is the effect of it? Manuscript needs a collection of ablation studies discussing the tau.\\n\\nWe ablated \\\\tau for the cartpole experiments in our original submission and the results are in Appendix D. We do not see \\\\tau as a crucial parameter and just added it to show that DCEM can become non-differentiable and can approach vanilla CEM as the temperature goes to zero. We use \\\\tau=1 in all of our other experiments and it is not a very important hyper-parameter to tune as the learning is likely able to adapt to any reasonable value of it.\\n\\n> 2) The main claim of the paper is \\\"...make solving the control optimization process significantly less computationally and memory expensive.\\\" This might be true but not really experimented. Authors do not report any quantitative computation time and/or memory requirement study. I believe the latent DCEM is more memory and computation efficient but quantifying this is important.\\n\\nOne main claim of our paper is that we can create a differentiable controller with CEM. This is impossible to do if DCEM is applied to the original control problem as it usually is. 
We do quantify this in the paper: for example, using CEM with 1000 samples in each iteration for 10 iterations with a horizon length of 12 requires 120,000 evaluations of the transition dynamics to obtain the next action for a *single* state. Trying to keep track of these evaluations and backprop through all of them causes OOM issues. We are able to reduce this by an order of magnitude to \\\"just\\\" 12,000 transition dynamics evaluations, which enables us to differentiate through them.\\n\\n> CEM vs implicit differentiation\\n\\nThis is definitely interesting and relevant work to ours, some of which we cited and discussed in the original version of our submission. We have updated our submission to include all of these references. The crux of the issue in the control setting is that reaching a fixed point to implicitly differentiate through can be extremely difficult, especially for the non-linear dynamical systems that we consider in this paper with approximate neural network dynamics. The differentiable MPC paper (http://papers.nips.cc/paper/8050-differentiable-mpc-for-end-to-end-planning-and-control) implicitly differentiates through a non-convex continuous control problem by reaching a fixed-point in SQP iterates and then differentiating through that locally convex approximation to the control problem. However, they only considered simple and smooth settings where reaching a fixed point almost always happened, and their method does not work if a fixed point is not reached, so we are unable to compare to them. In contrast, our method works even when a fixed point is not reached.\"}",
"{\"title\": \"Specific Response to R1\", \"comment\": \"Thank you for giving our paper a close read and for the detailed comments despite it being out of your area. We have posted shared response in a separate thread, and here are some more specific responses to your review.\\n\\n> \\\"I don't understand how Proposition 1 adds to the paper. This is a standard thing. Similarly for Proposition 3.\\\"\\n\\nWe agree that these are trivial propositions. They are helpful to the paper, as the solution to Prop 3 may not be immediately obvious to all readers (and is not shown in exactly that form in other references) and we think the connection to Prop 1 is interesting. We have added citations around these to help clarify that we are not claiming to be the original source of these well-known facts.\\n\\n> Is there a way to guarantee that the solution found by (D)CEM is a reasonable approximation to the argmin. For unrolled gradient descent, this can be done by looking at the gradient wrt x.\\n\\nThis is an interesting point, and not something that people usually check even when unrolling gradient descent. With CEM and DCEM, one could check and see if all of the iterates are the same value.\\n\\n> Similarly, how is the temperature \\\\tau chosen in practice?\\n\\nWe introduced the \\\\tau hyper-parameter just to show that DCEM can approach the vanilla CEM as \\\\tau approaches 0. In all of our main experiments we use \\\\tau=1 and do not think that this is a very important hyper-parameter empirically as the function being learned should be able to adapt to whatever is being learned, as long as it starts reasonably far away from the hard top-k operation.\\n\\n> How are the hyper-parameters for CEM chosen - the function g(.), the value of k, \\\\tau, T chosen in practice. If the criticism of GD is that it overfits to the hyper-parameters - learning rate and the number of steps, why isn't this a problem with (D)CEM.\\n\\nThere\\u2019s a lot of room for choosing hyper-parameters here and selecting hyper-parameters is the bane of a lot of research and there is a lot to discuss in this space. We will keep our response here short as our rebuttal is already quite long, but we note that in many domains, such as for control, these hyper-parameters still have to be selected for vanilla CEM and a good starting point for our differentiable variant in these domains is to use similar values.\\n\\n> Section 4: Since you're comparing against unrolled GD, please formally state what the method is.\\n\\nThanks, we have formalized this in our paper at the beginning of the section.\\n\\n> Section 4.2: How is the structure of Z decided, that is how do you fix the space for searching for the policy in the Z space?\\n\\nWe assume that Z is some low-dimensional Euclidean space/box and we learn a decoder that maps these points back up to the full control sequence space.\\n\\n> There are other methods that auto-encode the policy u_1:H to search the space. How does the proposed method compare to these methods? This is important to disentangle the effect of GD vs CEM and that of just searching in a more tractable space of policies.\\n\\nYes, we included references to Co-Reyes et al. (2018); Antonova et al. (2019) in our related work section and please let us know if there are any others you know of. 
Our work is complementary to these auto-encoding methods and can be used on top of them to help fine-tune their learned latent space if you know that their latent space is going to be used for control.\\n\\n> Section 5.1: How is the number of optimizer steps (=10) decided? Also, how is the learning rate for GD picked. Is the performance of unrolled GD worse for all values of \\\\eta, even after a grid-search over the learning rates?\\n\\nIn all of our experiments we unroll 10 steps of GD or DCEM as it is a relatively standard number to use in these settings, and we arbitrarily set the GD learning rate to something that is also relatively standard here. Our goal in this setting is not to show that DCEM can over-fit to the small synthetic regression task we are considering and give superior performance to GD -- in fact the performances of GD and DCEM are nearly identical -- and instead our goal is to show a setting where GD and DCEM perform the same but learn extremely different energy surfaces, which we show in Figure 1. Do you agree that this is a novel demonstration of this happening? If not, can you send us references with similar ideas so that we can properly contextualize our work?\\n\\n> [Baselines/ablations]\\n\\nSee our shared response above on differentiable control and SOTA here -- there are no easily applicable differentiable control baselines in the settings we consider, as none of them work. We wholeheartedly agree that more ablation/robustness studies of DCEM in this setting are important to pursue in future work as we use it as a more general policy class across a significantly wider range of environments, although we feel that, at this point, for the purpose of the demonstration shown in this paper, such ablations are not as insightful.\"}",
"{\"title\": \"Shared Response Part 2/2\", \"comment\": \"1. DCEM can be used for model-free policies to help (semi-)amortize the max-Q computation in ideas such as https://openreview.net/forum?id=BkxXe0Etwr or model-based policies as shown in our paper.\\n2. How can DCEM be used with controllers that have a Q or value estimate function at the end such as https://arxiv.org/abs/1908.06012 This context is especially interesting as the controller's horizon approaches 0, this captures vanilla Q learning as a special case.\\n3. What is the best policy optimizer to use with a differentiable controller and what are the implications? While we show PPO experiments in this paper, one could also use SAC/TD3. However these algorithms are usually used in situations where the model-free policy and Q function have similar representational power (e.g. are neural networks) and it's not as clear if using a control-based policy with a neural network Q function is ideal, or if we can also use parts of the controller to essentially estimate a multi-step Q function.\\n4. Exploration is a large part of policy learning and we may be able to use the dynamics model to help with this, potentially by using disagreement or out-of-distribution detection as in https://arxiv.org/abs/1810.12162 and https://arxiv.org/abs/1906.04161 \\n5. How should the controller be warm-started or given context? For a given system state, it's reasonable that the controller could immediately start with a guess of the region of the action sequence space that it could consider rather than starting with no information as we have done is this paper.\\n\\nWe feel that including all of the details and experiments behind these directions in this version of the DCEM paper muddle the more general contribution here and are worth presenting separately. With this in mind, do the reviewers agree with our choice of showing demonstrations across a broader range of tasks in this version of the paper with careful comparisons in the RL/control setting in followup work?\"}",
"{\"title\": \"Shared Response Part 1/2\", \"comment\": \"Thanks very much for the useful feedback and comments on our paper. We are especially happy to hear from R2 that \\\"the proposed method is definitely impactful. Considering the fact that CEM is a powerful and widely used tool, I believe the work will lead to many interesting follow-ups\\\" and from R3 that \\\"this paper will be of interest to many readers, since it works at the interface between [evolutionary search] and [unrolled gradient descent].\\\"\\n\\nWe have posted a new version of the paper that addresses some points raised by the reviewers (more specific details below) and emphasize that the goal of our paper is to present this idea with demonstrations across a variety of domains. We very much agree with the reviewers that there are many future directions to build on this work and that there is a near-unbounded space of possible future experiments, ablations, and analyses to be done, especially in the control and reinforcement learning setting. The ~10 pages of additional details and ablations we have in our appendix merely scratch the surface with what we believe to be among the most important and insightful pieces to include and are releasing all of our model, experiment, and plotting code so that anybody can quickly reproduce, extend, and analyze/ablate our approach in new ways that we haven\\u2019t considered.\\n\\n# Differentiable control and SOTA results\\nWe would firstly like to directly address the issue raised by all of the reviewers of our empirical results not being SOTA and not having comparisons to related approaches. The context of our contribution in the differentiable control literature is important to see the empirical value of our paper and is something that none of the reviewers commented on, and may have overlooked or under-appreciated. We would like to highlight and emphasize that *all* prior literature on differentiable control (e.g. https://arxiv.org/abs/1703.09260 https://arxiv.org/abs/1706.09597 https://arxiv.org/abs/1802.05803 https://arxiv.org/abs/1810.13400 https://openreview.net/forum?id=ryxC6kSYPr) have only shown empirical results in simple environments such as the pendulum and cartpole, may not work with neural network dynamics, and may not do policy learning.\\n\\nOur empirical goal in this paper in this space is to demonstrate that DCEM can use policy learning to tune parts of a non-convex controller in more complex environments. We focused on this in the non-trivial DeepMind control suite cheetah and walker environments. The reviewers stated that our work is difficult to justify and compare to related work -- we agree this part could be made clearer in our paper, but our results are extremely consistent with previously published work in this space. Our baseline (which we call full CEM) is our implementation of PlaNet (https://arxiv.org/abs/1811.04551) in the proprioceptive setting and our baseline agents in the cheetah and walker proprioceptive environments are nearly identical to the results in the PlaNet paper. Our results are also extremely consistent with the agents published in the DMC paper https://arxiv.org/abs/1801.00690 and we have also evaluated them with 100 evaluation episodes. 
To further appreciate the difficulty of doing model-based control in these environments, you can see our model predictions and how policy learning with DCEM helps bring the agents back to more reasonable policies at: https://sites.google.com/view/diff-cross-entropy-method/home\\n\\nIn light of this new information and context, do the reviewers agree that this is a non-trivial novel demonstration that has not been shown in the prior differentiable control literature before? If not, can you please indicate why not and provide a reference to relevant differentiable control literature that makes a similar demonstration?\\n\\nWhile we have been continuing to study and build upon CEM as a way of learning extremely generic controllers/policies that uniformly work across all of the standard continuous control environments, it is not the main focus of this paper, which we instead would like to be a thought-provoking new differentiable non-convex optimizer with broader applications. We are preparing many additional experiments and ablations in the control/RL space that would add at least 10-20 additional pages of details -- there are many interesting followup questions we are exploring. For example:\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes a differentiable variant of the Cross-Entropy method and shows its use for a continuous control task.\", \"It introduces 4 hyper-parameters and it is not clear how robust the method is to these.\", \"Although the idea is interesting, I think the paper needs a more rigorous experimental comparison with previous work and other methods.\"], \"detailed_review_below\": [\"The abstract should mention clearly that the proposed method allows you to differentiate through argmin operation and can be used for end to end learning. Similarly, please reframe parts of the introduction to make it more accessible to a general reader. For example, in the introduction, \\\"approximation adds significant definition and structure to an otherwise...\\\". This statement requires more context to make it useful. Similarly, \\\"smooth top-k operation\\\" is not clear.\", \"Is there a way to guarantee that the solution found by (D)CEM is a reasonable approximation to the argmin. For unrolled gradient descent, this can be done by looking at the gradient wrt x.\", \"It might be more useful to explain CEM before the related work section or just moving the related work to the end.\", \"Section 3: If the paper is about CEM, please give some motivation and details rather than just citing De Boer, 2005.\", \"There is a notation clash between \\\\pi for the sort and policy later in the paper. Similarly, \\\"t\\\" is for both for the iterations of CEM and the time-stamp in the control problem.\", \"I don't understand how Proposition 1 adds to the paper. This is a standard thing. Similarly for Proposition 3.\", \"Isn't there an easier way to make the top-k operation soft - by sampling without replacement proportional to the probabilities? Please justify this design decision. Similarly, how is the temperature \\\\tau chosen in practice?\", \"Please explain the paragraph: \\\"Equation 4 is a convex optimization layer and... GPU-amenable..\\\" Isn't this critical to the overall scalability of this method?\", \"- How are the hyper-parameters for CEM chosen - the function g(.), the value of k, \\\\tau, T chosen in practice. If the criticism of GD is that it overfits to the hyper-parameters - learning rate and the number of steps, why isn't this a problem with (D)CEM.\", \"Section 4: Since you're comparing against unrolled GD, please formally state what the method is.\", \"Section 4.2: How is the structure of Z decided, that is how do you fix the space for searching for the policy in the Z space?\", \"There are other methods that auto-encode the policy u_1:H to search the space. How does the proposed method compare to these methods? This is important to disentangle the effect of GD vs CEM and that of just searching in a more tractable space of policies.\", \"Section 5.1: How is the number of optimizer steps (=10) decided? Also, how is the learning rate for GD picked. Is the performance of unrolled GD worse for all values of \\\\eta, even after a grid-search over the learning rates?\", \"For Section 5.2, please compare to baselines mentioned in the paper. Also, there needs to be an ablation/robustness study for the DCEM method.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"After reading authors' response, I am sticking to my original decision. Authors addressed most of the issues I raised and I am happy with their response; however, I still believe the paper should not be accepted since it is not adding enough value. The problem is important and impactful. However, the algorithmic idea comes from LML (Amos 2019), and the impact on the real problems has not been demonstrated. Hence, it is adding no value algorithmically, and adding a very small value from application perspective. It is basically saying LML can be trivially applied to differentiate through CEM, and it works on some simple toy problems. To me this is mostly a sanity check. Hence, I am sticking to my weak-reject decision.\\n-------\\nThe manuscript is proposing a method to make cross-entropy method (CEM) differentiable. CEM is a widely used zeroth-order optimization method. The main idea in the paper is applying the recently proposed limited multi-label projection (LML) layer in a straight-forward manner to the CEM since the major computational tool in CEM iteration is top-k selection. The authors apply the proposed method to synthetic energy-based learning and continuous control problems. \\n\\nThe proposed method is definitely impactful. Considering the fact that CEM is a powerful and widely used tool, I believe the work will lead to many interesting follow-ups. In addition to these, the work is addressing computational scalability of model-based RL which is both under-explored and important problem. \\n\\nThe proposed model is novel from a modelling perspective since it makes CEM part of end-to-end learnable models. Whereas, it has no algorithmic novelty since it is a straightforward application of the LML layer to the CEM problem. Lack of algorithmic novelty is not an issue but the authors should at least discuss similarities to LML (Amos 2019) in a clear manner in related work. Not including it in the related work is somewhat surprising to me.\\n\\nThe exposition can clearly be improved. First of all, Proposition 1 is an existing result, hence authors should give a proper citation in its definition. Second of all, Proposition 3 does not include anything about asymptotic (tau -> 0) whereas the stated one-line proof is using asymptotic arguments. Finally, there are other minor issues like Lemma1 not having a proof, proposition 1 has no statement about its proof etc. The manuscript would significantly benefit from a thorough proof reading for mathematical completeness and correctness.\\n\\nOne major issue with the manuscript is the experimental study. 1) The only additional algorithmic element introduced by the manuscript is the tau and it is not experimented. Is it crucial to use the temperature parameter? If yes, what is the effect of it? Manuscript needs a collection of ablation studies discussing the tau. 2) The main claim of the paper is \\\"...make solving the control optimization process significantly less computationally and memory expensive.\\\" This might be true but not really experimented. Authors do not report any quantitative computation time and/or memory requirement study. 
I believe the latent DCEM is more memory and computation efficient but quantifying this is important.\\n\\nI am curious about the choice of CEM. There are other methods which can be utilized since this is basically a bi-level optimization problem. One can use implicit gradients or similar methods (like: https://arxiv.org/abs/1602.02355, https://arxiv.org/abs/1809.01465, https://arxiv.org/abs/1909.04630, http://proceedings.mlr.press/v22/domke12/domke12.pdf). Can these methods also be utilized instead of back-propagation through the optimization procedure? If yes, you should compare with them or explain why you did not. If no, you should explain why.\\n\\nIn summary, the paper is very impactful. On the other hand, the proposed empirical study is significantly lacking in many aspects. I would be happy to increase my score if authors can address these issues.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"*Summary*\\nVarious optimization methods can be wrapped to form black-box differentiable deep learning modules. This allows end-to-end learning of energy functions that can be used, for example, in continuous control. There is a whole cottage industry of designing these modules. Advancements in this field are of broad interest to the ICLR community. This paper proposes to unroll the cross entropy method, which is very different than the standard practice of unrolling gradient descent. Experiments on continuous control benchmarks demonstrate that this can be used to learn a latent space in which test-time optimization is performed. By doing this optimization in latent space, it can be performed much faster than in the raw high-dimensional space.\\n\\n*Overall Assessment*\\nThe paper is well written and the technical contribution is explained well. Both evolutionary search methods (e.g. CEM) and unrolled gradient-based optimizers are very popular in ML these days. This paper will be of interest to many readers, since it works at the interface between these.\\n\\nI do not have much background in continuous control, model-based RL, etc. Therefore, it is hard for me to assess whether the experiments compare to the right baselines, etc. It appears to me that the experiments on cheetah and walker do not compare a particularly broad set of methods. They only compare within the design space of DCEM. Furthermore, the key result in these experiments is that the DCEM policy results in more efficient inner optimization (such that running the policy is faster). The overall messaging of the paper was not about reducing the costs of executing the policy but in improving performance. Such a result is not provided in these experiments.\\n\\nI have worked extensively with unrolled optimizers and can speak to the correctness and usefulness of the paper's methodological contribution.and the experiments in sec 5.1. However, these are more for providing insight into the method, and are not large-scale experiments.\\n\\nMy evaluation is a weak reject, since the paper would be greatly improved by stronger empirical results for the large-scale continuous control benchmarks with comparison to a broader set of methods.\\n\\n*Comments*\\n\\nThe empirical advantage of DCEM vs. unrolled GD is clear, but it's not clear to me what the intuition behind this is. You write \\\"one potential advantage of DCEM is that the output is more likely to be near a local minimum of the energy surface so that, e.g., more test-time iterations can be used to refine the solution.\\\" Why would GD not want the output to be be near a local minimum. Also, why is DCEM not also sensitive to the number of steps? The variance of the CEM distribution gives a natural lengthscale similar to the step size in GD. You discuss this further at the end of sec 5.1 Are there simple experiments you could do that compare the robustness of GD (such as with random restarts or unrolled Langevin dynamics) vs. DCEM?\\n\\nIn the standard CEM, doing weighted MLE with 0-1 weights coming from top-k is useful because the set of examples for MLE is size k, which yields computational savings. 
However, if you can tolerate doing weighted MLE on all available samples, then there may be better ways to set the weights than using a softened version of top-k. See, for example, the example weights in 'Design by Adaptive Sampling' (arxiv.org/abs/1810.03714). Can you comment on the suitability of other weighting schemes besides top-k? Also, will your relaxed top-k perform sensibly when there are many ties in the observed f(x) values?\\n\\nThe principal critique of the paper is that the positive DCEM results on cheetah + walker are mostly about runtime rather than performance. Can you speak more to why you don't think it is providing better performance as well? Perhaps the latent space is useful, for example, for transfer learning + adaptation?\"}",
"{\"comment\": \"Yeah, soft top-k makes it be real \\\"CEM\\\" instead of something else. I guess top-k encourages exploration comparing with softmax. If hard version CEM is better than the soft version in general, LML should be a better choice here.\", \"title\": \"Thank you for the reply.\"}",
"{\"comment\": \"Thank you for your comment. Yes, in theory, any function that computes a differentiable weighting from the sampled function values could be used as the weights in the maximum weighted likelihood problem (5) and most reasonable choices, such as the softmax, would likely work well in practice too. The end-to-end learning would likely be able to adapt to any reasonable differentiable weighting mechanism.\\n\\nWe chose the LML projection because it captures the original cross-entropy method as a special case as the temperature approaches zero, which does not hold for a temperature-scaled softmax. The LML projection can also be done with a single line of code that, for the problems we consider in this paper (a batch of 128 problems with 100 variables and the soft top-10 entries) runs in ~6ms in comparison to the ~0.6ms the softmax takes.\\n\\nSomething else to consider for future work -- one could imagine training a model using DCEM with a soft top-k operation and 1) slowly annealing the temperature to zero during training to help squeeze the last few bits of accuracy out of the system, or 2) at evaluation time, using vanilla CEM with a hard top-k operation to get a slightly better solution to the optimization problem. In these cases, it may be more likely to help/work if a soft top-k operation like the LML projection is used rather than the softmax, for example, as it would be closer to the hard version.\", \"title\": \"On the weight calculation\"}",
"{\"comment\": \"I like this work very much. The differentiable optimization process is an important direction.\\n\\nI wonder that if how important the soft top-k modular is here. The output of the soft-topk module is the weights over samples. The only requirement is that the sampler with a higher score should get higher weight. Can we calculate such weights with more straightforward methods? For example, when k=1, we can get such weights by softmax(v_t/\\\\tau). It seems that such weights don't hurt the differentiability of the optimization process. I don't know if I am correct. I will appreciate if the authors could give some words on this.\", \"title\": \"Great work; Can we replace soft top-k with any other soft-attetion mechanism?\"}"
]
} |
S1xO4xHFvB | Atomic Compression Networks | [
"Jonas Falkner",
"Josif Grabocka",
"Lars Schmidt-Thieme"
] | Compressed forms of deep neural networks are essential in deploying large-scale
computational models on resource-constrained devices. Contrary to analogous
domains where large-scale systems are built as a hierarchical repetition of small-
scale units, the current practice in Machine Learning largely relies on models with
non-repetitive components. In the spirit of molecular composition with repeating
atoms, we advance the state-of-the-art in model compression by proposing Atomic
Compression Networks (ACNs), a novel architecture that is constructed by recursive
repetition of a small set of neurons. In other words, the same neurons with the
same weights are stochastically re-positioned in subsequent layers of the network.
Empirical evidence suggests that ACNs achieve compression rates of up to three
orders of magnitude compared to fine-tuned fully-connected neural networks (88×
to 1116× reduction) with only a fractional deterioration of classification accuracy
(0.15% to 5.33%). Moreover, our method can yield sub-linear model complexities
and permits learning deep ACNs with fewer parameters than a logistic regression
with no decline in classification accuracy. | [
"Network Compression"
] | Reject | https://openreview.net/pdf?id=S1xO4xHFvB | https://openreview.net/forum?id=S1xO4xHFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fflH5pKQq",
"H1eZ4aN2jS",
"SkxlZokior",
"Bklvp51joB",
"ryguvqJjiB",
"SJgIDBwJ9B",
"ByxlTSdttS",
"rke2IBwPtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744429,
1573829929307,
1573743352469,
1573743294808,
1573743199543,
1571939678277,
1571550648368,
1571415380376
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2253/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2253/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2253/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2253/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2253/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2253/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2253/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposed a very general idea called Atomic Compression Networks (ACNs) to construct neural networks. The idea looks simple and effective. However, the reason why it works is not well explained. The experiments are not sufficient enough to convince the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Updated Version for Rebuttal\", \"comment\": \"Many thanks to all the reviewers for providing valuable feedback and insights regarding our work.\", \"consequently_we_updated_our_paper_to_clarify_the_following_key_points_in_the_review\": \"> Reviewer 2 rightfully proposed the additional comparison to sparsification methods employing L1-regularization and hard-thresholding. We added the respective experiment results as table 6 in the appendix. Furthermore we added the paper regarding compression with the Kronecker product to our related work.\\n\\n> To empasize the importance of model compression which Reviewer 1 legitimately questioned, we added more respective references to our motivation in the introduction. \\n\\n> In response to the points correctly brought up by Reviewer 3 we more extensively elaborate on the intuition behind the general model idea and the observed effects and competitive results. Furthermore we clarify the points regarding algorithm 1.\\n\\nBesides the aforementioned points, we answered the raised questions in more detail in the direct comments to the reviews.\"}",
"{\"title\": \"Answers to points brought up in review\", \"comment\": \"Thank you very much for the feedback and the positive evaluation of our paper.\\n\\n> \\u201eOne obvious baseline missing is sparse compression...\\u201c\\n\\n1) Regarding the sparse compression baseline we want to point out, that the Bayesian Compression baseline [1] in our paper is implicitly sparsifying the network. Furthermore the authors compare their method against the sparsifying variational dropout proposed by [2] and show that they achieve better results. \\nWe performed some additional experiments employing simple L1 regularization and L1 regularization combined with iterative hard thresholding (cp. [3]) but without explicit cardinality constraint [4]. The results show that both methods in general perform a bit worse than the small FC baseline, beating our ACN in some of the cases where the small FC baseline is also stronger, especially for the two last parameter bins (with the highest number of parameters). However it doesn\\u2018t change the overall impression and results. We will add the additional results to the appendix to clarify the points made.\\n\\n\\t\\tSparseL1\\tSparseL1+HT\\t\\n\\t\\tAccuracy\\tAccuracy\\t\\nhar\\t\\t\\t\\t\\t\\n< 500\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 1000\\t&\\t0.181\\t&\\t0.193\\t\\\\\\\\\\n< 2500\\t&\\t0.181\\t&\\t0.959\\t\\\\\\\\\\n< 5000\\t&\\t0.981\\t&\\t0.967\\t\\\\\\\\\\n>= 5000&\\t0.981\\t&\\t0.975\\t\\\\\\\\\\nnomao\\t\\t\\t\\t\\t\\n< 250\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 500\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 1000\\t&\\t0.718\\t&\\t0.718\\t\\\\\\\\\\n< 2500\\t&\\t0.952\\t&\\t0.949\\t\\\\\\\\\\n>= 2500&\\t0.952\\t&\\t0.951\\t\\\\\\\\\\ninternetAds\\t\\t\\t\\t\\t\\n< 1000\\t&\\t0.916\\t&\\t0.913\\t\\\\\\\\\\n< 2500\\t&\\t0.916\\t&\\t0.966\\t\\\\\\\\\\n< 5000\\t&\\t0.969\\t&\\t0.966\\t\\\\\\\\\\n< 10000&\\t0.977\\t&\\t0.966\\t\\\\\\\\\\n>= 10000&\\t0.977\\t&\\t0.966\\t\\\\\\\\\\nisolet\\t\\t\\t\\t\\t\\n< 2500\\t&\\t0.033\\t&\\t0.488\\t\\\\\\\\\\n< 5000\\t&\\t0.033\\t&\\t0.933\\t\\\\\\\\\\n< 7500\\t&\\t0.938\\t&\\t0.933\\t\\\\\\\\\\n< 10000&\\t0.938\\t&\\t0.953\\t\\\\\\\\\\n>= 10000&\\t0.938\\t&\\t0.953\\t\\\\\\\\\\nspambase\\t\\t\\t\\t\\t\\n< 250\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 500\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 1000\\t&\\t0.733\\t&\\t0.739\\t\\\\\\\\\\n>= 1000&\\t0.919\\t&\\t0.922\\t\\\\\\\\\\nsplice\\t\\t\\t\\t\\t\\n< 500\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 1000\\t&\\t0.533\\t&\\t0.533\\t\\\\\\\\\\n< 2500\\t&\\t0.889\\t&\\t0.974\\t\\\\\\\\\\n< 5000\\t&\\t0.971\\t&\\t0.974\\t\\\\\\\\\\n>= 5000&\\t0.976\\t&\\t0.977\\t\\\\\\\\\\ntheorem\\t\\t\\t\\t\\t\\n< 250\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 500\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 1000\\t&\\t0.422\\t&\\t0.422\\t\\\\\\\\\\n>= 1000&\\t0.488\\t&\\t0.493\\t\\\\\\\\\\nbioresponse\\t\\t\\t\\t\\t\\n< 1000\\t&\\t0.548\\t&\\t0.548\\t\\\\\\\\\\n< 2500\\t&\\t0.548\\t&\\t0.777\\t\\\\\\\\\\n< 5000\\t&\\t0.767\\t&\\t0.791\\t\\\\\\\\\\n< 10000&\\t0.788\\t&\\t0.791\\t\\\\\\\\\\n>= 10000&\\t0.788\\t&\\t0.791\\t\\\\\\\\\\noptdigits\\t\\t\\t\\t\\t\\n< 250\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 500\\t&\\t0.000\\t&\\t0.000\\t\\\\\\\\\\n< 750\\t&\\t0.000\\t&\\t0.104\\t\\\\\\\\\\n< 1000\\t&\\t0.104\\t&\\t0.961\\t\\\\\\\\\\n>= 1000&\\t0.979\\t&\\t0.985\\t\\\\\\\\\\n\\n> \\u201e...this work should be compared with compression schemes that work via kronecker product, ...\\u201c\\n\\n2) The proposed paper employing the Kronecker product is quite interesting. We will add it to the related work. 
Our 3rd baseline \\u201eTensorNet\\u201c [5] (see section 4.2.2) employs the Tensor-Train (TT) format [6]. The TT format is itself a special case of a Nested Kronecker Tensor Decomposition [7].\\nFurthermore [5] is well known in the network compression literature and comes with available code which simplifies the experiments. Therefore we argue that the TensorNet baseline is a good representation of compression methods based on layer-wise matrix decomposition and low-rank approximations. Furthermore our model does not only focus on layer-wise decompositions but takes the whole (deep) network structure into account. \\n\\n\\n\\n[1] Louizos, Christos, Karen Ullrich, and Max Welling. 2017. \\u201cBayesian Compression for Deep Learning.\\u201d In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 3288\\u20133298. Curran Associates, Inc. http://papers.nips.cc/paper/6921-bayesian-compression-for-deep-learning.pdf.\\n\\n[2] Molchanov, Dmitry, Arsenii Ashukha, and Dmitry Vetrov. 2017. \\u201cVariational Dropout Sparsifies Deep Neural Networks.\\u201d ArXiv:1701.05369 [Cs, Stat], June. http://arxiv.org/abs/1701.05369.\\n\\n[3] Han, Song, Jeff Pool, John Tran, and William Dally. 2015. \\u201cLearning Both Weights and Connections for Efficient Neural Network.\\u201d In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 1135\\u20131143. Curran Associates, Inc. http://papers.nips.cc/paper/5784-learning-both-weights-and-connections-for-efficient-neural-network.pdf.\\n\\n[4] Jin, Xiaojie, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan. 2016. \\u201cTraining Skinny Deep Neural Networks with Iterative Hard Thresholding Methods.\\u201d ArXiv:1607.05423 [Cs], July. http://arxiv.org/abs/1607.05423.\\n\\n[5] Novikov, Alexander, Dmitry Podoprikhin, Anton Osokin, and Dmitry Vetrov. 2015. \\u201cTensorizing Neural Networks.\\u201d ArXiv:1509.06569 [Cs], September. http://arxiv.org/abs/1509.06569.\\n\\n[6] Oseledets, Ivan V. \\\"Tensor-train decomposition.\\\" SIAM Journal on Scientific Computing 33, no. 5 (2011): 2295-2317.\\n\\n[7] Cichocki, Andrzej, Namgil Lee, Ivan V. Oseledets, A-H. Phan, Qibin Zhao, and D. Mandic. \\\"Low-rank tensor networks for dimensionality reduction and large-scale optimization problems: Perspectives and challenges part 1.\\\" arXiv preprint arXiv:1609.00893 (2016).\"}",
"{\"title\": \"Answers to clarification questions\", \"comment\": \"Thank you for your time and the valuable feedback and insights regarding our paper.\", \"we_would_like_to_clarify_some_points_mentioned_in_your_review\": \"> \\u201eIs it just randomly constructed network also perform well?\\u201c\\n\\n1) No, naively randomly constructed networks do not perform well, as can be seen with the RER [1] baseline (section 4.2.2, figure 3 and table 5 in the appendix). The proposed method works well compared to the other baselines because the special weight sharing architecture enables the model to use the available capacity given by the number of its parameters more efficiently. As we show in the experimental section this applies to randomly constructed networks, where the shared weights are trained end-to-end what leads to an effective fine-tuned collective network. However we want to emphasize that only the distribution and connections of the neurons are random, while the number of layers and number of neurons per layer is predefined. Furthermore as we point out in the conclusion, the proposed method could be combined with smarter approaches to construct even more powerful networks, e.g. by using NAS methods (cp. [7]).\\n\\n\\n> \\u201eThe model size is small, but in what cases this small model size matters?\\u201c\\n\\n2) The small model size achieved by the presented methods matters in different theoretical and real world scenarios. An increasing number of recent publications are concerned with network compression approaches to improve scalability and minimize the required and utilized memory of originally huge models to run them on edge devices with restricted resources (IoT devices, smartphones, etc.) [2,3,4,5,6].\\n\\n\\n> \\u201eIs this a reliable way to create useful models?\\u201c\\n\\n3) The random construction of ACN is reliable and produces useful models, what is demonstrated by the reasonable variances shown in table 5 in the appendix. In the performed experiments on 9 diverse real world datasets with a different number of instances, features and classes as well as on 3 image datasets, the results show that the performance and gains of the proposed method are significant.\\n\\n\\n> \\u201eOn page 7, in Figure 3, why logistic regression only has a single point in some of the plots?\\u201c\\n\\n4) Since logistic regression has a constant number of parameters and in figure 3 we compare models for different numbers of parameters, there can only be one point for logistic regression in all plots.\\n\\n\\n\\n[1] Cire\\u015fan, Dan C., Ueli Meier, Jonathan Masci, Luca M. Gambardella, and J\\u00fcrgen Schmidhuber. \\\"High-performance neural networks for visual object classification.\\\" arXiv preprint arXiv:1102.0183 (2011).\\n\\n[2] Cheng, Yu, Duo Wang, Pan Zhou, and Tao Zhang. 2017. \\u201cA Survey of Model Compression and Acceleration for Deep Neural Networks.\\u201d ArXiv:1710.09282 [Cs], October. http://arxiv.org/abs/1710.09282.\\n\\n[3] Kim, Yong-Deok, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. 2015. \\u201cCompression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications.\\u201d ArXiv:1511.06530 [Cs], November. http://arxiv.org/abs/1511.06530.\\n\\n[4] Han, S., X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally. 2016. \\u201cEIE: Efficient Inference Engine on Compressed Deep Neural Network.\\u201d In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), 243\\u201354. 
https://doi.org/10.1109/ISCA.2016.30.\\n\\n[5] Samie, Farzad, Vasileios Tsoutsouras, Lars Bauer, Sotirios Xydis, Dimitrios Soudris, and J\\u00f6rg Henkel. 2016. \\u201cComputation Offloading and Resource Allocation for Low-Power IoT Edge Devices.\\u201d In 2016 IEEE 3rd World Forum on Internet of Things (WF-IoT), 7\\u201312. https://doi.org/10.1109/WF-IoT.2016.7845499.\\n\\n[6] Mehta, Sachin, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi. 2018. \\u201cESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation.\\u201d In Computer Vision \\u2013 ECCV 2018, edited by Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss, 11214:561\\u201380. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-01249-6_34.\\n\\n[7] Elsken, Thomas, Jan Hendrik Metzen, and Frank Hutter. 2018. \\u201cNeural Architecture Search: A Survey.\\u201d ArXiv:1808.05377 [Cs, Stat], August. http://arxiv.org/abs/1808.05377.\"}",
"{\"title\": \"Answers to clarification questions\", \"comment\": \"Thank you for your time and the thorough evaluation of our paper.\", \"in_the_following_we_want_to_clarify_the_points_brought_up_in_your_review\": \"1) There is no delta missing. Line 6 in Algorithm 1 is meant as 2 consecutive lines since we return the two sets m and delta at the end of the alogirthm. We will separate it to make it more clear.\\nHowever as you suggest it would also be possible to absorb the mask delta into m_i and only return m.\\n\\n2) No, we do not select the best sample architecture. In the experiments we run Algorithm 1 only once per seed for 3 seeds and average the performance of all 3 resulting architectures. We do not apply any selection regarding the sampled architectures. All other models are also initialized, trained and evaluated over 3 different seeds and the results are averaged as well. \\n\\n3) To motivate the benefit of recursively repeating neurons, we would like to present an example from function composition. Let us consider a simple function $f(x) = (\\\\alpha x +\\\\beta)^2, f: \\\\mathbb{R} \\\\rightarrow \\\\mathbb{R}$ which has only two parameters $\\\\alpha, \\\\beta$. By applying the composition $f(f(x)) = \\\\left( \\\\alpha^3 x^2 + 2 \\\\alpha^2 \\\\beta x + \\\\alpha \\\\beta^2 + \\\\beta \\\\right)^2$ we get a more complex function, but still having just two parameters $\\\\alpha, \\\\beta$. We can keep composing $f(\\\\dots f( \\\\dots f(x)))$ and achieve a very complex function, yet with only two parameters. Notice that the intuition of repeating neurons is equivalent to that of achieving a higher non-linear expressivity by composing functions, for instance composing a set of functions $f(x), g(x), h(x), \\\\dots$ yields very deep representations, e.g. $f(g(f(h(g(h(f(x)))))$. Please consider that each $f, g, h$ can be a neuron, therefore our atomic networks are special cases of recursive function compositions from a set of base functions (a.k.a. repeating neurons in our paper). In our assessment, we are the first to consider adding non-linear expressivity by recursively applying the same set of neurons (a.k.a. functions).\\n\\nTherefore ACN achieves much deeper architectures with the same number of parameters compared to a standard FCN, what could further improve the fitting capability. Finally we see the expected trend that the fit for both models increases respectively when increasing the number of parameters when going from left to right in both rows of figure 2.\\n\\n4) The main focus of this work is showing the advantage of ACN compared to MLP baselines on vector data. The image datasets were added for experimental diversity as special case of high dimensional vector data (the images were flattened) with an explicit structure. In the same way as MLPs, ACNs are not able to levarage the spatial information in image data compared to a specialized architecture like ConvNets. Furthermore, although ConvNets share parameters of their filters over the image, they do not share parameters between layers. Since the extension of the underlying idea of our ACNs to ConvNets is not feasible within the short time of the rebuttal periode, we plan to explore that direction in future work.\\n\\n5) In our experiments we follow the established trend of comparing the compression rate w.r.t. a large standard model. 
However, contrary to most work, we also introduce a small, tuned FCN of comparable size to the compressed networks, which is shown to be a very strong baseline [2].\nIn general, our experiments confirm the findings of [1], namely that with a very large and comprehensive hyperparameter search including the general network architecture, one can find very shallow and small FC networks which perform on par with or even better than most networks produced by compression techniques. The inherent advantage of the compression techniques is, however, that in most cases they lead more reliably, and with less computational effort, to relatively small and well-performing architectures. \nThe 528 times is a typo in the text; it should be 218 times, as reported in Table 2. Furthermore, the compression rates are achieved on different datasets, e.g. the 1115 times of ACN compared to 133 times of the small FC on the internetAds dataset.\n\n[1] Liu, Zhuang, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. 2018. \u201cRethinking the Value of Network Pruning.\u201d ArXiv:1810.05270 [Cs, Stat], October. http://arxiv.org/abs/1810.05270.\n\n[2] Chen, Wenlin, James Wilson, Stephen Tyree, Kilian Weinberger, and Yixin Chen. \"Compressing neural networks with the hashing trick.\" In International Conference on Machine Learning, pp. 2285-2294. 2015.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new way to create compact neural net, named Atomic Compression Networks (ACN). An immediate related work is LayerNet, where a deep neural net is created by replicating the same layer. Here, this paper extends replication down to the neuron level.\\n\\nI am leaning towards rejecting this paper because the experimental setup is not well justified and a few important details are missing before conclusions can be drawn. I would like to ask a few clarification questions. Depending on the authors\\u2019 answers, I might be willing to adjust my rating. \\n\\n(1) Is there missing a delta in the first half of line 6 in Algorithm 1? \\n\\n(2) Throughout the experiments, for the same hyperparameter (e.g. Table 4 in A.2) do you run Algorithm 1 more than once and select the best sample architecture? If the answer is yes, summarizing all masks as one parameter will not be reasonable. Given a yes answer, I would also like to ask if the same number of samples have been considered for FC (for the same hyperparameter). \\n\\n(3) Is there any intuition behind why FC does a much worse job of fitting curves than ACN with much less parameters? This refers to Fig. 2, if we compare FC with 41 parameters to ACN with 18 parameters. I am confused because MSE on sampled points often goes down when we increase the number of parameters for the application of curve fitting.\\n\\n(4) Convolution can be thought of as a special case of ACN. ConvNet is the default architecture for working on image datasets. Since MNIST and CIFAR are considered, why not also compare to ConvNet?\\n\\n(5) The claims that \\u201cACNs achieve compression rates of up to three orders of magnitudes compared to fine-tuned fully-connected neural networks with only a fractional deterioration of classification accuracy\\u201d is quite misleading. Given fully-connected neural networks achieve up to 528 times with also a fractional deterioration (Sec. 4.3), by presumably having a shallower architecture.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes a new method called Atomic Compression Network for constructing neural networks. The idea is straightforward. Basically, firstly create some neurons in random fashion, then reuse a subset of those neurons in each layer. The experiments shows ACN produces better accuracy than baseline models including a FC network, a Baysesina compression method, etc. for MINIST, etc. The paper also show ACN uses much less numbers of parameters and achieves similar accuracy when comparing with a large optima FC network on a set of datasets.\\n\\nOverall, I don\\u2019t support accepting this paper. First, I don\\u2019t think the proposed idea is very innovative. Please elaborate why this method seems to work well when comparing baseline models. Is it just randomly constructed network also perform well? Secondly, I\\u2019m not convinced we will use this method to build network in real world applications. The model size is small, but in what cases this small model size matters? Is this a reliable way to create useful models? \\n\\nOn page 7, in Figure 3, why logistic regression only has a single point in some of the plots?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper explores the use of replicating neurons across and within layers to compress fully connected neural networks. The idea is simple, and is evaluated on a number of datasets and compared with fully connected, single layer, and several compression schemes.\", \"strengths\": \"a lot of nice experiments with clearly advantageous results are given.\", \"weaknesses\": \"One obvious baseline missing is sparse compression, which can be achieved using either l1 regularization, or hard thresholding + fine tuning, both of which are easy to implement and appear in several works, e.g.\\n\\nScalable Neural Network Compression and Pruning Using Hard Clustering and L1 Regularization (Yang, Ruozzi, Gogate)\\nTraining skinny deep neural networks with iterative hard thresholding methods (Yin, Yuan, Feng, Yan)\\n\\n... many others just via googling ... \\n\\nAlso, I think this work should be compared with compression schemes that work via kronecker product, which seem very similar to this scheme (but where the kronecker matrix is binary to produce replication)\\n\\nCompression of Fully-Connected Layer in Neural Network by Kronecker Product (Zhou, Wu)\\n(more via google)\\n\\nOne obvious advantage of replication over kronecker product is lower complexity, but nonetheless, the methods belong in a similar family.\\n\\nOtherwise, I think the work makes sense, the idea is nice, and the results show promise!\", \"after_rebuttal\": \"I have read the rebuttal and the authors have basically addressed all my concerns. It is a bit disappointing that simple L1 regularization can give competitive results, but the fact that the authors are willing to do the experiment and incorporate the results convinces me that there's nothing being hidden here, and the reader can make a fair and informed conclusion, so I have no more complaints.\"}"
]
} |
SJgwNerKvB | Continual learning with hypernetworks | [
"Johannes von Oswald",
"Christian Henning",
"Benjamin F. Grewe",
"João Sacramento"
] | Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks. To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity. Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks only require rehearsing task-specific weight realizations, which can be maintained in memory using a simple regularizer. Besides achieving state-of-the-art performance on standard CL benchmarks, additional experiments on long task sequences reveal that task-conditioned hypernetworks display a very large capacity to retain previous memories. Notably, such long memory lifetimes are achieved in a compressive regime, when the number of trainable hypernetwork weights is comparable or smaller than target network size. We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. Finally, forward information transfer is further supported by empirical results on a challenging CL benchmark based on the CIFAR-10/100 image datasets. | [
"Continual Learning",
"Catastrophic Forgetting",
"Meta Model",
"Hypernetwork"
] | Accept (Spotlight) | https://openreview.net/pdf?id=SJgwNerKvB | https://openreview.net/forum?id=SJgwNerKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"IIg582klus",
"HyeR6lUYiB",
"r1gp9gUKsH",
"rkludeIKiS",
"rkgcEg8tjH",
"S1gdMgUtjB",
"H1ehZ8mCtH",
"rygAU3qTtB",
"rkebr3IhKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744398,
1573638342323,
1573638292992,
1573638255782,
1573638194435,
1573638159639,
1571857923777,
1571822678149,
1571740728943
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2252/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2252/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2252/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2252/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2252/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2252/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2252/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2252/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes to use hypernetwork to prevent catastrophic forgetting. Overall, the paper is well-written, well-motivated, and the idea is novel. Experimentally, the proposed approach achieves SOTA on various (well-chosen) standard CL benchmarks (notably P-MNIST for CL, Split MNIST) and also does reasonably well on Split CIFAR-10/100 benchmark. The authors are suggested to investigate alternative penalties in the rehearsal objective, and also add comparison with methods like HAT and PackNet.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Collective response to reviewers\", \"comment\": \"We are very grateful to all three reviewers for the time taken in carefully assessing our work and for the overall positive feedback, that we found very encouraging.\\n\\nWe have added new results including a study of the PermutedMNIST-100 CL2/3 benchmark and improved the clarity of the manuscript. We also provide new analyses on chunking and a comparison on PermutedMNIST-10/100 to a masking CL method known as HAT, following AnonReviewer1's suggestions. We believe that this has significantly improved the paper and we sincerely hope that AnonReviewer1 and AnonReviewer2 would consider increasing their rating from a weak accept to accept.\\n\\nWe have placed the PermutedMNIST-100 CL2/3 results on the Appendix. Following AnonReviewer3's suggestion to increase the paper length to nine pages upon acceptance, and if the reviewers find such change appropriate, we could use the additional space and move Fig. A4 to the main text.\"}",
"{\"title\": \"Response to AnonReviewer1 (Part 2/2)\", \"comment\": \"(Continuation of Part 1/2)\", \"q\": \"I guess HNET+ENT for CL1 scenario is just HNET?\\n\\nCorrect. This has been clarified in the manuscript (Table 1 caption).\"}",
"{\"title\": \"Response to AnonReviewer1 (Part 1/2)\", \"comment\": \"We thank the reviewer for his positive feedback and constructive comments. We ran additional experiments triggered by the questions that were raised in the assessment of our manuscript and modified it accordingly, as detailed below.\", \"q\": \"For CIFAR-10/100 groups of 20 classes are added in 5 steps?\\n\\nWe consider the benchmark introduced by Zenke et al (2018), where the entire CIFAR-10 dataset is first solved, followed by five sets of ten CIFAR-100 classes. We have now clarified the manuscript on this point (beginning of \\\"split CIFAR-10/100 benchmark\\\" paragraph).\\n\\n(Continued: please see part 2/2)\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for his encouraging feedback and appreciation of our efforts, that we very much enjoyed reading.\\n\\nWe are happy to follow the proposed suggestion and move the additional split CIFAR-10/100 experiments to the main text if the paper gets accepted.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for his overall positive feedback and appreciation of our work. We reply individually to each raised point below.\", \"q\": \"Are the chunking parameters shared/updated across tasks?\\n\\nCorrect. A single set of chunk embeddings is shared and updated across tasks, and treated like other parameters in $\\\\Theta_h$.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to use hypernetwork to prevent catastrophic forgetting. In deep learning, the information of the samples are converted to parameters during the training process, however, future training process could interfere with the information from the previous tasks. One of the method to prevent forgetting is to use reheasal, which retrains the network with previous data. The mechanism of this work is to store the previous samples as a trained point in the parameter space, so that a set of points in the original space is stored and thus rehearsed as one point in the parameter space, this saves both the memory and computation.\", \"i_give_a_weak_accept_of_this_paper_due_to_the_following_reasons\": \"\", \"pros\": [\"The idea of converting a set of data points to one point and rehearse at a meta level is a smart and novel idea.\", \"It shows significant improvement compared to baseline methods, especially for split CIFAR experiments.\", \"The Appendix contains a fair amount of details and additional experiments on generative models.\"], \"cons\": [\"This works assumes a task incremental setting, during training process task is received one by one, within each task we could assume i.i.d shuffling of the data. During testing, the task boundary is optional. Although this setting has been taken by many other works in this field, it is also criticised that availability of task boundary is an unrealistic setting. A more realistic setting would be to continually learn with a continuous non-stationary stream of data, which indicates there's no split of train / test phase. Thus a general continual learning method should not require task boundary, which would be problematic for this work as it depends on task conditioning.\", \"For the rehearsal objective in 2, L2 penalty is used. This could be a problem as minimizing the L2 distance in the parameter space does not necessarily minimize the task loss.\"], \"questions_i_have_that_needs_clarification\": [\"Chunking: Are the chunking parameters shared/updated across tasks?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Review of \\u201cContinual learning with hypernetworks\\u201d\\n\\nThis paper investigates the use of conditioning \\u201cHypernetworks\\u201d (networks where weights are compressed via an embedding) for continual learning. They use \\u201cchunked\\u201d version of the hypernetwork (used in Ha2017, Pawlowski2017) to learn task-specific embeddings to generate (or map) tasks to weights-space of the target network.\", \"there_is_a_list_of_noteworthy_contributions_of_this_work\": \"1) They demonstrate that their approach achieves SOTA on various (well-chosen) standard CL benchmarks (notably P-MNIST for CL, Split MNIST) and also does reasonably well on Split CIFAR-10/100 benchmark. The authors have also spent some effort to replicate previous work so that their results can be compared (and more importantly analyzed) fairly to the literature, and I want to see more of this in current ML papers. (one note is that the results for CIFAR-10/100 is in the Appendix, but I think if the paper gets accepted, let's bring it back to the main text and use 9 pages, since the results for CIFAR10/100 are substantial).\\n\\n2) In addition to demonstrating good results on standard CL benchmarks, they also conduct analysis of task-conditioned hypernetworks with experiments involving long task sequences to show that they have very large capacities to retain large memories. They provide a treatment (also with visualization) into the structure of low-dim task embeddings to show potential for transfer learning.\\n\\n3) The authors will release code to reproduce all experiments, which I think is important to push the field forward. Future work can not only reproduce this work, but also the cited works.\\n\\nThe work seems to be well written, and the motivation of using hypernetworks as a natural solution to avoid catastrophic loss is clearly described. Overall, I think this work is worthy of acceptance and should encourage more investigation into hypernetworks for CL and transfer learning going forward in the community.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Paper 1872\\nPaper proposes a method for CL. The method is based on hypernetworks. These networks are a metamodel, which produce the parameters (from a task-conditioned embedding) which will be used in the main network. Preventing forgetting in the main network is now, replaced by preventing forgetting in the hypernetwork. This is done by imposing a regularization on the hypernetwork outcome, imposing that the generated weights should be similar for previous tasks (similar to Li & Hoiem who impose this on the network outputs). In addition, the paper proposes chunking, which refers to using an additional set of chunk, embeddings which are shared for all tasks, which allow compressing the hypernetwork. Furthermore, they propose an extension that allows for image replay (this is not an easy extension and an impressive contribution on itself, but maybe confusing for the current paper).\\n\\nCONCLUSION\\nOverall, I like the idea of the paper and it is well explained. However, I found that the experiments of the paper where not well designed to verify the main contribution (hypernetworks), nor where they compared to the most relevant methods. I am borderline with this paper, and recommend borderline accept (borderline not being an option).\\n\\nQUESTIONS\\n1. I think the motivation of why hypernetworks are expected to have less forgetting (than addressing forgetting directly in a network) should be discussed early in the paper. \\n\\n2.I do not understand why the training is performed in two steps. First computing a candidate Delta THETA_H and then o ptimizing Eq 2. Why not directly optimizing Eq 2, replacing the second factor with|| f_h(e^t, THETA^*_h)-f_h(e^t, THETA_h) ||. This is how this regularization is normally applied (e.g. Li & Hoiem). If the authors insist in using Eq 2, I would like to see it compared with the proposed version. \\n\\n3. The experiments should show that hypernets better address CL then addressing this directly in the network (and preferably provide reasons for this). Comparison with the closest methods like HAT and PackNet should be included. Especially, HAT is interesting since it is also based on an embedding. \\n\\n4. Also, more experiments on CIFAR would be welcome. The MNIST variations already provide very high accuracies. For CIFAR-10/100 groups of 20 classes are added in 5 steps ? Scenario CL3 would be interesting for CIFAR as well. \\n\\n5. I would like to see more analysis and results for the chunking. (As said before the replay is also a nice addition, but it seems an add-on of the main-text, shrinking the space to analyze the main contributions of the paper in the experiments.)\\nI guess HNET+ENT for CL1 scenario does not use ENT and is just HNET?\"}"
]
} |
BJxDNxSFDH | Few-Shot Regression via Learning Sparsifying Basis Functions | [
"Yi Loo",
"Yiluan Guo",
"Ngai-Man Cheung"
] | Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples. Previous few-shot learning works have mainly focused on classification and reinforcement learning. In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks. Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions. This enables a few labeled samples to approximate the function. We design a Basis Function Learner network to encode basis functions for a task distribution, and a Weights Generator network to generate the weight vector for a novel task. We show that our model outperforms the current state of the art meta-learning methods in various regression tasks. | [
"meta-learning",
"few-shot learning",
"regression",
"learning basis functions",
"self-attention"
] | Reject | https://openreview.net/pdf?id=BJxDNxSFDH | https://openreview.net/forum?id=BJxDNxSFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_Exo7Gxf9",
"r1x-IvpijH",
"SkemNPaiiH",
"BJl_xPaojB",
"r1ggNMGw5B",
"B1eXuCxRYS",
"S1eefcvhtH",
"HklCW2U3Pr",
"HylDbJr2DS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798744368,
1573799752732,
1573799723412,
1573799663742,
1572442664104,
1571847786741,
1571744264378,
1569643526500,
1569636095419
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2251/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2251/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2251/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2251/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2251/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2251/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2251/Authors"
],
[
"~Anthony_Wittmer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All reviewers agree that this paper is not ready for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Thank you for your review and comments. We will do our best to address your comments/questions below.\\n\\nWe apologize if our method is not clearly explained enough. Yes indeed as you pointed in Eq (5). Both the weights of the Basis Function Learner, \\\\theta and Weights Generator \\\\psi are optimized jointly end-to-end. A description of the self-attention block was included in the supplementary section C of the paper.\\n\\nAs noted by you and other reviewers, we will include experiments with more realistic regression tasks in future versions/submissions of the paper.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thank you for a thorough reading and review of our paper. We will try our best to address your comments/concerns below.\\n\\nWe thank you for the suggestion of an alternative experimental setting of the few-shot image completion task with disjoint classes in training and testing. We will include this experimental setup in future versions of the paper.\\n\\nWe will strive to include experiments of more realistic regression tasks in the future versions of the paper and we thank you for giving us a list of related works that we could look into and compare against.\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thank you for a thorough reading and review of our paper. We will try our best to address your comments/concerns below.\\n\\nIt is true our method might be able to be extended to classical few-shot classification tasks. However the main idea of our paper is to learn an optimal combination basis that can be used to predict a regression function, we choose to limit our experiments and evaluations to just few-shot regression tasks.\\n\\nRegarding your comment on more diverse/realistic datasets, we will strive to include experiments with more realistic regression tasks in future versions/submissions of the paper.\\n\\nFor comparisons of ANP/NP/CNP against our method, we show the results below:\\n\\n | ANP | NP | CNP |\\n--------------------------------------------------------------------------------------------------------------------------------\\nAlt. Sinusoidal 10 shot | 1.234 +- 0.075 | 3.240 +- 0.125 | 3.045 +- 0.120 |\\nAlt Sinusoidal 5 shot | 2.613 +- 0.109 | 3.829 +- 0.125 | 3.686 +- 0.125 |\\n--------------------------------------------------------------------------------------------------------------------------------\\n1D Heat Eqn 10 shot | (6.02 +- 0.50)*10^-3 | (1.18 +- 0.06)*10^-2 | (1.04 +- 0.06)*10^-2 |\\n1D Heat Eqn 5 shot | (7.88 +- 0.47)*10^-3 | (1.41 +- 0.07)*10^-2 | (1.42 +- 0.08)*10^-2 |\\n--------------------------------------------------------------------------------------------------------------------------------\\n2D Gaussian 10 shot | (1.26 +- 0.26)*10^-3 | (1.40 +- 0.29)*10^-3 | (1.30 +- 0.28)*10^-3 |\\n2D Gaussian 20 shot | (5.67 +- 1.10)*10^-4 | (7.05 +- 1.65)*10^-4 | (7.06 +- 1.45)*10^-4 |\\n2D Gaussian 50 shot | (2.38 +- 1.22)*10^-4 | (4.51 +- 0.85)*10^-4 | (4.39 +- 0.96)*10^-4 |\\n--------------------------------------------------------------------------------------------------------------------------------\\n\\nWe note that our method outperforms the NP family of methods for the alternative sinusoidal task but is slightly worse in performance compared to ANP for the Heat Equation task and is worse than all NP methods for the Gaussian task. The gap in performance on the Gaussian task certainly indicates that there is room for improvement in our method and we will take that into account in our future submissions.\\n\\nThe Ensemble results of our method, as specified in Section 4.1 consist of 10 separately instances of our model (with randomly initialized weights) trained on the same set of regression tasks. The final prediction of the ensemble model is obtained by taking a mean of predictions of the 10 separate models. As for the ensemble results for the image completion task, we found that the ensemble version of our method does perform slightly better than ANP in the MNIST image completion task (2.12e-2 for 100-shot). Though we note It is not an equivalent comparison against the NP methods as they themselves are not ensemble methods.\\n\\t\\nYou are correct to note that the outputs of the Basis Function Learner are passed through a ReLU activation function. We choose this particular design choose to emulate the structure of traditional neural networks as the output of the Basis Function Learner can be seen as the penultimate layer of a neural network whereas the linear combination of the weights vector and the learned basis functions can be seen as the final layer of the network.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper describes an approach to few-shot regression based on\\nlearning a sparse basis and a weight estimator network. The authors\\nintroduce a sparsity inducing term in the loss to encourage sparse\\nweight generation for tasks; these sparse coefficient vectors are then\\nprojected onto the learned, task-dependent basis for\\nregression. Experimental results are given on two synthetic regression\\nproblems, with a comparison with the recent state-of-the-art.\\n\\nThis paper has some interesting ideas in it. However, it does have\", \"some_issues\": \"1. Clarity. There are several points of the proposed technique that\\n are not described clearly enough. For example, the diagram in\\n Figure 1 leads me to believe that the basis and weights generators\\n are independent (and thus not trained end-to-end). However, the\\n loss in eq. (5) seems (though it is not completely clear to me) to\\n depend on both networks (which is how I would expect things to\\n work). Also, the \\\"self attention blocks\\\" mentioned at several\\n points are never completely defines. And from the ablation study\\n is seems that the improvement form self-attention is the lion's\\n share of the overall improvement. I do not feel that it would be\\n easy to reproduce the results reported in this paper without\\n significant guesswork.\\n\\n 2. The experimental results are somewhat limited. The sinusoidal\\n regression problem is very artificial, as is the image regression\\n task. Focusing on regression is inherently limiting, but results\\n on more realistic regression problems would help establish more\\n clearly the significance of the contribution.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose using sparse adaptive basis function models for few shot regression. The basis functions and the corresponding weights are generated via respective networks whose parameters are shared across all tasks. Elastic net regularization is used to encourage task specific sparsity in the weights, the idea being that with only a small number of available training examples, learning a sparse basis is easier than learning a dense basis with many more parameters. The method is validated on both synthetic data and on image completion tasks.\\n\\nI am leaning towards rejecting the paper. 1) Although, the paper is well written and easy to follow the technical contributions of the paper are limited. Adaptive basis functions and their sparse combinations are decades old ideas. While the application of these ideas to few shot regression does appear to be novel, this combination don\\u2019t seem to provide an obvious improvement over existing alternatives. 2) The empirical evidence presented is rather limited, the proposed approach only seems to outperform competitors on the synthetic sinusoidal regression experiments. Lack of strong empirical performance along with the limited novelty\", \"detailed_comments_and_questions\": [\"The approach naturally extends to few shot classification problems once the MSE loss in Equation 5 is replaced with an appropriate cross entropy loss. Was this considered and is the approach competitive on few shot classification problems.\", \"The empirical section could be significantly improved.\", \"Diverse synthetic data: I don\\u2019t see the value in presenting two sets of synthetic sinusoid regression experiments. It would be better to replace the alternative sinusoid task with qualitatively different tasks. This would help the audience ascertain whether the favorable performance demonstrated in Table 1 generalizes beyond sinusoidal signals.\", \"Comparisons: 1. Why are comparisons to neural processes missing in the additional synthetic experiments presented in the supplement and from Table 2? This is a particularly egregious omission since on the real data (attentive) neural processes outperform the proposed method.\", \"2. The ensemble approach seems to improve on the individual model significantly in Table 2. Why was this not considered for the image completion experiments? The authors would also do well to more clearly describe how the ensembling was performed.\", \"I find it curious that the basis functions are restricted to be non-negative. The description in 3.3 suggests that the basis function network outputs are passed through a ReLU. What was the rational behind this design choice?\"], \"minor\": \"Why are both ANP and \\u201cOurs\\u201d highlighted in Table 3, when ANP clearly outperforms and does not appear to be within statistical noise of \\u201cOurs\\u201d.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a regression approach that, given a few training (support) samples of a regression task (input and desired output pairs), should be able to output the values of the target function on additional (query) inputs. The proposed method is to learn a set of basis functions (MLPs) and a weight generator that for a given support set predicts weights using which the basis functions are linearly combined to form the predicted regression function, which is later tested (using the MSE metric) w.r.t. the ground truth. The method is trained on a large collection of randomly sampled task from the target family and is tested on a separate set of random tasks. The experiments include:\\n* sinusoidal wave prediction from a few samples\\n* MNIST and CelebA inpainting from a set of known pixel values (in 28x28 and 32x32 resolution respectively)\\n* additional experiments on heat equation and 2D Gaussian distribution task in Appendix\\nThe experiments show that the proposed approach outperforms the other methods on the sinusoidal wave toy problem, and yet performs less good then than Kim et al. 2019 on MNIST and CelebA.\\n\\nI propose to reject the paper in its current form, and consider the following negative points for further improvement:\\n\\n1. The posed problem is not really few-shot learning, in (now classical) few-shot learning, such as few-shot classification on benchmarks such as miniImageNet, CUB, tieredImageNet, CIFAR-FS, FC100, etc. the meta-training is done on a disjoint set of categories and testing is done on a completely new set of categories unseen during training. The gap between disjoint visual categories is very large, and it does not come close to being tested on a different from training samples sinusoidal wave or different set of hidden pixels in inpainting on the same (seen during training) set of classes (where the basis function module could learn a set of basis functions for every class). In the proposed setting, I think a better definition would be \\\"learning a structured regression\\\" from a set of sample points to a function, and not few-shot regression.\\nIf the authors would like to keep the \\\"few-shot flavour\\\", I would suggest re-formulating the experiments, and meta-train on some set of classes (e.g. inpainting over digits 0 to 4) and meta-test on a different set of classes (e.g. inpainting over digits 5 to 9). This partially holds for faces as they are all mostly different categories (different people), but in 32x32 resolution and MSE metric, I don't think they are sufficiently different.\\n\\n2. I would expect stronger results on the more realistic MNIST and CelebA experiments (although as suggested in 1. the setting there should be different), currently it does less well then existing method.\\n\\n3. An emerging important class of few-shot regression problems is few-shot object detection, where the bounding box coordinates of objects location need to be regressed. There are several papers and benchmarks in this space, and it will help the current paper to test on this challenging family of problems. Please see the following papers for benchmarks and settings:\\n* LSTD: A Low-Shot Transfer Detector for Object Detection, Chen et al. 
2018\\n* RepMet: Representative-based metric learning for classification and one-shot object detection, Karlinsky et al. 2019\\n* Few-shot Object Detection via Feature Reweighting, Kang et al. 2019\"}",
"{\"comment\": \"Hi,\\n\\nThank you for the comment.\\n\\nThe link to the Github repo has been updated with our code for the paper.\", \"title\": \"Code uploaded\"}",
"{\"comment\": \"Hi,\\n \\nNo code is present in the repo of the github link. It is not fair to provide a placeholder link for code submissions (which impact the review process) and submit code taking considerable buffer time after submission deadline.\", \"title\": \"No code in the repo of the provided github link\"}"
]
} |
H1eLVxrKwS | Removing input features via a generative model to explain their attributions to classifier's decisions | [
"Chirag Agarwal",
"Dan Schonfeld",
"Anh Nguyen"
] | Interpretability methods often measure the contribution of an input feature to an image classifier's decisions by heuristically removing it via e.g. blurring, adding noise, or graying out, which often produce unrealistic, out-of-distribution samples. Instead, we propose to integrate a generative inpainter into three representative attribution map methods as a mechanism for removing input features. Compared to the original counterparts, our methods (1) generate more plausible counterfactual samples under the true data generating process; (2) are more robust to hyperparameter settings; and (3) localize objects more accurately. Our findings were consistent across both ImageNet and Places365 datasets and two different pairs of classifiers and inpainters. | [
"attribution maps",
"generative models",
"inpainting",
"counterfactual",
"explanations",
"interpretability",
"explainability"
] | Reject | https://openreview.net/pdf?id=H1eLVxrKwS | https://openreview.net/forum?id=H1eLVxrKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1yX16aYe6A",
"rkeFo6TjiH",
"H1xQDCssjH",
"r1e3h7zMjS",
"BJxLlbxfor",
"BJgd9leMsH",
"HklU8yTbjH",
"HylJ807-oB",
"rJeKlJ0liS",
"ByeyNBBAqr",
"SkxCsZgccr",
"S1e1xRnRFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744336,
1573801377269,
1573793371481,
1573163955919,
1573155054034,
1573154959967,
1573142350392,
1573105223081,
1573080816764,
1572914471270,
1572630950007,
1571896807229
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2249/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2249/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2249/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2249/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2249/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2249/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2249/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2249/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2249/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2249/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2249/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Perturbation-based methods often produce artefacts that make the perturbed samples less realistic. This paper proposes to corrects this through use of an inpainter. Authors claim that this results in more plausible perturbed samples and produces methods more robust to hyperparameter settings.\\nReviewers found the work intuitive and well-motivated, well-written, and the experiments comprehensive.\\nHowever they also had concerns about minimal novelty and unfair experimental comparisons, as well as inconclusive results. Authors response have not sufficiently addressed these concerns.\\nTherefore, we recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Could you elaborate on your reasons?\", \"comment\": \"Thank you so much for taking the time to respond to us!\\n\\nWe would greatly appreciate it if you could elaborate on your reasons for \\\"contributions are not enough for ICLR\\\".\\nWe really wish to improve the manuscript further in light of your comments.\", \"re\": \"novelty\\n- We think Chang et al. 2019 results are misleading, and completely opposite to what we claim in this paper. \\nIt is a pity for the community if work that does things in the correct way but being second is discouraged.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thank you very much for your detailed response and for addressing all of the questions. Unfortunately, I was not convinced that the contributions of the work are enough for the ICLR venue. The general direction of the work, however, points towards a promising solution to the existing methods' drawbacks.\"}",
"{\"title\": \"We tried to clarify the main points of our paper\", \"comment\": \"Thank you for the constructive criticisms and suggestions that helped us revised the paper to be stronger! :)\\nWe hope to hear your thoughts about our replies below.\\n\\n\\n> The technical contribution is minimal, the author combined two already existent techniques.\\n\\nWe agree and do not claim novel technical contributions. However, we claim to be the first to have studied integrating an inpainter in the correct way into multiple well-known attribution methods, on large-scale image datasets, here, ImageNet and Places365.\\n\\nWe argue that complex theoretical or heavy engineering work is not always necessary or better. Instead, the community should also support simple solutions that move the field in the right direction.\\nFor example, we would not want to overlook DropOut or ReLUs because of their simple/small technical contributions, right? :)\\n\\n\\n> the comparisons are not fair, since on two out of three techniques the authors consistently change the competitors, letting them work with zero-value occluders or random noise\\n\\n- We wish to clarify that we did NOT change any original methods here. For LIME we used their own implementation. For MP and SP we implemented ourselves following the original algorithm.\\n- The original LIME and SP algorithm remove input pixels by replacing them with 0s.\\n\\n\\n> with the proposed approach wrt the first two competitors, which looks strange. Can the authors try to apply the comparative approaches in their original versions?\\n\\nWe worry that you might have misunderstood the main results. In Sec. 4.2., we did not use SP, LIME, or MP at all. Instead, all we tested is the different filling options that have been used in the literature. The object masks are generated objectively via a third-party DeepLab segmentation network.\\n\\n\\n> The fact that we are using an inpainting tool which may work in some cases (in providing well-distributed patches) but in other may fail, corrupting consequently the overall following analysis is a price I don\\u2019t want to pay, so I prefer some synthetical but controlled artifact.\\n\\n- We agree that generative models are themselves black-boxes. However, we believe that they have reached a performance level that enable them to be useful in synthesizing absences of objects or input features, which is interesting!\\n- Importantly, we showed that at the downstream explanation task, the methods using generative inpainters performed better quantitatively\\u2014-(1) more robust to hyperparameters; (2) more accurate per localization benchmark.\\n\\n\\n> Elaborating more on the work by Adebayo et al. 2018 \\nYes, we are a big fan of their Sanity checks paper! We will try to describe better in a revision.\\nWe wish to note that Adebayo et al. 2018 studied the heatmap sensitivity to model parameter changes rather than hyperparameter changes (as in our paper)\\n\\n\\n> The title is misleading\\nThank you for the catch! You can see in the revision, we have changed from \\u201cclassifier\\u201d into \\u201cimage classifier\\u201d to address your concern. :)\\n\\n\\n> Fig. 1\\u2019s caption should include the references to SP, LIME, and MP\\nYes, we have added the references. Thanks!\"}",
"{\"title\": \"Response to R3: DeepFill-v1 is the best choice regarding both inpainting speed and image quality\", \"comment\": \"> Is it necessary to use a strong generative inpainter?\\n\\nThat\\u2019s a great question! \\na) Short answer: we indeed had tried other non-learning approaches but we found the learned inpainter DeepFill-v1 to be more preferable due to both (1) significantly faster inpainting speed; (2) arguably better quality (see quantitative comparisons in [1]).\\n\\n[1] Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., & Huang, T. S. (2018). Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5505-5514).\\n\\nb) Long answer: We had tried using PatchMatch (https://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/patchmatch.pdf), a state-of-the-art out-of-the-box non-learning inpainting approach. However, PatchMatch is actually an extremely slow, iterative method that compares arbitrary pairs of patches in the same image to find the closest match to fill in part of the missing region. \\n- It takes an average of 472.35 secs / image to inpaint using PatchMatch using multithreading on CPU. In contrast, a forward pass through DeepFill-v1 takes only 1.5 seconds per image on CPU.\\n- Image evaluation is subjective by nature, therefore, we also include a qualitative comparison between PatchMatch vs. DeepFill-v1, in case you are interested.\", \"https\": \"//drive.google.com/open?id=1pAwSJGS5llOsh2INDnuhoDprFkrZJXwP (note that PatchMatch left many blank areas for some cases because it failed to find matching patches within 100 iterations\\u2014-a hyperparameter.\"}",
"{\"title\": \"Response to R3 comments 1/2\", \"comment\": \"> It is unclear what is the computational cost of the inpainter.\\n\\nThank you for the note! We will add it in a revision.\\n\\n- It takes us 0.019 secs to inpaint one 256x256 image (01 forward pass) using 1 GTX 1080Ti. \\n- 01 MP vs. MP-G optimization run of 300-steps to generate one attribution map:\", \"mp\": \"7.273 secs / run\", \"mp_g\": \"13.530 secs / run\\n\\n- 01 LIME vs. LIME-G run (50 superpixels, 1000 samples) to generate one attribution map:\", \"lime\": \"2.222 secs / run\", \"lime_g\": \"10.779 secs / run (we obtained this empirical number yesterday for a batch of 10; however, in theory it should be ~4.2 secs / run).\\n\\nNote that for LIME or SP, one can speed up a run further by using larger batch sizes.\\n\\n\\n> Could the inpainting approach be also extended to faster explanation techniques (e.g. gradient-based, or propagation-based)?\\n\\nWe are not sure to understand the suggestion. Our approach is applicable to any method that attempts to remove an input feature. Fast attribution methods such as LRP, Input x Gradient or vanilla Gradient attempt to construct a heatmap analytically and therefore, do not perturb the input image in the first place (thus, we do not see inpainting applicable there).\\n\\n\\n> Bounding box experiments are rather indirect and the deletion/insertion metrics do not systematically show the performance improvement. \\n\\nThanks for your note! Evaluation is indeed what we worry about a lot because every attribution map evaluation metric (including Insertion/Deletion) is imperfect with its own pros and cons.\\n\\n- On (1) our sensitivity evaluation and (2) Localization task (which is widely used in the literature), SP-G and LIME-G consistently outperformed SP and LIME on both ImageNet and Places365. So the findings here are significant.\\n\\n- For Insertion / Deletion, the results are mixed. For example, both ImageNet and Places365, SP-G outperformed SP under Deletion, but not Insertion. In contrast, LIME-G outperformed LIME under Insertion, but not Deletion. \\n\\n- For honest reporting, we avoid drawing conclusions here. Note that for the Insertion / Deletion metrics has a drawback: the number of pixels to be zero-ed at once is also a hyperparameter to tune, which may change the entire conclusion!\\nWe are running our evaluation on this hyperparameter and will report in a revision.\\n\\n- Per our literature survey, the Insertion and Deletion metrics are less widely used than the Localization task. In contrast to our paper, previous work often only reports either Insertion e.g. in [1] or only Deletion e.g. in [2], but did not compare whether the conclusions generalize on both metrics.\\n\\n[1] Samek, W., Binder, A., Montavon, G., Lapuschkin, S., & M\\u00fcller, K. R. (2016). Evaluating the visualization of what a deep neural network has learned. \\n[2] Wagner, J., Kohler, J. M., Gindele, T., Hetzel, L., Wiedemer, J. T., & Behnke, S. (2019). Interpretable and fine-grained visual explanations for convolutional neural networks. CVPR.\\n\\n\\n> Perhaps the deletion metric should have been equipped with inpainting?\\n\\nWe have cited Samek et al. 2017 in a revision for clarity. 
Thanks for a nice pointer and an interesting idea!\\nWe will try to run the suggested evaluation and report it here.\\n\\n\\n> How about broadening the evaluation benchmark to non-perturbation approaches?\\n\\nWe appreciate your suggestion but we do not see the purpose of evaluating non-perturbation attribution methods in the context of our paper. Our approach here is merely to improve the perturbation-based techniques. That is, we test replacing heuristic perturbation approaches by a learned perturbation method, i.e. inpainting, that creates more plausible samples. \\n\\nPlease let us know if you think otherwise. We\\u2019d be happy to consider any suggested evaluation!\\n\\n\\n\\n> An experiment I found particularly interesting is the robustness to perturbation hyperparameters.\\n\\nThank you, we think it is an important evaluation metric moving forward as well!\"}",
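To make the reviewer-suggested variant concrete, here is a hedged sketch of a Deletion curve in which removed pixels are re-filled by an inpainter instead of being zeroed; `model` and `inpaint` are assumed callables rather than APIs of any specific library.

```python
import numpy as np
import torch

def deletion_with_inpainting(model, inpaint, image, heatmap, target, step=100):
    """Deletion metric where removed pixels are re-filled by an inpainter
    rather than zeroed. `model` maps a (1,3,H,W) tensor to logits;
    `inpaint(image, mask)` returns the image with masked pixels synthesized;
    `heatmap` is an (H,W) attribution map; `target` is the class index."""
    h, w = heatmap.shape
    order = np.argsort(heatmap.ravel())[::-1]        # most important first
    mask = torch.zeros(1, 1, h, w)
    probs = []
    for i in range(0, order.size, step):
        ys, xs = np.unravel_index(order[i:i + step], (h, w))
        mask[0, 0, torch.as_tensor(ys), torch.as_tensor(xs)] = 1.0  # remove chunk
        x = inpaint(image, mask)                     # plausible in-fill
        with torch.no_grad():
            probs.append(torch.softmax(model(x), 1)[0, target].item())
    return np.trapz(probs, dx=1.0 / len(probs))      # normalized area under curve
```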
"{\"title\": \"Response to R4 minor points\", \"comment\": \"Please see below our replies to your other comments (which helped us a lot in revising the paper!).\\n\\n> Move the MP explanation in the discussions to the experiments.\\nThanks for your suggestion! We will make this change in a revision.\\n\\n\\n> How about sampling more images from the conditional P(x_r|x_\\\\r) and averaging the prediction probability?\\nThis is honestly what we had wanted to do from the beginning of the project i.e. marginalizing over all possible scenarios where a feature is absent. However, state-of-the-art image inpainting methods are currently only able to synthesize *one single* output per input image. There were recent efforts in CVPR 2019 in the face domain e.g. [1]. However, on ImageNet or Places365, none of them were really working in practice (confirmed by our own preliminary tests and correspondence with the authors).\\n\\nThat is, we did not want to add more complex theoretical bits that do not really work yet in practice. The best and publicly available inpainter we could find is DeepFill-v1 (used in this paper).\\n\\n[1] Zheng, Chuanxia, Tat-Jen Cham, and Jianfei Cai. \\\"Pluralistic Image Completion.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\\n\\n\\n> Apart from the random seeds, how about testing LIME sensitivity to the number of superpixels?\\nWe tested the sensitivity to changing random seeds is because that sensitivity is unavoidable to everyone when running LIME due to the stochastic sampling of the random masks.\\nBut thanks to your suggestion! We have also ran statistics for that and found that LIME-G is consistently more robust than LIME (on all 3 similarity metrics: SSIM, Pearson, Spearman) when we vary the number of superpixels between { 50, 150 }.\", \"see_the_plot_here_https\": \"//drive.google.com/open?id=1W5jWz-3nshiT0tL6BlE4I6hc5xjpweNk ---LIME-G heatmaps are always more consistent than those of LIME (dark bars are longer than light bars). This figure is also now included in a revision.\"}",
"{\"title\": \"Response to R4 major points\", \"comment\": \"We really enjoyed your insightful comments a lot! Thank you and we hope to engage you in more discussions here. :-)\\nPlease find here our responses to your major points.\\n\\n> Novelty of this work\\nWe agree that both our work and Chang et al. 2019 (ICLR 2019 paper) used the existing inpainter DeepFill-v1 and therefore we did not claim technical novelty. \\n- However, in the interpretability field, we believe the integration of inpainters into attribution methods is an important direction that deserves further research. With a strong generative model of the world, one can ideally generate synthetic intervention samples (here, removing input features) to perform causal inference.\\n- We found that the quantitative and qualitative results in Chang et al. 2019 are negative and they did not suggest integrating an inpainter into MP to be a fruitful direction. Their finding is intriguing as it contradicted our intuition and therefore motivated our investigation!\\n\\n\\n> Training an inpainter that is able to remove the background while keeping the salient object\\nWe agree! Actually, we\\u2019d love to work on it in follow-up work because, at the moment, there is no such inpainter in the literature, to the best of our knowledge.\\n\\n\\n> How do generative inpainters help improve the hyperparameter robustness? \\nWe attempted to provide insights into this excellent question in Sec. 5.2 (which we have also revised to make it clearer. Thanks!). \\nThe gist is that a strong generative inpainter only allows the classifier probability i.e. P(c|x_\\\\r) to drop when an important discriminative feature is removed, yielding robust/consistent heatmaps upon hyperparameter changes. In contrast, the gray-masked out-of-samples introduced by heuristic perturbations yielded heatmaps that are more noisy and sensitive to even random seed changes (e.g. LIME).\\n\\n\\n> How is downstream explanation result improved with generative models?\\nWe are too worried about this question and therefore have attempted all common evaluation metrics, which have their own pros/cons. We found our methods outperformed the counterparts on two objective metrics:\\n\\n- On the well-known Localization task (Sec. 4.4), which have been used in many influential papers [1-6] (including Chang et al. 2019), SP-G and LIME-G consistently outperformed SP and LIME. The assumption of the Localization task is that better explanations should highlight the salient object in an image, which is imperfect but arguably reasonable for the ImageNet dataset which is object-centric.\\n- On our sensitivity evaluation (Sec. 4.3), which is critical for ensuring good attribution methods, SP-G and LIME-G also consistently outperformed SP and LIME on both ImageNet and Places365 datasets.\\n- We are aware of the recent BAM evaluation dataset by Google Brain, which appeared on arXiv (https://arxiv.org/pdf/1907.09701.pdf) recently. We are excited to test our methods on it soon!\\n\\n[1] Fong, R. C., & Vedaldi, A. (2017). Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision (pp. 3429-3437).\\n[2] Wagner, J., Kohler, J. M., Gindele, T., Hetzel, L., Wiedemer, J. T., & Behnke, S. (2019). Interpretable and fine-grained visual explanations for convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 
9097-9107).\\n[3] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2921-2929).\\n[4] Chang, C. H., Creager, E., Goldenberg, A., & Duvenaud, D. (2018). Explaining image classifiers by counterfactual generation, ICLR, 2019.\\n[5] Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618-626).\\n[6] Zhang, J., Bargal, S. A., Lin, Z., Brandt, J., Shen, X., & Sclaroff, S. (2018). Top-down neural attention by excitation backprop. International Journal of Computer Vision, 126(10), 1084-1102.\\n\\n\\n> Is an inpainter actually taking the data back to the true data distribution?\\nWe agree that this is an important note. However, it could be argued that the same doubt exists for all generative tasks in Machine Learning. That is, density estimation for high-dimensional data is challenging, and evaluating generative models is an active, open research area (famously, as seen in the GAN or NLP literature).\\n\\n\\nWe will reply to your minor points in another reply.\"}",
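For reference, one common form of the Localization evaluation cited above scores the fraction of positive attribution mass falling inside the ground-truth bounding box; exact protocols differ across the papers [1-6], so this is only an illustrative sketch.

```python
import numpy as np

def localization_energy(heatmap, bbox):
    """Fraction of total positive attribution mass falling inside the
    ground-truth box (one common variant of the localization metric;
    exact protocols differ across papers). bbox = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    hm = np.maximum(heatmap, 0)                 # keep positive evidence only
    total = hm.sum() + 1e-8
    return hm[y0:y1, x0:x1].sum() / total
```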
"{\"title\": \"[ To all reviewers ] The contributions of this work\", \"comment\": \"We would like to thank all three reviewers for their positive feedback and insightful questions! We have revised the paper in light of many of your comments.\\n\\nWe wish to clarify the contributions of our work in this common thread and will address each reviewer\\u2019s comments separately in other replies. \\n\\n\\n1. Our contribution to the interpretability community\\n\\nThe three attribution methods being studied in our work are all well-known and have ~10,000 citations in total i.e. SP by Zeiler et al., 2014 (7521 cites), LIME by Ribeiro et al., 2016 (1991 cites), and MP by Fong & Vedaldi, 2017 (246 cites). \\nThese famous papers have and will continue to inform our interpretability community i.e. ~40,000 papers on Google Scholar that contain both the \\u201cinterpretability\\u201d and \\u201cmachine learning\\u201d keywords.\\n\\nHowever, we found two worrisome issues among them and many other attribution methods (including the related Chang et al. 2019): (1) the perturbed samples are clearly unrealistic; (2) some methods are so sensitive to hyperparameter changes.\\n\\nFor example, we found that 20.5% of LIME perturbation samples are labeled by ResNet-50 into one of the three classes: jigsaw puzzle, maze, and hen-of-the-wood. That is, these grayed-out masked images look mostly like puzzles or mazes to the classifier (regardless of the actual input image). A histogram of labels of LIME samples is in Fig. S21 in the revision (also here, https://drive.google.com/file/d/1-p8AedfD-5dHjco4ZlAVHIe9jLPOccu1/view?usp=sharing).\\n\\nWe are the first to show that, for SP and LIME, ameliorating such an issue of \\u201cunrealistic intervention samples\\u201d has consistently produced more (1) accurate attribution maps (by the Localization task) and (2) more robust attribution maps.\\nWe believe our work is important to the community and the contrary negative result with MP-G interestingly suggests a need for better a inpainter in the future.\\n\\n\\n2. Our contribution regarding attribution map evaluation metrics\\n\\nWe agree and are deeply worried that the interpretability field is currently lacking a good, commonly accepted benchmark.\\n\\n--2.a) The localization task has been well-known and used in many influential and also recent interpretability papers. On this task, our SP-G and LIME-G are consistently superior to SP and LIME.\\n\\n--2.b) Our insignificant result on the Insertion / Deletion metrics does not necessarily imply G-methods are not promising. Because these two metrics are based on the assumption that input pixels are *independent* and are better-suited for evaluating fine-grained attribution maps e.g. Gradient or Integrated Gradient, but not the coarse methods such as SP, LIME or MP. \\n\\nThat is, there are many open issues: (1) Insertion / Deletion evaluation requires zero-ing out input pixels, and therefore yield adversarial examples\\u2014-the exact issue we attempt to solve in this paper; (2) how many pixels should be knocked out at one?; (3) if the pixel-independence assumption is unrealistic, should we knock out superpixels instead?\\n\\n--2.c) Due to the lack of good evaluation metrics, we proposed to also evaluate attribution methods by their sensitivity to hyperparameters. 
To the best of our knowledge, this important factor is often neglected in prior work, and we are the first to thoroughly evaluate both previous heatmap methods (SP, LIME, MP) and ours (SP-G, LIME-G, MP-G) under this sensitivity.\\n\\n\\n3. Our contribution given the previous work by Chang et al. 2019 \\n\\n- Chang et al. 2019 (ICLR 2019 paper) was the first to apply the inpainter, but to *only* one attribution method, Meaningful Perturbation (MP). However, qualitatively, their inpainted samples are largely unrealistic due to the unsuitable use of the \\u201cPreservation\\u201d objective function, defeating the key purpose of using an inpainter in the context of removing input features to evaluate their importance. \\n- Importantly, they also evaluated on the Localization task, as we do here, and their quantitative results were negative, i.e. they found that integrating an inpainter produced worse localization performance. They did not evaluate on the Insertion or Deletion metrics. Therefore, per the demonstration of Chang et al. 2019, there were NO quantitative benefits of using an inpainter.\\n- In contrast, we found in our paper that integrating the inpainter is indeed a promising direction for two *different* methods: Sliding Patch (SP) and LIME. That is, our methods LIME-G and SP-G are consistently better, on both ImageNet and Places365, in both metrics: (1) Localization and (2) Robustness to hyperparameter changes.\\n- Our finding that MP-G is inferior to MP is somewhat consistent with the finding in Chang et al. 2019. Our result is actually interesting given that we used a different objective function (i.e. Deletion) and might inform future research (our hypothesis: MP operates at the pixel level and requires an inpainter that is able to perform free-form inpainting, but DeepFill-v1 is not good at that task). \\n\\nThank you!\"}",
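The jigsaw/maze observation in point 1 can be reproduced in outline as follows: generate LIME-style gray-masked samples and histogram a pretrained ResNet-50's predictions. ImageNet normalization and resizing are omitted for brevity (and the `pretrained=True` flag is the older torchvision API), so treat this as a sketch of the idea, not the authors' exact script.

```python
import numpy as np
import torch
from collections import Counter
from torchvision import models
from skimage.segmentation import slic

def lime_sample_labels(image, n_samples=200, n_segments=50, seed=0):
    """Classify LIME-style perturbation samples (random superpixels replaced
    by gray) and count the predicted labels, as in the jigsaw/maze analysis.
    `image` is a float numpy array of shape (H, W, 3) in [0, 1]."""
    rng = np.random.RandomState(seed)
    model = models.resnet50(pretrained=True).eval()
    segments = slic(image, n_segments=n_segments)
    labels = []
    for _ in range(n_samples):
        keep = rng.rand(segments.max() + 1) > 0.5   # random on/off superpixels
        sample = image.copy()
        sample[~keep[segments]] = 0.5               # gray out dropped regions
        x = torch.from_numpy(sample).permute(2, 0, 1)[None].float()
        with torch.no_grad():
            labels.append(model(x).argmax(1).item())
    return Counter(labels)
```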
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to improve perturbation-based explanation techniques by complementing the perturbation step with an inpainting step that removes artefacts caused by it.\\n\\nThe approach is sound and intuitive.\\n\\nThe authors show the flexibility of their approach by applying it to a variety of perturbation-based attribution techniques.\\n\\nIt is unclear what is the computational cost of the inpainter. Perturbation-based explanations are generally quite slow due to having to evaluate the function many times, and therefore a further slowdown could harm practical use. I'm curious whether the inpaiting approach could be in some way also extended to faster explanation techniques (e.g. gradient-based, or propagation-based).\\n\\nEvaluation experiments are not fully conclusive. Bounding box experiments are rather indirect and the deletion/insertion metrics do not systematically show the performance improvement of using inpainting. Perhaps the deletion metric should have been equiped with inpainting as well in order to avoid deletion artefacts. (See e.g. Samek'17 MoRF / LeRF experiments where various perturbation schemes are tested for deletion).\\n\\nThe evaluation benchmark is restricted to perturbation-based approaches. It could have been useful to broaden the comparison to non-perturbation approaches.\\n\\nAn experiment I found particularly interesting is the robustness to perturbation hyperparameters. Given the difficulty of designing evaluation metrics that can support hyperparameter selection, hyperparameter insensitivity is indeed strongly desirable.\\n\\nI'm wondering whether it is really necessary to use a strong deep neural network inpainter since the goal is just to remove artefacts. Some inpainters provided as part of standard computer vision libraries work quite well, and do not need to be trained and adapted to a certain shape of missing data.\\n\\nOverall, the paper presents an interesting and sound approach to improve perturbation-based explanations. Experiments are extensive, although some of them remain so far not fully conclusive.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper is focused on perturbation-based local explanation methods; methods that only need black-box(ish) access to the model and generally seek to find a region, pixel, etc's importance score through by removing that region, pixel, etc.. The intuition is that an important region if removed, will result in a large drop in the predicted class confidence. One main issue with such methods is that the removal itself can throw the image out of the data distribution and therefore result in a drop in confidence, not because of the region's importance, but because of the network's unpredictable behavior in such unseen parts of the input space. The work is focused on giving a solution to this problem: instead of removal through blurring, graying, etc, use inpainting; i.e. replace the removed region with using given the rest of the image. The idea has already been discussed out in the literature and the novelty of the work seems to be twofold: They introduce the same method in a way that is not curated for a specific perturbation-based method and could be concatenated with \\\"ANY\\\" given (or future) perturbation-based local explanation method (which authors notate by calling it ${existing_method}-G, they study robustness to hyper-parameter choice.\\n\\nThe paper is quite well written and the experiments are comprehensive. I have two major comments/issues with the work:\\n\\n1- The contribution of this work given existing work (more specifically the famous Chang et al work seems not to be enough for a venue like ICLR. If I want to list the contributions, it would be as follows (I would appreciate if the authors could correct me as the score is subject to change given more clarification on the matter):\\n - This work utilizes an inpainting step in combination with several methods while previous work is focused on meaningful perturbations method. This, although useful, does not introduce a novel technical contribution. The main technical contribution has been the use of inpainting (to be more exact, using generative models to approximate P(c|x_r)) which has been done before on a few previous works.\\n - The work argues that the use of inpainting in Chang et al (focused on keeping the salient object and removing background) was invalid as the inpainter model is not trained to do such a task. It could be argued that one could train another inpainting model that \\\"is\\\" capable of such a thing and therefore the general argument would not hold. One drawback of this approach, however, would be that training such an inpainting model might be difficult.\\n - Hyperparameter robustness. Studying this question is valuable. However, given that the assumption of this work and previous works is that generative approaches are generally better (even not considering the hyperparameter robustness), I am not sure how this knowledge could be used.\\n\\n2- Both this work and the previous works run on the assumption mentioned at the beginning of this review which basically says that non-generative perturbation-based methods throw the image out of data distribution and this is bad for such and such reasons. 
Although intuitively clear, I could not find any evidence in this work suggesting any meaningful difference using objective measures. One would assume that such a phenomenon would manifest itself clearly in the insertion-deletion explanation metrics, while, as the authors report, there was no significant difference. (Section 4.1 results clearly show a difference, but this is not related to how the downstream explanation task is affected.) For all we know, generative methods have the drawback of being computationally more expensive than simple blurring or replacing with random noise. (And a major elephant in the room is whether using an inpainter is actually taking the data back to the true data distribution, which seems to rest on the unproven assumption that these generative models are capable of learning the data manifold.)\\n\\nMinor comments:\\n- Section 4.2 is really interesting. Thanks!\\n- Fig 3 results: MP is more robust than MP-G, and I couldn't find any explanation in the experiments section of why this method behaves differently from the other two. It might be better to move the explanation in the discussions to the experiments.\\n- The task of most generative perturbation-based methods is to find a way to approximate P(c|x_\\\\r), which is the conditional probability given the non-removed part of the image. Usually, they do the approximation by sampling several images from the conditional P(x_r|x_\\\\r) (conditional inpainting) using the generative model and averaging the prediction probability. This work seems not to be concerned with these specifics and directly feeds one such sample. Could you explain this choice?\\n- For studying the robustness of LIME, apart from the random seed, couldn't one change the hyperparameters of the superpixel method? That one seems more of a practical problem.\"}",
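The reviewer's minor point about averaging corresponds to the Monte-Carlo estimate sketched below; `sampler` stands for a hypothetical stochastic inpainter (the authors note in their reply that DeepFill-v1 produces a single deterministic completion, so in the paper n is effectively 1).

```python
import torch

def marginal_class_prob(model, sampler, image, mask, target, n=8):
    """Monte-Carlo estimate of P(c | x_\\r) by averaging the classifier over
    several in-fills drawn from P(x_r | x_\\r). `sampler(image, mask)` is a
    hypothetical stochastic inpainter returning one plausible completion."""
    probs = []
    for _ in range(n):
        x = sampler(image, mask)                 # one sampled completion
        with torch.no_grad():
            probs.append(torch.softmax(model(x), 1)[0, target])
    return torch.stack(probs).mean().item()
```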
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a deep visualization technique for black-box image classifiers that feeds modified versions of the original input by means of an off-the-shelf (black box too) image inpainting approach (DeepFill-v1), in order to capture changes in the classification performance. In particular, the substitution of the input image follows three published paradigms: Sliding Patch (SP), Local Interpretable Model-Agnostic Explanations (LIME), Meaningful Perturbation (MP). Whereas the states of the art use gray images (SP, LIME)/blurred versions (MP) as substitution on different spatial supports (regular patch SP, random-shaped superpixel regions LIME, learned continuous region MP), the proposed approach inserts there the output of the inpainting.\\nThere are problems in the paper, major and minor.\", \"major\": \"1)\\tThe technical contribute is minimal, the author combine two already existent techniques.\\n2)\\tThe results are not convincing: a) the comparison are not fair, since on two out of three techniques the authors consistently change the competitors, letting them work with zero-value occluders or random noise. Only the third competitor has been employed in is original form (blurring the images). b) results are better (higher classification drop, plus other metrics) with the proposed approach wrt the first two competitors, which looks strange. Can the authors try to apply the comparative approaches in their original versions?\\n3)\\tThe fact that we are using an inpainting tool which may work in some cases (in providing well-distributed patches) but in other may fail, corrupting consequently the overall following analysis is a price I don\\u2019t want to pay, so I prefer some synthetical but controlled artifact. Actually, in the case of inpainting failure will generate structured noise, hard to be managed. \\nMinor/improvements\\n--In the introduction the authors should spend a little more two or three words in explaining on which basis Adebayo et al. 2018 questions the correctness of the heatmap, since this is something on which the authors are building their hypothesis.\\n--The title is misleading, the authors are talking about generic feature removal but in reality we are considering the image domain only.(check)\\n--Figure 1\\u2019s caption should report the references for the three SP LIME and MP\"}"
]
} |
rJg8NertPr | Top-down training for neural networks | [
"Shucong Zhang",
"Cong-Thanh Do",
"Rama Doddipatla",
"Erfan Loweimi",
"Peter Bell",
"Steve Renals"
] | Vanishing gradients pose a challenge when training deep neural networks, resulting in the top layers (closer to the output) in the network learning faster when compared with lower layers closer to the input. Interpreting the top layers as a classifier and the lower layers a feature extractor, one can hypothesize that unwanted network convergence may occur when the classifier has overfit with respect to the feature extractor. This can lead to the feature extractor being under-trained, possibly failing to learn much about the patterns in the input data. To address this we propose a good classifier hypothesis: given a fixed classifier that partitions the space well, the feature extractor can be further trained to fit that classifier and learn the data patterns well. This alleviates the problem of under-training the feature extractor and enables the network to learn patterns in the data with small partial derivatives. We verify this hypothesis empirically and propose a novel top-down training method. We train all layers jointly, obtaining a good classifier from the top layers, which are then frozen. Following re-initialization, we retrain the bottom layers with respect to the frozen classifier. Applying this approach to a set of speech recognition experiments using the Wall Street Journal and noisy CHiME-4 datasets we observe substantial accuracy gains. When combined with dropout, our method enables connectionist temporal classification (CTC) models to outperform joint CTC-attention models, which have more capacity and flexibility. | [
"Neural network training",
"speech recognition"
] | Reject | https://openreview.net/pdf?id=rJg8NertPr | https://openreview.net/forum?id=rJg8NertPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_Nzgy7iAC7",
"ByeuDjgq2H",
"ByguumWsiH",
"rkxTnMZjor",
"ByeJnb-ojr",
"rylPEZ-jjH",
"BkxCrGtNqr",
"SJxOk6SN9S",
"ryguYP6aKH",
"Syxr1WFpFB",
"BJeeoj3XFB",
"Hkgv1no7FS"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798744295,
1574730591875,
1573749615988,
1573749429499,
1573749158725,
1573749039144,
1572274758327,
1572261087831,
1571833728252,
1571815644596,
1571175319643,
1571171295102
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2248/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2248/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2248/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2248/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2248/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2248/Authors"
],
[
"~TING_TING_SUN1"
],
[
"ICLR.cc/2020/Conference/Paper2248/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2248/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2248/Authors"
],
[
"~Huaxin_Song1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": [\"The paper proposes a top-down approach to train deep neural networks -- freezing top layers after supervised pre-training, then re-initializing and retraining the bottom layers. As mentioned by all the reviewers, the novelty is on the low side. The paper is purely experimental (no theory), and the experimental section is currently too weak. In particular:\", \"Experiments on different domains should be performed.\", \"Different models should be evaluated.\", \"Ablation experiments should be performed to understand better under which conditions the proposed approach works.\", \"For speech recognition, WER should be reported - even if it is without a LM - such that one can compare with existing work.\"], \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies the common experimental finding that low level features trained end-to-end in a deep model converge (get \\\"locked in place\\\") earlier than higher level features, which may result in problematic undertraining. The focus of the study is not on skip connections, but really on getting adequate training in deeper networks. They posit a \\\"good classifier hypothesis\\\" where, once a deep network converged, they fix the top layers (the \\\"good classifier\\\") and train only the lower ones. They propose a \\\"top-down training strategy\\\" to search where to make the cut for the \\\"top layers\\\" of the \\\"good classifier\\\", based on the validation set.\\n\\n\\n (+) The experimental results seem encouraging and supporting the author's claim (consistently improve over baseline on WSJ and CHiME-4).\\n (-) No WER (not even without a language model) results on WSJ make it harder to (i) compare to other work (is it just that in this case the authors didn't optimize properly in the first place?), (ii) compare the relative gains between with and without the method in WER.\\n (-) For an experimental (no theorem) optimization paper, there should be experiments on at least another domain. And in particular one would have expected more analysis of the experimental optimization results.\\n (-) (minor) There is no discussion of the link with target propagation or other synthetic gradients.\\n\\nOverall, I think this could be an interesting paper, but more work is needed to prove the effectiveness of the method, and to analyze experimentally in more details some of the claims from this paper.\"}",
"{\"title\": \"Comments about the novelty of the paper\", \"comment\": \"Comments about the novelty of the paper\\n\\nThank you for your comments! We believe our work has many new findings and contributions.\\nFirstly, to our knowledge, this is the first time that layer-wise training is explored in a top-down manner and it is the first time that layer-wise training (not layer-wise pre-training; there is no joint fine-tuning) outperforms joint training significantly.\\n\\nThere are quite a few recent papers working on the layer-wise training [1,2,3]. However, all of them investigate this approach in a bottom-up method and do not get considerable improvements over the joint training. In fact, the authors of [2] state that their work is the first time that such training approach has comparable results to the joint training on a large scale dataset. However, we investigate it in a top-down manner and the top-down training significantly surpasses the joint training.\\n\\nSecondly, we have an insightful analysis on the reason of the effectiveness of the top-down training, from the perspective of the vanishing gradient and training classifier-feature extractor. To our knowledge, we believe it is the first time that such an analysis is made. Also, our analysis gives the reasons for why layer-wise training in a bottom-up way is in general hard.\\n\\nThirdly, many popular optimizers, such as Adadelta and Adam, dynamically adjust the learning rate. However, we find that the way of adjusting learning rate provided by these optimizers is not optimal. They have inferior results compared with freezing the upper layers. Thus, our findings lead to a new direction of designing optimization algorithms.\\n\\nIn summary, the novelty of this paper is not limited to choosing which layer to freeze. It is the first time that layer-wise training outperforms the joint training significantly. We also have an insightful analysis on the proposed training method, which explains why the top-down manner is beneficial and the bottom-up method is hard. From the proposed method, a new optimization algorithm may be developed.\\n\\n[1] Chris Hettinger, Tanner Christensen, Ben Ehlert, Jeffrey Humpherys, Tyler Jarvis, and Sean Wade. Forward thinking: Building and training neural networks one layer at a time. arXiv preprint arXiv:1706.02480, 2017.\\n[2] Eugene Belilovsky, Michael Eickenberg, and Edouard Oyallon. Shallow learning for deep networks. openreview, 2018. URL https://openreview.net/forum?id=r1Gsk3R9Fm.\\n[3] Kaikai Zhao, Tetsu Matsukawa, and Einoshin Suzuki. Retraining: a simple way to improve the ensemble accuracy of deep neural networks for image classification. In 2018 24th international conference on pattern recognition (ICPR), pp. 860\\u2013867. IEEE, 2018.\"}",
"{\"title\": \"Detailed answers for question 1(b).\", \"comment\": \"The description of the experiments is in the first paragraph of page 3 (under Table 1):\\n\\n\\u201cWe assume the top layers of the model trained on the full WSJ si284 set is a good classifier, while the top layers of the models trained on si284 subset or on tr05 multi are poorer classifiers \\u2013 since it is easier for a network trained on a relatively large, clean dataset (such as si284) to learn the underlying patterns in the data, compared with a network trained on the smaller (si284 subset) or noisier (tr05 multi) dataset. Therefore, if we can show that a good classifier (trained on si284) forces the feature extractor to learn useful patterns even from a small (si284 subset) or noisy dataset (CHiME-4), then this is evidence to support the good classifier hypothesis\\u201d\\n\\nWe have experimental results support \\u201csince the feature extractor learns more slowly, then potentially the classifier may overfit the feature extractor before the feature extractor is able to learn much about the underlying pattern of the data\\u201d\", \"the_evidence_is_stated_in_the_first_paragraph_of_page_4\": \"\\u201cOne piece of evidence for this is if we compare joint training on tr05 multi and\\non si284, the changes of weights for the first BLSTM layer is comparable. However, for all other\\nhigher layers, the weights have larger changes if they are trained on tr05 multi, the noisy dataset.\\u201d\\nOur further experiments support this argument (section 5).\\n\\nIn section 5, we find for CHiME-4, it is more beneficial to start the top-down training by freezing the layers from the middle layer to the topmost layer, rather than just freezing the topmost layer.\"}",
"{\"title\": \"We suppose almost all the questions are answered in the paper (part 2)\", \"comment\": \"8. The complexity of the algorithm is written to be O(n). However, this assumes training the model takes O(1) or did I miss something?\\n\\nHere we show the complexity of the top-down training method. The complexity of training the model is decided by the model, not the proposed training method. Thus, we do not consider the complexity of training the model.\\n\\n9. Can the authors provide more details/insights regarding the delta differences in Table 1? Did the authors use the same initializations? Did the authors try different ones?\\n\\nThe delta differences are the changes of the weights. The change of the weights indicates if a layer is \\u201clearning\\u201d. If all layers are trained jointly, top layers have larger weight changes and the model has high CERs. If trained with frozen top layers, compared to the jointly trained models, the lower layers have larger weight changes the model has much lower CERs. Thus, it shows in the joint training the top layers overfits the lower layers and the lower layer should be further trained.\\n\\nWe do not use the same initialization. Also, random initialization is not a critical factor here, since it is almost impossible to get over 20% CER (which is achieved by frozen the topmost layer in table 1) reduction by just trying different random initializations. Furthermore, to preclude the random initialization factor, in our WSJ experiments, we build three baselines with different random initialization.\\n\\n[1] Hannun, Awni Y., et al. \\\"First-pass large vocabulary continuous speech recognition using bi-directional recurrent dnns.\\\" arXiv preprint arXiv:1408.2873 (2014).\\n[2] Graves, Alex, and Navdeep Jaitly. \\\"Towards end-to-end speech recognition with recurrent neural networks.\\\" International conference on machine learning. 2014.\\n[3] Kim, Suyoun, Takaaki Hori, and Shinji Watanabe. \\\"Joint CTC-attention based end-to-end speech recognition using multi-task learning.\\\" 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2017.\\n[4] Serdyuk, D., Ke, N., Sordoni, A., Trischler, A., Pal, C. and Bengio, Y. \\u201cTWIN NETWORKS: MATCHING THE FUTURE FOR SEQUENCE GENERATION\\u201d. International Conference on Learning Representations. 2018.\\n[5] Chorowski, Jan K., et al. \\\"Attention-based models for speech recognition.\\\" Advances in neural information processing systems. 2015.\\n[6] Lu, Liang, et al. \\\"Segmental Recurrent Neural Networks for End-to-End Speech Recognition.\\\" Interspeech 2016 (2016): 385-389.\\n[7] Fern\\u00e1ndez, Santiago, Alex Graves, and J\\u00fcrgen Schmidhuber. \\\"Phoneme recognition in TIMIT with BLSTM-CTC.\\\" arXiv preprint arXiv:0804.3269 (2008).\"}",
"{\"title\": \"We suppose almost all the questions are answered in the paper (part 1)\", \"comment\": \"Thank you for your comments. Indeed, we find most questions are already addressed by the paper. We pinpoint the sections of the paper which address these questions. Please find below our step-by-step answers to your comments.\\n\\n1. First, the paper is poorly written; there are many claims the authors are making without providing experiments/proofs/citations.\\n\\nWe make these claims based on carefully designed experiments (section 2 of the paper). We justify our claims through showing the weight changes during training and the CERs of each model. The change of the weights indicates if a layer is \\u201clearning\\u201d. If all layers are trained jointly, top layers have larger weight changes and the model has high CERs. If trained with frozen top layers, compared to the jointly trained models, the lower layers have larger weight changes the model has much lower CERs. Thus, it shows in the joint training the top layers overfits the lower layers and the lower layer should be further trained.\\n\\n1.(b) For example: \\\"...since the feature extractor learns more slowly, then potentially the classifier may overfit the feature extractor before the feature extractor is able to learn much about the underlying pattern of the data...\\\". Or: \\\"...We suggest that the reason for this is that when all layers are trained on the noisy dataset jointly, the middle layers overfit the bottom-most layers much faster than the bottom-most layers are able to learn input features...\\\"\\nPlease find our detailed answer in the bottom.\\n\\n2. Next, since there is no theoretical/mathematical explanation of the proposed approach, I expect the authors to run an analysis on the results to better understand the effect of using such an approach.\\n\\nWe believe the analysis in section 2 justifies the proposed approach. Also, we have detailed analysis of our experiments in section 4 and section 5.\\n\\n3. For instance, under which settings this method is most efficient? In what layer should I start the fine-tuning? Is it better to reinitialize the bottom layers or fine-tune them?\\n\\nAll these questions can be answered by Algorithm 1 in section 3, and the analysis part in section 4 and section 5. The top-down training starts from freezing the topmost layer. The bottom layers should be reinitialized.\\n\\n4. Does the proposed approach applicable to different domains? i.e. vision/nlp/other speech/signal processing tasks?\\n\\nIn the main part of the paper we show it works for both clean/noisy speech data. In the appendix, we show our initial results for training a language model.\\n\\n5. Does the proposed approach applicable to different models or only for the proposed one?\\n\\nIn the appendix, we show the results for CNN-BLSTM. We also have results for encoder-decoder models on TIMIT. The results are in phone error rate and compared with other end-to-end models.\\nBiGRU encoder-decoder [5] 18.7\\nSegmental RNN [6] 20.5\\nBLTM CTC[7] 24.6\\nBLSTM encoder-decoder (ours) 19.1\\nBLSTM Encoder-decoder + top-down 18.4 (we just tried to freeze the decoder)\\n\\n6. did the authors tried to compute WERs too\\n\\nWe state the reasons in the first paragraph of section 6. The goal of this paper is to show the effectiveness of the proposed training method. Thus, we need exact fair comparisons. If we report WERs then possibly we need to decode with a language model. 
However, in this case, we would have an extra component and it may blur the comparisons.\\n\\n7. The baseline seems relatively weak, at least in Table 1.\\n\\nAs stated in section 2, the experimental results in Table 1 are meant to support the good classifier hypothesis. The purpose of these experiments is not to show good CERs. Indeed, for Table 1, the datasets are a subset of the full WSJ and CHiME-4 without data augmentation. Thus, with the current models, it is hard to get very low CERs.\\nWe show that the proposed methods can lead to good CERs using the full WSJ and CHiME-4 with data augmentation. For WSJ eval 92, here are more results (CER) from previous works:\\n\\nBRDNN CTC [1] 10.0\\nBLSTM CTC [2] 9.2\\nBLSTM CTC [3] 9.0\\nEncoder-Decoder [3] 8.2\\nCTC- Encoder-Decoder [3] 7.4\\nEncoder-Decoder + TwinNet [4] 6.2\\nOurs (BLSTM CTC) 6.3\\n\\nThus, we believe we have a very strong number for our CTC models. Compared to the encoder-decoder models, which have more capacity and are more flexible, the CTC model trained with the proposed method has better or comparable results.\"}",
"{\"title\": \"A problem of ReLU activation in genearl, not a problem of the top-down training\", \"comment\": \"Hi, thank you very much for your interets in this work and the comments. If the word \\\"activated\\\" means the output of ReLU is larger than $0$, I suppose it is not a problem for this training algorithm. Rather, it is a problem of the ReLU activation itself.\\n\\nIn an abstract manner, for one hidden unit with paramter $w$, when $w$ is not frozen, although $w$ changes dynamically during training, in the current pass, still, this unit \\\"can be only activated by specific patterns\\\". \\n\\nIn a more detailed way, for one hidden unit with paramter $w$, if the distribution of the input $x$ is symmetry through the origin (which is in general assumed or achieved by normalization), then the chance of $wx$ is larger than $0$ is always 50%, no matther the value of $w$ or if $w$ is frozen. If $w$ is not frozen and the sign of $w$ flips during training, then if the sign of $x$ does not change, whether $wx$ is larger than 0 also flips. However, statistically, the chance of if $wx$ is larger than $0$ remains 50% (if the distribution of $x$ also remains symmetry through the origin). Thus, whether $w$ is frozen or not, the chance of whether $x$ will have a non-zero partial derivative is always 50%. \\n\\nIn terms of experimental results, our algorithm works for CNN-BLSTM models (taking appendix A for an example). We will do more experiments for pure CNN models.\"}",
"{\"title\": \"Some problems about the algorithm\", \"comment\": \"The upper layers of a *well trained* neural network can be only activated by some specific patterns.\\n\\nHowever, during the training process of top-down training setting, the top layers are frozen while the bottom layers are trained with random initialization. It may lead to a problem that in forward pass of early stage the feature maps in the bottom layers are noise due to random initialization and they may not pass the *relu function* because top layers can be only activated by specific patterns. It means little signal can be passed in the upper layers and it may hurt the training process.\\n\\nI am curious whether the authors encounter this situation because there seems no guarantee to avoid this problem in the algorithm.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"=========================\\nUpdate review\\nAfter reading the authors response I would like to keep my score as is.\\nI still see many unclear statements, and most importantly I feel that more analysis of the proposed method should have been done here. \\n=========================\\n\\nThis paper proposed a Top-Down method for neural networks training based on the good classifier hypothesis. In other words, after obtaining a classifier that performs well on the test set, keep fine-tuning / re-learning the data representation.\\nThe authors provide character error rate results for the task of Automatic Speech Recognition using WSJ and CHiME-4 datasets.\\n\\nAlthough being an interesting research idea, several issues in this paper make it not yet ready for publication at ICLR. \\n\\nFirst, the paper is poorly written; there are many claims the authors are making without providing experiments/proofs/citations.\", \"for_example\": \"\\\"...since the feature extractor learns more slowly, then potentially the classifier may overfit the feature extractor before the feature extractor is able to learn much about the underlying pattern of the data...\\\".\", \"or\": \"\\\"...We suggest that the reason for this is that when all layers are trained on the noisy dataset jointly, the middle layers overfit the bottom-most layers much faster than the bottom-most layers are able to learn input features...\\\"\\n\\nNext, since there is no theoretical/mathematical explanation of the proposed approach, I expect the authors to run an analysis on the results to better understand the effect of using such an approach. For instance, under which settings this method is most efficient? In what layer should I start the fine-tuning? Is it better to reinitialize the bottom layers or fine-tune them? Does the proposed approach applicable to different domains? i.e. vision/nlp/other speech/signal processing tasks? Does the proposed approach applicable to different models or only for the proposed one?\\n\\nLastly, although it is not the main point in this paper since all results are reported on ASR, did the authors tried to compute WERs too? That way, people can compare results with other ASR models. The baseline seems relatively weak, at least in Table 1.\", \"minor_comments\": \"The complexity of the algorithm is written to be O(n). However, this assumes training the model takes O(1) or did I miss something?\\nCan the authors provide more details/insights regarding the delta differences in Table 1? Did the authors use the same initializations? Did the authors try different ones?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work proposed a mechanism to freeze top layers after supervised pre-training, and re-initialize and retrain the bottom layers. For a model with n layers, when a separation index i is specified, the approach define layer 1~i as bottom layers and i+1~n as top layers. The proposed process enumerate all i from 1 to n-1, compute resulting validation errors respectively, and then pick the i with lowest validation error. The algorithm exhibited significant improvement on WSJ and some minor improvement on CHiME-4.\\n\\nThis work provides some new insight for training ASRs and the observations provide further data points for understanding the training behavior. The layer freezing trick however is relatively well-known, and thus leaving the novelty of the proposed idea to be limited at what layers they choose to freeze.\\n\\nIn algorithm 1 it describes the mechanism as having two loops while it really only needs one loop. The author mentioned they used a simplified version later in the text, and I\\u2019ll suggest to update the algorithm block to make it clearer.\"}",
"{\"comment\": \"Hi, thank you very much for your interests and your comments. I think there is no contradicts between our statement and [1]. We need to carefully define what the word \\u201cfast\\u201d means. In [1], it means the speed of learning useful features. In our statement, it means the speed of fitting other layers. Thus, it takes more epochs for the top layers to learn useful features; while in the same time, the top layers fit the bottom layers much faster than the bottom layers fit the top layers. In this work, we stop training the top layers when they begin to overfit the bottom layers; we are not saying the top layers learn useful features earlier, so we stop training it earlier. Actually, we stop the training of the top layer until the joint training converges.\\n\\nWhen we define the term \\u201cfast\\u201d in the scope of fitting other layers, in our paper, we did two experiments to show that after convergence, freezing the bottom (most) layer and retrain the top layers does not help. In fact, we found the experiments in [2] further show that freezing the bottom layers does not help in general. \\n\\n[1] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]//European conference on computer vision. Springer, Cham, 2014: 818-833.\\n\\n[2] Yosinski, Jason, et al. \\\"How transferable are features in deep neural networks?.\\\" Advances in neural information processing systems. 2014.\", \"title\": \"\\\"Faster\\\" here refers to the speed of fitting other layers, not the speed of learning features\"}",
"{\"comment\": \"Hello, I really enjoy reading your paper and it is insightful. In your abstract, you mentioned the top layers (closer to the output) in the network learning faster when compared with lower layers closer to the input. Could you explain more about why the gradient vanishing problem leads to faster learning in top layers?\\n\\nI read a paper [1] saying that the lower layers of the model can be seen to converge within a few epochs. However, the upper layers only develop after a considerable number of epochs (40-50), demonstrating the need to let the models train until fully converged. The authors of [1] seem to have different ideas.\\n\\n[1] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]//European conference on computer vision. Springer, Cham, 2014: 818-833.\", \"title\": \"Which layers learn faster? Lower or deeper?\"}"
]
} |
r1erNxBtwr | Demystifying Graph Neural Network Via Graph Filter Assessment | [
"Yewen Wang",
"Ziniu Hu",
"Yusong Ye",
"Yizhou Sun"
] | Graph Neural Networks (GNNs) have received tremendous attention recently due to their power in handling graph data for different downstream tasks across different application domains. The key component of a GNN is its graph convolutional filters, and recently various kinds of filters have been designed. However, in-depth analysis is still lacking on (1) whether there exists a best filter that can perform best on all graph data; (2) which graph properties will influence the optimal choice of graph filter; (3) how to design an appropriate filter adaptive to the graph data. In this paper, we focus on addressing the above three questions. We first propose a novel assessment tool to evaluate the effectiveness of graph convolutional filters for a given graph. Using the assessment tool, we find that there is no single filter serving as a `silver bullet' that performs best on all possible graphs. In addition, different graph structure properties will influence the optimal graph convolutional filter's design choice. Based on these findings, we develop Adaptive Filter Graph Neural Network (AFGNN), a simple but powerful model that can adaptively learn a task-specific filter. For a given graph, it leverages graph filter assessment as regularization and learns to combine a set of base filters. Experiments on both synthetic and real-world benchmark datasets demonstrate that our proposed model can indeed learn an appropriate filter and perform well on graph tasks. | [
"Graph Neural Networks",
"Graph convolutional filter analysis",
"representational power"
] | Reject | https://openreview.net/pdf?id=r1erNxBtwr | https://openreview.net/forum?id=r1erNxBtwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"HYRA5-a9WD",
"BJxgrTJ3iH",
"BkeIi3y3iH",
"Bkg0BVP8oB",
"rJxjpfv8or",
"BklViGPUsB",
"S1liHkwIiB",
"SklzDh7NcB",
"H1gL4Eq0tr",
"rygcY3OCFB",
"S1xLBztAuH",
"SJlmRFw2OB",
"rklZkk_ZuH",
"BkxQrdIyOH",
"rJgE7xBJur"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1576798744265,
1573809464418,
1573809309625,
1573446725744,
1573446338852,
1573446300236,
1573445443130,
1572252762474,
1571886126513,
1571880066282,
1570832957968,
1570695626680,
1569976025003,
1569839163073,
1569832987673
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2247/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2247/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2247/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"~Deli_Chen1"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2247/Authors"
],
[
"~Yilun_Jin1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper investigates graph convolutional filters, and proposes an adaptation of the Fisher score to assess the quality of a convolutional filter. Formally, the defined Graph Filter Discriminant Score assesses how the filter improves the Fisher score attached to a pair of classes (considering the nodes in each class, and their embedding through the filter and the graph structure, as propositional samples), taking into account the class imbalance.\\n\\nAn analysis is conducted on synthetic graphs to assess how the hyper-parameters (order, normalization strategy) of the filter rule the GFD score depending on the graph and class features. As could have been expected there no single killer filter.\\n\\nA finite set of filters, called base filters, being defined by varying the above hyper-parameters, the search space is that of a linear combination of the base filters in each layer. Three losses are considered: with and without graph filter discriminant score, and alternatively optimizing the cross-entropy loss and the GFD; this last option is the best one in the experiments.\\n\\nAs noted by the reviewers and other public comments, the idea of incorporating LDA ideas into GNN is nice and elegant. The reservations of the reviewers are mostly related to the experimental validation: of course getting the best score on each dataset is not expected; but the set of considered problems is too limited and their diversity is limited too (as demonstrated by the very nice Fig. 5).\\n\\nThe area chair thus encourages the authors to pursue this very promising line of research and hopes to see a revised version backed up with more experimental evidence.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer1 (cont.)\", \"comment\": \"Q6: The results on three real datasets do not show significant gains.\", \"a6\": \"First, our method performs the best among all the baselines that also adopt the same base filter family, and only perform worse than GAT, which has a much wider filter family space than us. However, GAT actually requires much more computation resources than us. Compared with GAT, our model needs less time and memory costs (According to our results in Table 6 and 7, GAT's time cost is at least three times of AFGNN's, and GAT\\u2019s memory cost is two times of AFGNN's.). Also, our model can deal with class imbalance issues much better than GAT (the performance of GAT on SmallRatio and Imbalanced OAG are not good, our $AFGNN_{infinity}$ achieves 93.8 and 96.3 on these two datasets, while GAT only achieves 82.1 and 95.1). Second, currently we only use a family of simple base filters, and the performance of AFGNN is expected to be further improved by enlarging the filter family. We leave the design of the filter family as future work. Finally, we want to emphasize again that this paper\\u2019s biggest contribution is to understand and evaluate GNNs\\u2019 filter rather than to propose another GNN model. We find some hard cases that existing GNNs with fixed filter can not handle well, and propose AFGNN to enhance the performance under these hard cases. Existing benchmark real-world datasets are not sensitive enough to differentiate existing baseline models, so we also analyze how to find a more powerful dataset that can differentiate different models and generate synthetic datasets based on our analytics.\", \"q7\": \"Inductive learning (e.g., PPI) is not tested.\", \"a7\": \"Thanks for pointing out. Our current proposed model is designed mainly for transductive semi-supervised node classification, and may have some limitations for the inductive setting in some cases. But our analysis result can help design the inductive filter learning model.\\n\\nOur analysis assumes that for a set of feature information (X), structure information (A), and their dependency relationship with labels (A|Y and X|Y), there should exist an optimal filter, and our algorithm is designed to learn a good filter for a single graph, which can approximate such optimal filter. For a transductive setting (for example, Cora, Citeseer, Pubmed), where we only need to deal with a single graph, our algorithm has shown to be effective to learn a good filter for it. For an inductive setting (for example, PPI dataset), where we are dealing with multiple graphs with different graphs, the structure information (A) is different for each graph, and thus the optimal filter for each graph can be different. Since our current algorithm can only learn a single filter for all the graphs, if the optimal filters for testing graphs are different from what the ones for training graphs, our current algorithm cannot improve the performance too much. For cases where the graph structure property doesn\\u2019t change too much, we can assume that there still exists a single optimal filter and our current algorithm can generalize well.\\n\\nWe evaluate our model on PPI and found our AFGNN obtains better performance than all the other baselines but is worse than GAT. 
This is mainly because PPI have different chemical graphs that have totally different structures, and thus fall into the first case where our algorithm that only learns a single filter cannot improve the performance too much.\\n\\nIn spite of the limitation of our current model, we\\u2019d like to point out that it is feasible to adopt our analysis result for designing an inductive filter learning algorithm. Since we\\u2019ve already found that we can infer the optimal filter based on the graph data\\u2019s properties (e.g., label imbalance will benefit column normalization, etc), we can design an model (f) that takes these graph properties as input and infer the optimal alpha, instead of learning alpha from scratch for a new graph. If we can train the f with graphs with various properties, ideally it should learn to get optimal filter for any kind of graph data. In this way, such a model can be well suitable for inductive node classification. Therefore, our analysis can still be useful to deal with inductive node classification and even other graph-related tasks. Since such model improvement is out of range to what we want to focus on in this paper, we leave it as future work.\\n\\n\\n\\n[1] https://www.openacademic.ai/oag/\\n[2] Fanjin et al. OAG: Toward Linking Large-scale Heterogeneous Entity Graphs. In Proc. of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'19)\\n[3] Arnab Sinha, et al. An Overview of Microsoft Academic Service (MAS) and Applications. In Proc. of the 24th International Conference on World Wide Web (WWW \\u201915)\"}",
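If it helps to visualize the inductive extension sketched in A7, the following is a purely hypothetical PyTorch fragment of such a model f mapping summary graph properties to filter-combination weights alpha. The property set, dimensions, and all names here are assumptions for illustration, not part of the paper.

```python
import torch
import torch.nn as nn

num_base_filters = 6                      # assumed size of the base filter family
f = nn.Sequential(                        # hypothetical property-to-alpha model f
    nn.Linear(3, 16),                     # 3 assumed graph-property features
    nn.ReLU(),
    nn.Linear(16, num_base_filters),
    nn.Softmax(dim=-1),                   # alpha sums to 1 over base filters
)
props = torch.tensor([0.3, 0.05, 0.02])   # e.g. label ratio, density, density gap
alpha = f(props)                          # filter-combination weights for a new graph
```

Trained across graphs with diverse properties, such an f could in principle predict a good filter for an unseen graph without per-graph optimization, which is the direction the response points to.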
"{\"title\": \"Response to Reviewer1\", \"comment\": \"Thank you for your valuable feedback! We improved our paper based on your advice (we marked the modifications related to your suggestions with orange text, and highlighted the previous version with strikethrough)\\uff1a\", \"https\": \"//drive.google.com/file/d/1qAe72_w9Zn_mXg7rwu25o54kcQ6QhsME/view?usp=sharing\\n\\nThe reviewer is majorly concerned about whether our pointed problem indeed exists on real-world datasets, and whether our proposed method can solve it. To alleviate this concern, we added a new real-world dataset and show consistent results with our analysis, which is discussed in Q1.\", \"now_we_address_your_comments_and_concerns_in_detail\": \"\", \"q1\": \"Synthetic datasets appear to be extreme and unrealistic and look carefully selected in favor of the proposed method.\", \"a1\": \"The problem we found out is not unrealistic and carefully selected, but indeed appear in real-world datasets. Each synthetic dataset we choose corresponds to a specific graph data property we analyze in Section 3.2, including \\\"small density gap\\\" (i.e., the graph structure is not highly correlated with labels) and \\\"large label ratio gap\\\" (i.e., classes are imbalanced). These properties widely occur in real-world datasets. To further justify this, we choose the \\\"large label ratio gap\\\" as an example and try to find a graph dataset that has this problem. We download a large scale academic graph called Open Academic Graph (OAG) [1][2][3] and choose two fields that have a large disparity in the number of papers: (1) \\u201cHistory of ideas\\u201d, which consists of 1041 papers; and (2) \\u201cPublic history\\u201d, which consists of 150 papers. Obviously, these two classes are imbalanced so that the graph data has the \\\"large label ratio gap\\u201d problem. We then compare our method with baselines on this OAG graph data (We open-source this dataset in the github. Detailed experiment settings and results are in Appendix A.10). According to our results (Table 8), our proposed AFGNN_inf achieves 88.22 macro F1-score, which outperforms all the other baselines (our macro F1 is at least 3% higher than all the baselines). Such a result on the real-world dataset is consistent with what we achieve on the same synthetic dataset, which indicates that the problem we reveal is not \\u201cunrealistic\\u201d, but exists in real-world datasets. The widely adopted benchmark graph datasets (cora, citeseer, pubmed), however, do not have these potential problems. Thus our analysis can also benefit the GNN research community to find other representative benchmark datasets for the node classification task.\", \"q2\": \"Model is not novel\", \"a2\": \"We\\u2019d like to emphasize that the main contribution of our work is the proposed graph filter assessment tool (GFD score) and the insights we found with the tool, which provides a unique perspective in understanding why GNN will work and how we should choose graph filters for graph data with different properties. The AFGNN model is our first attempt to learn a flexible filter that is adaptive to the graph data by leveraging the GFD score, which has successfully demonstrated the power of our assessment tool. The basic idea of the AFGNN is simple, but it works well with much less memory and time consumption than the sophisticated model as GAT. 
According to our results (Table 6 and 7), on Cora and Citeseer, GAT's time cost is at least three times of AFGNN's time cost, and GAT\\u2019s memory cost is two times of AFGNN's memory cost.\", \"q3\": \"\\\"Regularization term\\\" is inadequate.\", \"a3\": \"Thanks for pointing it out. The standard definition of regularization is: regularization is the process of adding information in order to solve an ill-posed problem or to prevent overfitting. Our GFD is added to the loss function to guide the learning process of the filter and to avoid overfitting, therefore, previously we call this GFD term a regularization. To avoid confusion, we have changed it into \\u201cGFD loss\\u201d.\", \"q4\": \"$AFGNN_{infinity}$ is not equivalent to applying infinite lambda\", \"a4\": \"Thanks for pointing it out, we agree that our previous writing in this part is not accurate enough. We use the previous writing to emphasize we optimize $\\\\alpha$ and $W$ iteratively by minimizing CrossEntropy loss and GFD loss respectively. We have improved our writing now.\", \"q5\": \"For $AFGNN_1$, is it also iteratively optimized as $AFGNN_{infinity}$?\", \"a5\": \"$AFGNN_1$ and $AFGNN_{infinity}$ are different. For $AFGNN_1$, we learn $\\\\alpha$ and W simultaneously by directly minimizing the overall objective function (=CrossEntropy loss + GFD loss), but for AFGNN_inf, we learn $\\\\alpha$ and $W$ separately. We have improved our writing for this part to make it more clear.\"}",
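To make the two training regimes described in A4 and A5 concrete, here is a minimal PyTorch sketch of an AFGNN-style layer: a softmax-weighted combination of precomputed base filters, trained either jointly on the combined loss ($AFGNN_1$) or by alternating updates ($AFGNN_{infinity}$). This is an illustrative reconstruction from the responses, not the authors' released code; names such as `AdaptiveFilterLayer`, `base_filters`, and the GFD-loss hook are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFilterLayer(nn.Module):
    """One AFGNN-style layer: a learned convex combination of base graph filters."""
    def __init__(self, base_filters, in_dim, out_dim):
        super().__init__()
        self.base_filters = base_filters          # list of (N x N) filter matrices
        # psi: unnormalized per-filter weights; alpha = softmax(psi)
        self.psi = nn.Parameter(torch.zeros(len(base_filters)))
        self.W = nn.Linear(in_dim, out_dim)

    def forward(self, X):                         # X: (N x in_dim) node features
        alpha = F.softmax(self.psi, dim=0)        # combination weights alpha
        FX = sum(a * (Fk @ X) for a, Fk in zip(alpha, self.base_filters))
        return self.W(FX)

# AFGNN_1: update psi and W jointly on loss = cross-entropy + GFD loss.
# AFGNN_infinity: alternate -- update psi on the GFD loss alone,
# then update W on the cross-entropy loss alone.
```

Because alpha is a softmax over the filter family, the learned filter always stays inside the convex hull of the base filters, which is what lets the GFD analysis of the base filters carry over to the learned one.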
"{\"title\": \"Response to Reviewer1\", \"comment\": \"We appreciate your valuable comments and we are now actively working on the supplementary experiments on real-world datasets!\"}",
"{\"title\": \"Response to Reviewer3 (cont.)\", \"comment\": \"Q6: It would be better if the explanation of the training loss section is more detailed and clear.\", \"a6\": \"Generally, our overall loss is a weighted sum of cross-entropy loss in terms of node classification and GFD loss in terms of the filter\\u2019s capability in enhancing linear separability. By changing the value of the weight, we have $AFGNN_0$, $AFGNN_1$, and $AFGNN_{infinity}$. $AFGNN_0$ and $AFGNN_1$ correspond to the case that the weight of GFD loss is 0 and 1 respectively, and parameter $\\\\alpha$ and W are optimized simultaneously with overall loss. $AFGNN_{infinity}$ is different, and it is not exactly the case where the weight equals infinity. To train $AFGNN_{infinity}$, we iteratively optimize $\\\\alpha$ and W with GFD loss and classification loss respectively. Thanks for pointing out this unclear part, we have revised our writing to make it more clear.\", \"q7\": \"What is \\u201cAFGNN_P\\u201d in the experiment analysis?\", \"a7\": \"It should be $AFGNN_{infinity}$, thanks for pointing out this typo.\", \"q8\": \"It could be interesting to see the time comparison between the proposed method and the GAT.\", \"a8\": \"Thanks for the valuable suggestion! We have added the time, memory comparison table in Appendix A.9, our experiment results show our AFGNN models need less time and memory consumption. According to our results (Table 6, 7), on Cora and Citeseer, GAT's time cost is at least three times of AFGNN's time cost, and GAT\\u2019s memory cost is two times of AFGNN's memory cost. GAT does not have recorded time and memory cost for Pubmed dataset because it requires too much memory cost and is not able to run on GPU. Therefore, we claim that AFGNN needs less time and memory cost than GAT.\", \"q9\": \"For the graph filter discriminant analysis, is it fair to compare the learned layer with the other base filter using the GFD score? Since the learned layer is picked with the highest GFD score. Maybe one or two sentences on this will be helpful.\", \"a9\": \"First, we\\u2019d like to clarify that we do not pick the best filter from the filter family. Instead, the combination weights (alpha) are learned in an end-to-end fashion on the training dataset, while the evaluated GFD scores are calculated on the test dataset. Therefore, it\\u2019s not guaranteed that a learned filter will definitely generalize better than all the base filters. Second, this experiment is to verify whether we can learn an optimal filter adaptively instead of using a fixed filter. For different datasets, there exist different optimal base filters (for example, column normalization is the best for SmallRatio and row normalize is the best for benchmark citation network), and our algorithm can indeed learn a good combination of them that generalizes well, as we expected.\", \"q10\": \"The writing of the paper must be improved. Too many typos and grammar problems will impair the presentation and the reader can be distracted.\", \"a10\": \"Thank you for pointing them out. We have carefully proofread our paper again and polished the paper to alleviate typos and grammar errors.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"Thank you for your constructive comments! We improved our paper based on your advice (we marked the modifications related to your suggestions with red text, and highlighted the previous version with strikethrough):\", \"https\": \"//drive.google.com/file/d/1adtmKH61RLyBKzn46DWdEj5JQvu5M5WO/view?usp=sharing\\n\\nFirst, we want to emphasize this paper\\u2019s contribution. Our work provides a theoretical understanding to GNNs in a novel way, and we are the first to analyze GNNs for the node classification task from a data perspective. Since rich literature demonstrated that the key of GNNs lies in their graph convolutional filters, we propose a new assessment tool (GFD) to evaluate the effectiveness of filters given a specific graph data Further, this tool is applied to analyze existing filters and found some meaningful insights. Finally, we propose the AFGNN model to automatically learn the best filter from the given family (i.e., learn the best coefficients for the linear combination of a set of filters) for the given graph data.\", \"now_we_address_your_comments_and_concerns_in_detail\": \"\", \"q1\": \"Is it reasonable to use the Fisher score\\u201d to support the second term in equation (3), where the Fisher score is used to evaluate before the filters applied?\", \"a1\": \"We\\u2019d like to clarify our GFD is an assessment tool to see whether a filter is good for a particular graph data.\\nFisher Score is used to evaluate the separability of data. GFD defined in Eq. (3), which is the Difference of Fisher Score before and after the filter, is to evaluate whether a filter can increase the data separability. \\nAs we show that a good graph convolutional filter can help the non-linear separable data to be linearly separable, we can expect the GFD score is higher for these filters. Note that not every graph filter has this property for every dataset, and our final model is to learn the best filter that could enhance this property for a given dataset. Experimental results in Figure 5 and Table 2 also support this claim.\", \"q2\": \"The presentation of the last paragraph of \\u201cgraph filter discriminant score\\u201d in page 4 can be improved. The Figure references seem incorrect and confusing.\", \"a2\": \"As mentioned in Q1, we use Fisher score to evaluate the separability of two classes, we use Fisher Score Difference to evaluate the power of a filter on two classes, and we use GFD, which is a weighted sum of Fisher Score Difference of each pair of classes, to evaluate the power of a filter on the given graph. We have revised our writing and corrected our Figure reference to make this more clear.\", \"q3\": \"The analysis of the influence of label ratio seems not accurate enough.\", \"a3\": \"Suppose the density and density gap are fixed, when label ratio drops, which means the two classes become more imbalanced, and nodes in a larger class tend to have more neighbors. Then, with the column normalization strategy that does not have any constraint on the range of representation, those nodes with a larger degree tend to aggregate more information and thus have larger new representations. This would be helpful to differentiate the two classes. Take Figure 6 (g) as an example, nodes in the large-size class (green nodes) are gathered in the upper right part while nodes in the small-size class (purple nodes) are gathered in the lower left part, so the two classes become more separable after applying a column-normalized filter. 
We revised our writing to make this more clear.\", \"q4\": \"For the GFD score comparison in Figure 4, why choose order 1,3,7 for density and different order 2,3,6 for the density gap?\", \"a4\": \"Thanks for pointing out. Previously we just pick the orders that can show our findings most clearly, but we agree it is important to use consistent choice in two subfigures. We now choose the same set of orders in these two figures. The result remains the same.\", \"q5\": \"What is the meaning of the symbol psi(l)?\", \"a5\": \"It is a learnable intermediate weight (before normalization) for each base filter. We then apply softmax normalization to it to get alpha(l). We revised our writing to make this more clear.\"}",
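As a concrete illustration of the Fisher score and GFD described in A1 and A2 above, here is a small NumPy sketch. The paper's exact definitions may differ, and in particular the class-size weighting below is an assumption; the sketch only shows the before/after-filter comparison structure the responses describe.

```python
import numpy as np

def fisher_score(Z, y, i, j):
    """Classic two-class Fisher score on representations Z:
    between-class separation over within-class scatter."""
    Zi, Zj = Z[y == i], Z[y == j]
    mu_i, mu_j = Zi.mean(axis=0), Zj.mean(axis=0)
    between = np.sum((mu_i - mu_j) ** 2)
    within = Zi.var(axis=0).sum() + Zj.var(axis=0).sum()
    return between / (within + 1e-12)

def gfd_score(X, FX, y, classes):
    """GFD as described in the responses: per-pair difference of the
    Fisher score after (FX) vs. before (X) the filter, combined with a
    class-size weighting (the weighting used here is assumed)."""
    total = 0.0
    for a in range(len(classes)):
        for b in range(a + 1, len(classes)):
            i, j = classes[a], classes[b]
            w = min((y == i).sum(), (y == j).sum()) / len(y)  # assumed weighting
            total += w * (fisher_score(FX, y, i, j) - fisher_score(X, y, i, j))
    return total
```

A positive GFD then indicates that applying the filter made the classes more linearly separable, which is exactly the quantity the GFD loss encourages during training.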
"{\"title\": \"Thank you for your support for our paper!\", \"comment\": \"Thank you so much for the positive feedback! We really appreciate your support for our paper as well as your constructive suggestions. We have improved our paper based on your advice (we marked the modifications related to your suggestions with blue text, and highlighted the previous version with strikethrough):\", \"https\": \"//drive.google.com/file/d/1wJYwz1oPDK1-NbpesHUR6ZMCSBRVxdh3/view?usp=sharing\\n\\nFor Eq. (3): to get the GFD score for a multi-class setting, we first calculate the Fisher Difference for each possible pair of classes, then normalize them based on class size, and finally sum them together to get GFD score. Based on the normalization, the class imbalance would not be a problem. We have improved our writing to make this part more clear.\\nFor the writing errors, thank you for pointing them out. We have corrected all the errors pointed out by you and also have carefully gone through the paper to improve the writing.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces an assessment framework for an in-depth analysis of the effect of graph convolutional filters and proposes a novel graph neural network with adaptable filters based on the analysis. The assessment framework builts on the Fisher discriminant score of features and can also be used as an additional (regularization) term for choosing optimal filters in training. The assessment result shows that there is no single graph filter for all types of graph structures. Experiments on both synthetic and real-world benchmark datasets demonstrate that the proposed adaptive GNN can learn appropriate filters for different graph tasks.\\n\\nThe proposed analysis using the Fisher score is reasonable and interesting, giving us an insight into the role of graph filters. Even though the analysis is limited (using simple graph models and filter family) and the result is not surprising (given no free lunch theorem, there is very likely to be no single silver bullet fo graph filters), I appreciate the analysis and the result. But, I have some concerns as follows. \\n\\n1) The proposed GNN and the optimization process\\nThe proposed method is to extend CNN to a simple linear combination of different filter bases with learnable weights, which I don't think is very novel. Adding the GFD score as an additional constraint term is interesting, but the way of optimizing the whole objective function is unclear. (In addition, I think calling it the \\\"regularization term\\\" is inadequate since the term actually involves data observation, rather than a prior on parameters only.) \\nIn the case of AFGNN_inf, I don't think it is equivalent to applying infinite lamda. If lamda is infinite, L_CE needs to be completely ignored. This needs to be clarified. \\nIn the case of AFGNN1, I don't clearly understand how the whole objective function is properly optimized with fixed data representation. Is it also iteratively optimized? I hope this is also clarified in more detail. \\n\\n2) Unconvincing experiments\\nThe results on three real datasets do not show significant gains, and two of them are even worse than those of GAT. Furthermore, inductive learning (e.g., protein-protein interaction (PPI) dataset used in GAT) is not tested, which I think needs to be also evaluated. While two synthetic datasets (SmallGap and SmallRatio) created by the authors show significant improvement, these datasets appear to be extreme and unrealistic and look carefully selected in favor of the proposed method. I recommend the authors use for evaluation more realistic datasets that can be found in related research.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors raise and address three questions about graph neural network: (1) Whether there is a best filter that works for all graphs. (2) Which properties of the graph will influence the performance of the graph filter. (3) How to design a method to adaptively find the optimal filter for a given graph.\\nThe paper proposes an assessment method called the Graph Filter Discriminant Score based on the Fisher Score. It measures how well the graph convolutional filter discriminate node representations of different classes in the graph by comparing the Fisher score before and after the filter.\\nBased on the GFD scores of different normalization strategy and different order of the graph filter in the experiments on synthetic data, the authors answer the first two questions: (1) There is no optimal normalization for all graphs. (2) row normalization performs better with lower power-law coefficient, but works worse with imbalanced label classes and large density gap.\\nFor the third question, the authors propose a learnable linear combination of a limited family of graph convolutional filters as the layer of model AFGNN, which can learn the optimal arguments of the combination based on the FGD score.\\nThe paper focuses on a significant topic and proposes an assessment tool for the graph filters. Based on that, it also introduces a model to choose filters from a family of filters for any specific graph.\\nThe description of preliminaries is clear.\\nThe observations of the impact of the graph properties on the filter choice are interesting and explanations are provided.\\nThe results of the test accuracy on both bench mark and synthetic datasets demonstrate the good performance of the proposed model.\\nIt is good that the paper provides proof for the claim that the graph convolutional can help the non-linear separable data to be linearly separable, so it is reasonable to use Fisher score. However, does this claim support the second term in equation (3), where the Fisher score is used to evaluate before the filters applied?\\nThe presentation of the last paragraph of \\u201cgraph filter discriminant score\\u201d in page 4 can be improved. The Figure references seem incorrect and confusing.\\nThe analysis of the influence of label ratio seems not accurate enough.\\nFor the GFD score comparison in Figure 4, why choose order 1,3,7 for density and different order 2,3,6 for density gap?\\nWhat is the meaning of the symbol psi(l)?\\nIt would be better if the explanation of the raining loss section is more detailed and clear.\\nWhat is \\u201cAFGNN_P\\u201d in the experiment analysis?\\nIt could be interesting to see the comparison of time between the proposed method and the GAT.\\nFor the graph filter discriminate analysis, is it fair to compare the learned layer with the other base filter using the GFD score? Since the learned layer is picked with highest GFD score. Maybe one or two sentences on this will be helpful.\\nThe writing of the paper must be improved. 
Too many typos and grammar problems will impair the presentation and the reader can be distracted.\", \"minor_comments\": \"The layout of the sub caption of Figure 1 can be improved.\\nThe usage of capital letter in the phrase \\u201cdensity gap\\u201d is inconsistent.\\n\\u201cAs shown in figure\\u201d instead of \\u201cAs is shown in figure\\u201d.\\nMany sentences miss article.\\nThere are many typos in the writing.\\nFor example, \\u201cNote that for given (feature)\\u201d, \\u201c\\u2026make the representation of nodes in different (class) more separable.\\u201d, \\u201cNoted that there are some other (variant) of GNN filters that (does) not fall into\\u2026\\u201d in page 4.\\n\\u201cHere we give (a) empirical explanation to this phenomenon\\u201d, \\u201cthis normalization strategy (take) into account\\u2026\\u201d, \\u201cThus even in the case that the other two (doesn\\u2019t) perform well\\u2026\\u201d in page 5.\\n\\u201c\\u2026a very important factor that (influence) the choice of normalization strategy\\u201d, \\u201cwhen power-law coefficient (decrease)\\u201d, \\u201cwhen the (sizes) of each class (become) more imbalanced\\u201d, \\u201cThis is because column normalization better (leverage) \\u2026\\u201d , \\u201cin a similar manner (with) label ratio\\u201d, \\u201cwhen the density or density gap (increase)\\u201d, \\u201chigh-order filters can help gather\\u2026 and thus (makes) the representations\\u2026\\u201d, \\u201cwhen the density gap (increase)\\u201d in page 6.\\nThese can be continued but it is obvious that this paper needs proofreading.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This is a very interesting study about GNN. Authors proposed to extend LDA as a discrimination evaluator for graph filleters. Also authors proposed Adaptive Filter Graph Neural Network to find the optimal filter within a limited family of graph convolutional filters. The whole study is novel, and beneficial for the community of graph neural network study. It provides a new way to understand and evaluate GNN.\\n\\nThere are some questions authors should clarify. And some writing errors to correct.\\nEq (3) defines GFD for a pair of classes i and j. For a graph with more than two classes, the GFD will be the average of all pairs? Will class imbalance will have any impact on this GFD measure?\", \"errors\": \"\\u2022\\twe studies the roles of this two components \\n\\u2022\\tthere exist a best choice \\n\\u2022\\twe only consider to to find\"}",
"{\"comment\": \"Thank you so much for the comment!\\n\\nThe key point of our work is to analyze the GNNs for node classification from a data perspective. We pointed out that there\\u2019s no best GNN filter for all datasets, and we proposed a GFD score that can assess the power of filter and help to find the optimal filter for a given dataset as well.\", \"for_question_1\": \"We proposed the AFGNN to verify our analysis, according to the experiment result, for whichever dataset, the performance of our proposed AFGNN is always among the best, which shows AFGNN is robust and effective and indicates GFD can help to select the best filter for a given dataset. Also, the current benchmark datasets cannot clearly differentiate different graph neural networks (as shown in Table 2, while the order is the same, scores of different filters are close to each other). So in our work, we identify some challenging cases for existing GNNs, then create corresponding synthetic benchmark datasets to test all GNN models. It will also guide us to look for real-world graph data to serve as the new benchmark datasets.\", \"for_question_2\": \"For the benchmark datasets split, we used 20 samples each class for training, 500 samples in total for validation, and 1000 samples in total for test. This is a standard split strategy, and most existing works, including all of our baselines (GCN, GAT, GFNN, SGC), follow this strategy. It\\u2019s true that using the mean results of multi-splitting methods may help to reduce the impact of dataset partitioning on experimental results. But in order to have fair comparisons between our model and baselines, we follow the split convention.\", \"title\": \"Re: Two questions for this work\"}",
"{\"comment\": \"Thank you for the nice work. I really appreciate the idea of GFD score and the detailed analysis. Here are some aspects I care about.\\n\\n1. The improvement of AFGNN seems marginal on the real dataset.\\nAlthough I like the idea of AFGNN, which combines different filters and adaptively learn a graph-specific one, the performance improvement of it on the real dataset (CORA/CiteSeer/Pubmed) seems marginal according to Table1. (BTW, is this result statistically significant?) While in the two manual dataset, the results are excellent. So what is the reason for the performance gap? And can AFGNN be extended to be more suitable for the actual data?\\n\\n2. About the dataset split.\\nFor the three benchmark datasets, you adopt the same setting of (Kipf & Welling, 2017). I guess it should be 20 samples each class for training and 30 samples each class for developing? Is your dataset split fixed in all the experiment? Existing work [1] has proven that the split of dataset has a significant influence on the classification result. So I think using the mean results of multi-splitting methods may be a better idea for the node classification task likes [2][3].\\n\\n[1]Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, Stephan G\\u00fcnneman: Pitfalls of Graph Neural Network Evaluation\\n[2]Sun, K.; Koniusz, P.; and Wang, J Fisher-Bures Adversary Graph Convolutional Networks.\\n[3]Deli Chen, Yankai Lin, Wei Li, Peng Li, JieZhou, Xu Sun: Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View\", \"title\": \"Two questions for this work\"}",
"{\"comment\": \"Thank you for your comment and your interest in our work!\", \"title\": \"Thank you!\"}",
"{\"comment\": \"We've found a typo when we define training loss in formula 5. The GFD score should be the cumulative negation GFD score of the filter in each layer with respect to its previous layer's output. Previously we missed the GFD. The corrected version is in https://github.com/conferencesub/ICLR_2020/blob/master/DissectingGNN_ICLR%20(3)%20(1).pdf\\n\\nSorry for the mistake.\", \"title\": \"Corrections of Formula Typo\"}",
"{\"comment\": \"This paper tackles the problem of GNN property and explainability, by studying how graph convolution kernels discriminate nodes from different classes.\\n\\nI think the primary novelty in this paper is that it provides easy and straightforward interpretation on GNN convolution kernels, which has been previously thought of as hard to depict. In addition, the techniques and intuitions are extremely straightforward and elegant, which is really a surprise.\", \"title\": \"Interesting paper that provide direct and explainable insight towards GNN\"}"
]
} |
S1lBVgHYvr | Towards Certified Defense for Unrestricted Adversarial Attacks | [
"Shengjia Zhao",
"Yang Song",
"Stefano Ermon"
] | Certified defenses against adversarial examples are very important in safety-critical applications of machine learning. However, existing certified defense strategies only safeguard against perturbation-based adversarial attacks, where the attacker is only allowed to modify normal data points by adding small perturbations. In this paper, we provide certified defenses under the more general threat model of unrestricted adversarial attacks. We allow the attacker to generate arbitrary inputs to fool the classifier, and assume the attacker knows everything except the classifier's parameters and the training dataset used to learn it. Lack of knowledge about the classifier's parameters prevents an attacker from generating adversarial examples successfully. Our defense draws inspiration from differential privacy, and is based on intentionally adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters. We prove concrete bounds on the minimum number of queries required for any attacker to generate a successful adversarial attack. For simple linear classifiers, we prove that the bound is asymptotically optimal up to a constant by exhibiting an attack algorithm that achieves this lower bound. We empirically show the success of our defense strategy against strong black-box attack algorithms. | [
"Adversarial Defense",
"Certified Defense",
"Adversarial Examples"
] | Reject | https://openreview.net/pdf?id=S1lBVgHYvr | https://openreview.net/forum?id=S1lBVgHYvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KoJ5dScvHj",
"BJeYzwt2iB",
"BJxBz-t7cH",
"r1guL36CFS",
"HJemqGSatB"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744233,
1573848849420,
1572208908632,
1571900495996,
1571799691397
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2246/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2246/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2246/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2246/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a certified defense under the more general threat model beyond additive perturbation. The proposed defense method is based on adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters, which is similar to differential privacy mechanism. The authors proved the query complexity for any attacker to generate a successful adversarial attack. The main objection of this work is (1) the assumption of the attacker and the definition of the query complexity (to recover the optimal classifier rather than generating an adversarial example successfully) is uncommon, (2) the claim is misleading, and (3) the experimental evaluation is not sufficient (only two attacks are evaluated). The authors only provided a brief response to address the reviewers\\u2019 comments/questions without submitting a revision. Unfortunately none of the reviewer is in support of this paper even after author response.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank You for Your Suggestions and Comments\", \"comment\": \"We highly appreciate the reviewers for taking the time to provide helpful reviews and suggestions. We agree that the current writing can be organized better, and empirical results can be strengthened with additional experiments. Unfortunately, the rebuttal period is too short to address all these issues, so we would like to further improve the paper and submit to a future venue.\\n\\nThat being said, we firmly believe in the value of the framework proposed in our paper. In this response we would like to clarify several concerns about the framework.\", \"q\": \"More experiments are needed.\", \"response\": \"We performed additional experiments on NES (Ilyas et al, 2018) and Sign-OPT (Cheng et al, 2019). We observed similar results as Simba (Guo et al, 2019). We will include these results in the future submission. There is certainly a gap between theory (linear models) and deep neural networks; we will invest considerable effort in bridging this gap. We will include empirical analysis of the best theoretical guarantee, and comparison with other defense methods in the next revision.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes adding noise to the output of scoring function to defend from black-box attacks. This topic is actually very interesting so I enjoyed reading the paper, although it is currently only working for logistic regression and Naive Bayes and there are several unclear parts. I have several concerns about this paper, especially the claim of robust towards arbitrary perturbation.\\n\\n- My main concern is about the assumption of the attacker. Based on the discussions in Section 3, it seems the authors assume that the query complexity of attacker relies on how many queries the attacker needs to recover a w that is close enough to w*. I don't think this is the correct assumption for the current attacks --- given an example, black box attacks are trying to find some x' for each x without trying to recover or even estimate w. Therefore I wonder why the query complexity can be linked to the complexity of estimating w and is there any further assumption you need to make? \\n\\nIf the goal is to protect w, then this has been studied in several privacy/security papers and it's a different topic from adversarial attack. So the connection here is important but somehow unclear in the current draft. \\n\\n- For the experiments, to justify it is robust to attack I think it's important to try on various black-box attacks, including ZOO (Chen et al., 2017), Natural evolution strategy (Ilyas et al., 2018), Nattack (Li et al, 2019). For decision-based black box settings Boundary attack (Brendel et al., 2018) and OPT-attack (Cheng et al., 2019). \\n(Not saying you should try all of them, but I feel more than 1 attack is needed to justify the claim). \\n\\n- Some unclear points that need further clarification: \\n\\nI feel assuming there's an optimal w* that correctly classifies data is unrealistic. Is is possible to relax this?\", \"condition_1\": \"I fail to understand how is this related to q (attacker)? This seems only guaranteeing there's a majority mass of w centered at w*.\", \"condition_2\": \"What is I ? (I didn't see the definition).\\n\\n- Some related work: \\nIn DNN defense there are some related work on adding random noise. In [1], I think they only require adding a random layer which can be in the final layer of network, corresponding to adding random to the scoring function. In [2], they assume adding randomness to each layer so only adding random to final layer is a special case of that. I know the guarantees here are very different from those papers, but it will be nice to have some discussions. \\n\\n[1] \\\"Certified Robustness to Adversarial Examples with Differential Privacy\\\" Lecuyer et al., (S&P'19)\\n[2] \\\"Towards Robust Neural Networks via Random Self-ensemble\\\" Liu et al., (ECCV '18)\\n\\n======\\n\\nThank you for the response and the additional experiments. I feel the paper has some interesting ideas and could be improved by a more careful writing and slightly adjusting the claim. I will rate the current draft borderline but slightly leaning to reject.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Although this paper's title contains \\\"certified defense\\\" and \\\"unrestricted adversarial attack\\\", what I believe this paper is doing is analyzing the query complexity of query-based black-box attacks under simple linear models such as logistic regressions (or kernelized versions). The authors considered a binary classifier with the additional capability of giving \\\"no response\\\" when the confidence is low. In addition, the output of the classifier has to be perturbed by a random Gaussian vector. The authors then define several metrics including defensibility and query privacy to develop the query complexity on the considered model. The authors tested the query performance on two attacks: (1) the sign attack proposed by the authors and (2) the simba attack proposed by Guo et al.\", \"i_have_several_concerns_regarding_this_paper\": \"1. In my perspective, the title is very misleading and does not properly justify the claims made in this paper. \\\"Certified defense\\\" usually refers to consistent top-1 prediction of a perturbed data sample under a defined threat model. The paper reads like the authors are actually certifying the defined defensibility metric but without a threat model to certify. In addition, the attack setting is limited to black-box attacks (i.e. zero-order adversary), whereas in certified defense the attack assumption is white-box. \\n\\n2. It is also very unclear how unrestricted attack plays a role in the studied problem. In the introduction, the authors' definition of adversarial examples is \\\"any input is considered a valid adversarial example as long as it induces the classifier to predict a different label than an oracle classifier.\\\" But what is the oracle classifier? How do we justify the credibility of the \\\"adversarial examples\\\" in the experiments?\\n\\n3. Only two black-box attacks were compared in this paper, one is the sign attack proposed by the authors, the other is the simba attack proposed by Guo et al. To my knowledge, simba attack paper has not been published at any peer-reviewed venue. In other words, both attacks are not widely recognized attacks or methods from published papers. Therefore, the performance evaluation is not fully justified. Since there are many black-box attacks from published papers, why not do performance analysis on those attacks?\\n\\n4. Similar to 3, the classifier setting is also uncommon. Although I am happy to see classifiers have the ability to give no-response, admittedly this type of classifier is rarely used in practice, not to mention the analysis is tied with Gaussian perturbation on the output. The technical contributions can be limited if the main contribution of this paper is characterizing the query complexity (or defensibility) of an uncommon classifier with Gaussian perturbation on the output. I believe providing more insights on how the analysis can be useful to mainstream classifiers are critical and necessary.\\n\\n***Post-rebuttal comments\\nI thank the authors for the response. I hope the comments areuseful for preparing a future version of this work.\\n***\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a new certified defense strategy that considers unrestricted black box attacks. The paper provides bounds for the minimum number of queries needed for the attacker to attack the classifier successfully and then the authors prove that they can devise a defender to be robust against that attack. Here are a few points to consider:\\n1.\\tThe paper is a bit difficult to understand (also not well-structured). The specific contributions are not quite clear with respect to the existing literature (which is reviewed in a sparse manner in the paper). Especially, the novelty of the theory and analysis presented here is a bit difficult to assess. \\n2.\\t The results are not really validating the points they make in the analysis (for example in part 4.2 they talk about upper and lower bounds for number of queries as a function of \\u03c4, \\u03b1, etc. but they never provide some plots or tables regarding that in the results section).\\n4.\\tAlso, in Fig 2, it is hard to grasp the performance of the defended vs undefended classifiers with respect to the lower and upper bound that they have computed theoretically in the previous section.\\n5.\\tThey emphasize on the \\u201cdefensibility\\u201d and \\u201cquery privacy\\u201d in the analysis but they do not provide anything in the results section considering them.\\n6.\\tAs this is primarily a theoretical paper, focusing on simple classifiers is probably okay, but some sort of empirical comparison with other certified defense strategies is necessary. Just claiming that none of the existing methods would work for unrestricted attacks will not work is not sufficient. Some empirical results to show the specific advantages (e.g., at what budget the existing methods start to fail and the proposed method continues to perform well).\"}"
]
} |
SylVNerFvr | Permutation Equivariant Models for Compositional Generalization in Language | [
"Jonathan Gordon",
"David Lopez-Paz",
"Marco Baroni",
"Diane Bouchacourt"
] | Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Through a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type of compositional generalization required in human language understanding. | [
"Compositionality",
"Permutation Equivariance",
"Language Processing"
] | Accept (Poster) | https://openreview.net/pdf?id=SylVNerFvr | https://openreview.net/forum?id=SylVNerFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"i9Sjg7i5zz",
"H1eazy1iiS",
"rJgX-aA5iS",
"ByeyyTRqor",
"BJgrD305oB",
"Hyluf3R9jB",
"SJlQ3BLkqr",
"Hyl3-5ICFS",
"HkgCK-1RKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744204,
1573740309432,
1573739770765,
1573739735222,
1573739613332,
1573739535953,
1571935659134,
1571871235975,
1571840389551
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2245/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2245/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2245/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2245/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2245/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2245/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2245/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2245/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes an equivariant sequence-to-sequence model for dealing with compositionality of language. They show these models are better at SCAN tasks.\", \"reviewers_expressed_two_major_concerns\": \"1) Limited clarity of section 4 which makes the paper difficult to understand.\\n2) Whether this could generalize to more complex types of compositionality.\\n\\nAuthors responded by revising Section 4 and answering the question of generalization. While the reviewers are not 100% satisfied, they agree there is enough novel contribution in this paper. \\n\\nI thank the authors for submitting and look forward to seeing a clearer revision in the conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewers\", \"comment\": \"We thank the reviewers for their detailed reviews, and many helpful comments. We have now uploaded a revised version of the manuscript, reflecting the suggestions. We address specific comments of each reviewer separately, as responses to their reviews. The main revisions and efforts have gone in to improving the clarity of section 4, which we agree is quite dense, and not easy to follow. To this end, we have\\n\\t1. improved the notation, taking the advice of the reviewers on several points.\\n\\t2. included wording to clarify the dimensions and representations of objects and variables.\\n\\t3. included more examples and intuition regarding the proposed computations and models.\\nWe have also added a discussion regarding the computational complexity of our model, as well as addressed several other points raised by the reviewers.\\nWe believe these changes have improved the quality of the paper, and will lead to greater impact. We thank the reviewers for their time and highly useful feedback.\"}",
"{\"title\": \"Response to review (2 / 2)\", \"comment\": \"c. The G-Conv is implemented differently for different uses. We can represent words as 1-hot vectors, and elements of $g$ as permutation matrices, and apply $gw$ (to a word), $gh$ (the binary group operation) using standard matrix multiplication. With this in place, when used to embed words, we can implement the G-Conv by inheriting regular word embeddings, and letting the word embedding be the embedding of the word after each group element is applied. For convolving two representations on the group, we can similarly use the binary group operation, and inherit standard convolutional operations to implement the convolutional form in definition 4. We have added such discussions to Section 4 to help understand these operations. Further, full PyTorch code will be released following the review process implementing all of these operations.\\n\\td. As mentioned in our response to R2, the G-Conv \\u201ckeeps track\\u201d of the representation of the word under each element of the group, by stacking these representations in a matrix. This allows the model to share parameters across words in a set, while keeping track of which word gave rise to each representation. \\n\\nR3.3: Is the whole model G-equivariant?\\n\\nWe thank you for raising this issue. Yes, the complete model is G-equivariant, and this is extremely important to the crux of our argument. As discussed in [2], stacking equivariant computations is itself equivariant. Therefore, since our model is composed only of equivariant operations, the complete model is itself equivariant. We have added an explicit statement to this end in the revised manuscript.\\n\\nR3.4: Provide a visualization of the full model.\\n\\nWe thank you for this suggestion. This was our intention in providing Figure 2 in the manuscript, though we agree with you that a more detailed visualization could be useful to some readers. For the current revision, we have not been able to provide such a visualization. However, if the manuscript is accepted we will work on providing better visualizations of the model for the final version of the paper. \\n\\nR3.5: Why are words represented as infinite 1-hot sequences?\\n\\nThis is general notation that was useful for our derivations. However, we agree that this may be confusing to many readers. As such, we have taken your advice and changed the notation such that the vocabularies are strictly finite, and words are represented as 1-hot vectors rather than sequences. Thank you for pointing this out.\\n\\nR3.6: What is the dimension of of the hidden state in G-RNN? How does G act on it?\\n\\nThe hidden state in the G-RNN is a representation on the group G, such that it is represented as a matrix of size $|G| \\\\times \\\\R^k$. Here, $k$ is a hyper-parameter that is analogous to the number of hidden units in a standard RNN (in our experiments, we set $k=64$). $G$ acts on the hidden state by permuting the rows of the matrix, which is implemented as matrix multiplication with the permutation matrix representation of the elements of $g \\\\in G$. We have added clarifications and specifications of dimensions throughout section 4.\\n\\n\\n[1] B. Bloem-Reddy, and Y. W. Teh. Probabilistic symmetry and invariant neural networks. 2019.\\n[2] R. Kondor, and S. Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. 2018.\\n[3] T. S. Cohen, and M. Welling. Group equivariant convolutional networks. 2016.\\n[4] B. M. Lake, and M. 
Baroni. Generalization without systematicity: on the compositional skills of sequence-to-sequence recurrent networks. 2017.\"}",
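The description of the hidden state and group action in R3.6 can be made concrete with a few lines of PyTorch. This is an illustrative sketch of that description (not the authors' released code): a G-representation is a |G| x k matrix, a group element acts by permuting its rows, and any map applied row-wise with shared parameters commutes with that action.

```python
import torch

num_g, k = 24, 64                              # e.g. |S_4| = 24 when permuting four verbs
h = torch.randn(num_g, k)                      # hidden state: one row per group element
P_g = torch.eye(num_g)[torch.randperm(num_g)]  # a permutation matrix representing some g

phi = torch.nn.Linear(k, k)                    # applied row-wise, parameters shared across rows
lhs = phi(P_g @ h)                             # act with g, then apply phi
rhs = P_g @ phi(h)                             # apply phi, then act with g
assert torch.allclose(lhs, rhs)                # row-wise maps commute with the group action
```

The full G-Conv is richer than a plain row-wise map, but this permute-the-rows picture is the mechanism the equivariance argument for the stacked model relies on.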
"{\"title\": \"Response to review (1 / 2)\", \"comment\": \"We thank the reviewer for his time and effort in providing the detailed review of our work. We are grateful that you found the core idea of the paper well-motivated and interesting, and found most sections of the paper to be well-written. We address your comments in order below.\\n\\nMajor comments\\n\\nR3.1: Section 4 is far less clear.\\n\\nWe agree that the writing of section 4 requires improvement. We have put significant effort into revising this section in the manuscript, focusing on the comments and suggestions that you and the other reviewers made. We believe that the section is now easier to follow. Further, we appreciate your concerns regarding the reproducibility of the model and experiments. Along with improving the clarity of the writing, we would like to state that code will be made available to reproduce all of our models and experiments.\\n\\nR3.2: In addition to deeper error analysis, the authors can hold out other phrases (e.g., \\u201caround left\\u201d, and many others). \\n\\nWe chose to focus on tasks for which baseline models\\u2019 performance was available from the related literature, without which it is difficult to judge the usefulness of our proposed model. We note that due to the symmetry in the data generation process, there is not a meaningful difference between holding out \\u201caround right\\u201d and \\u201caround left\\u201d, and similarly for the verbs. \\n\\nRegarding an ablation study, our standard seq-2-seq model with a matched architecture is meant to play this role, i.e. ablating the use of permutation equivariance. We are happy to provide further ablations that we may not have considered -- is there something specific you were thinking of?\\n\\nFinally, regarding error analysis: the key purpose of the experiments is to validate our hypothesis that permutation equivariance as we have defined it leads to the improved performance in tasks requiring the type of compositional generalization required by the SCAN tasks. We believe that our experiments provide this evidence. Detailed analysis of the errors made by sequence-2-sequence models in this setting is explored in considerable depth in [4]. Regardless, we agree that further analysis of the errors made by our model may be of interest. However, relevant analyses of this form require some thought, and we are reluctant to publish analyses at very short notice. We will put in thought and effort to relevant analyses, and will provide these for future versions of the manuscript.\\n\\nR3.1: Explicitly specifying the group G.\\n\\nYou are correct in stating that the group G may be a product of several cyclic groups, one for each set of words. In particular, in the experimental section we state what group are used for each experiment ($\\\\sS_4$ over verbs for \\u201cAdd Jump\\u201d, $\\\\sS_2$ over directions for \\u201cAround Right\\u201d, and their product group for \\u201cLength\\u201d), and we have made this statement more explicit in the revision.\\n\\nR3.2: Additional options for equivariant layer implementations.\\n\\nThank you for pointing this out \\u2014 we agree that this is an important discussion missing from the submitted manuscript. We address you points in order:\\n\\ta. You are correct in stating that there are additional options for this layer. However, as discussed in [1,2], the form we use is the most general, and many of the methods you mention can be seen as special cases of the convolutional form [1]. 
Further, while the form provided by Zaheer et al. 2017 is more efficient (as you say, it relieves the need to sum over group elements), it is also far more restrictive. Examining Lemma 3 and eq. 4 of that paper, one can see that their proposed layer has only 2 free parameters (for the 1d case), and relies on extremely restrictive parameter sharing to achieve permutation equivariance. Thus, both for generality and expressivity of the layers, we opted for the convolutional form.\\n\\tb. As discussed in [3], we can apply a G-Conv to a function on an input domain which is acted on by the group elements (see e.g. Eq. 10 of [3], and Definition 4 in our manuscript). To this end, we define words as functions from indices (integers) to {0,1} (intuitively, 1-hot vectors), which are acted on by the elements in G. This allows us to fully define the G-Conv operator for words.\"}",
"{\"title\": \"Response to review\", \"comment\": \"We would like to thank the reviewer for a kind and helpful review and useful comments which we believe will significantly improve the paper. We are grateful that you have recognised the novelty in our work and are happy that you find the ideas interesting. Further, your major and minor comments are well-made, and we address these below.\\n\\nA major comment, also shared by R3, is on the density and difficulty of Section 4. To this end, we have put considerable effort into revising Section 4, and have taken your advice to add examples where possible to help clarify certain concepts. Below we address your more specific comments (which we have also addressed in the revised manuscript).\\n\\nMajor comments\\n\\nR2.1: How expensive is the method?\\n\\nThis is an important issue, and we thank you for raising it. Generally speaking, the approach scales linearly with the size of the acting group $|G|$. While this may pose an issue, we believe there are several ways to improve this computational issue, e.g., by using less expressive layers for permutation equivariance that do not require summation over the elements of G. In scaling the notion of equivariance to natural language, this is an important issue that must be considered. We have added a discussion on this point to Section 7. \\n\\nR2.2: How does the G-RNN know what the last word it generated was?\\n\\nIn the decoder model, the output of the G-attention mechanism ($\\\\tilde{a}_t$) is combined (via concatenation) with the G-embedding of the last word ($e(\\\\tilde{w}_{t-1}$). The resulting variable is then passed through a G-convolution before being processed by the G-decoder. This can be seen in Figure 2.b, and is stated on page 6 of the paper (first paragraph on the page). We agree that needs to be made more explicit, and have added wording to the section stating this more clearly. \\n\\nR2.3: The notation $g^{-1}w$ in the first equation of section 4.1.\\n\\n$\\\\psi^i$ operates on integers, or equivalently (as in our implementations), on 1-hot vectors. In order to consider our words as functions (so that we may properly define notions of convolutions on words), we represent one-hot vectors as functions from indices to $\\\\{0,1\\\\}$. With the representation of words $w$ as one-hot vectors, we can represent elements $g \\\\in G$ as permutation matrices, in which case $g^{-1}w$ results in another one-hot vector (which can then be passed to $\\\\psi^i$). We have reworked some of the notation, and added an explicit statement of this nature to section 4, which we hope clarifies the issue.\\n\\nR2.4: Some of the notation is confusing. This makes it hard for me to get the main point.\\n\\nWe agree with your point on the notation, and have reworked large parts in section 4 to make things clearer. We believe that this makes the main points easier to follow. You are correct in saying that, intuitively, the equivariance is achieved by \\u201ckeeping track\\u201d of a representation in the G-matrices for every member of the acting group. 
To this end, we have also added some examples and intuition to Section 4.\\n\\n\\nMinor comments\\n\\nR2.5: Equation numbers.\\n\\nWe have added equation numbers to assist in the reviewing.\\n\\nR2.6: This seems related to Higgins et al., 2018.\\n\\nThank you for pointing this reference out to us \\u2014 it is indeed related and interesting, and we have added a discussion on it to the paper.\\n\\nR2.7: Why didn\\u2019t performance on SCAN go to 100%?\\n\\nThis is an interesting question. We have not focused on this issue as there may be factors unrelated to the equivariance (which is our main focus) that may influence this. [1] provides a very detailed analysis of the performance of sequence-to-sequence models on the SCAN dataset, and many of those analyses carry over to our experiments.\\n\\n[1] B. M. Lake and M. Baroni. Generalization without systematicity: on the compositional skills of sequence-to-sequence recurrent networks. 2017.\"}",
"{\"title\": \"Response to reivew\", \"comment\": \"We thank the reviewer for their time and effort in reviewing our paper. We are excited that you found our experimental results \\u201cimpressive\\u201d.\", \"we_completely_agree_with_your_comment\": \"capturing global equivariances in language is an important, interesting, and challenging problem. Despite not having addressed global equivariance in this work, we believe that modelling local equivariances in language is an important first step, and our work provides a proof of concept for this idea, as well as an important link between equivariance and several forms of generalization that are of broad interest to the domain of modelling language. In the future, we certainly intend to expand the investigation to global equivariances as well.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n---\\n\\n(motivation)\\nConsider SCAN, a synthetic task where setences like S1=\\\"jump twice and run left\\\" are supposed to be translated into action sequences like A1=JUMP JUMP LTURN RUN. One might replace the word \\\"jump\\\" in S1 with \\\"walk\\\" then translate to get A2=WALK WALK LTURN RUN. If instead S1 is translated into A1 and then the action JUMP is replaced with the action WALK then we should still get the same A2. Such a translation model is equivariant to permutations of \\\"jump\\\" and \\\"walk\\\".\\n\\nThis paper aims to\\n1) define a general notion of compositionality as equivariance,\\n2) build a model which is compositional in this general sense, and\\n3) apply the model to SCAN.\\n\\n(approach - theory)\\nThis work considers this kind of compositionality as equivariance to group actions. Previous work (Kondor & Trivedi, 2018) viewed convolution as equivariance to actions by translation groups. This work views language compositionality as equivariance to actions by permutation groups applied to a set of similar words (e.g. verbs in SCAN).\\n\\n(approach - model)\\nThe paper proposes G-Embed, G-RNN, G-Attention, and G-Conv (not new) layers that are equivariant to word permuatations (e.g., switching \\\"jump\\\" and \\\"walk\\\"). It then composes these modules in a fairly standard fashion to build a new G-seq2seq model which is invariant to group actions.\\n\\n(experiments)\\nExperiments apply a G-seq2seq model to the SCAN tasks, comparing to strong baselines. G-seq2seq requires slightly more knowledge (a set of related words like verbs) than all the baselines, but less knowledge than Lake 2019.\\n1. G-seq2seq outperforms all baselines except Lake 2019 (unfair comparison) on basic compositional tasks (\\\"Add Jump\\\" and \\\"Around Right\\\").\\n2. Like other models, G-seq2seq fails on the \\\"Length\\\" task, though it is still among the best performers.\\n\\n\\nStrengths\\n---\\n\\nThe theory of compositionality as invariance to actions by permutation groups is new, interesting, and could turn out to be significant.\\n\\nThe proposed models are also new, interesting, and could be significant.\\n\\nExperiments on SCAN verify that the proposed models work about as expected, sometimes beating strong baselines in the process.\\n\\n\\nWeaknesses\\n---\\n\\nIt's hard to know what the impact of this paper will be because 1) it's unclear whether this model can generalize to more useful domains and 2) the presentation may turn some readers away. While neither of these issues can really be solved, I think paper could be substantially better in both aspects. Corresponding suggestions:\\n1) How expensive is this? It seems quite expensive because the representation size scales with the number of permutation of the set of words equivariance is with respect to. How will it scale to larger problems in terms of computation/memory costs (especially larger vocab sizes)? What knowledge is required for applying this method to new domains--i.e., how do I choose a set of permutation equivariant verbs in general? 
More discussion of these issues may help increase the impact of the paper.\\n2) See next section.\\n\\n\\nPresentation Weaknesses / Points of Confusion / Missing Details\\n---\\n\\nTo mimic a typical decoder RNN there should be another input which copies the word \\\\tilde{w}_{t-1} from the previous iteration as input, somehow fused with the attention feature \\\\tilde{a}_t. How does the G-RNN know what the last word it generated was?\\n\\nThe notation $g^{-1} w$ in the first equation of section 4.1: I think $\\\\psi^i$ is supposed to take an integer as input but $g^{-1} w$ is a permutation applied to a function. I'm not sure how to apply permutations to functions like w and it doesn't seem like the output should be an integer in any case so I find this notation confusing.\\n\\nTaking a step back, I find some of the notation (e.g., previous point) a bit confusing. This makes it hard for me to get the main point. I think the idea is that equivariant models can be achieved by tracking a representation (e.g., via rows of the G-Embed matrix) for (almost?) every member of the acting group.\\n\\nIt may help the presentation to more frequently demonstrate the general concepts with examples, though doing so may be in conflict with the general nature of the paper's theoretical contribution. I'm sure this is a familiar tradeoff, but from my perspective the paper would probably be more impactful if the presentation leaned more on examples.\\n\\nEquation numbers would be a really great addition. I found it hard to reference some of the material in writing my review.\\n\\n\\\"and the use of algebraic computation\\\"\\n* This seems specific to the chosen example whereas the rest of the sentence is trying to be general.\\n\\n\\nSuggestions\\n---\\n\\n* This seems related to [1], which uses group theory to define a notion of disentangled representation similar to compositionality. That may inspire future work and would be useful to mention in the related work.\\n\\n* Why didn't performance on SCAN get to 100%? It would be useful to spend some time addressing points of failure for the model other than compositionality.\\n\\n* The G-RNN doesn't have a bias. It's not necessary, but it may be interesting to describe why this design choice was made.\\n\\n\\n[1]: Higgins, Irina et al. \\u201cTowards a Definition of Disentangled Representations.\\u201d ArXiv abs/1812.02230 (2018): n. pag. \\n\\n\\nPreliminary Evaluation\\n---\", \"quality\": \"The theoretical contributions make sense and the experiments show they lead to useful models.\", \"clarity\": \"The technical parts of the paper are somewhat unclear, but the rest of the paper is well written.\", \"significance\": \"As discussed in the Weaknesses section this could turn out to be very significant or not significant at all, but that's true for a lot of good research.\", \"originality\": \"The general notion of equivariant neural networks and good performance on SCAN are novel.\\n\\nOverall, this is a very clear accept.\\n\\nPost-Rebuttal Update\\n---\\n\\nThere was a lot of agreement between reviewers, though we came to slightly different conclusions about ratings. Though there is significant uncertainty about the impact of this work, I still think 8: Accept is the most appropriate rating. Overall, the other reviews and author responses only increased my confidence that this paper should be accepted.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents an architecture that captures equivariance to certain transformations that happen in text, like synonym words and some simple transformation over word order.\\n\\n* General comments: \\n \\nIncreasing compositional generalization using equivariance is a very interesting idea. Sections 1-3 are well written and the solution of modeling the translation function as a G-equivariant function is well motivated. \\n\\nSection 4 is far less clear. In its current form, it is very hard to understand the model construction as well as the design choices. This section should be significantly improved in order for me to increase my score. A direct by-product of the confusing writing is that the experiments cannot be reproduced.\\n\\nThe experiments show improvement in one out of four tasks, where the single phrase \\u201cAround right\\u201d is held from the training set. There are no examples, not qualitative analysis, no ablation experiments. Overall, more evidence needed to convince that the approach is useful. In addition to deeper error analysis, the authors can hold out other phrases (e.g., \\u201caround left\\u201d, and many others). \\n\\n* Specific comments which I hope the authors address:\\n\\n1. To the best of my understanding, the authors do not explicitly specify the group G that they want to be invariant to. Is it a product of a few cyclic groups? (a cycle for each set of words that are interchangeable?)\\n\\n2. The authors suggest using G-convolution, i.e. the group convolution on G. This is in contrast to the (arguably) more popular choice of using linear layers that are G-equivariant (as in, for example, deep sets (Zaheer et al. 2017), Deep Models of Interactions Across Sets (Hartford et al. 2018),Universal invariant and equivariant graph neural networks (Keriven and Peyr\\u00e9 ) and in general convolutional layers for learning images).\", \"i_have_several_questions_regarding_this_choice\": \"2a. Can the authors discuss the differences/advantages of this approach over the approach mentioned above? It seems like the approach mentioned above will be more efficient (as there is no need to sum over all group elements)\\n2b. In order to use G-convolution, one has to use functions defined on G. Can the authors explain how they look on the input words as functions on G?\\n2c. How is the G-Conv actually implemented? \\n2d. Can the authors provide some intuition to what this operator does? \\n\\n3. Is the whole model G-equivariant? The authors might want to clearly state this. To the best of my understanding, this is the main motivation of this construction.\\n\\n4. It might be helpful for readers that are not familiar with deep learning for NLP tasks to provide a visualization of the full model (can be added to the appendix)\\n\\n5. Why are words represented as infinite one-hot sequences? Don\\u2019t we assume a finite vocabulary? This is pretty confusing.\\n\\n6. As a part of the G-RNN the authors apply a G-conv to the state h_{t-1}. What is the dimension of this hidden state? How does G act on it? \\n\\n7. 
Please explicitly state the dimensions of each input/output/parameter in the network (this can be combined with the illustration above).\\n\\n* Minor comments:\\n\\nSection 4.1: pointwise activations are in general equivariant only to permutation representations.\\nPage 2 - typo - \\u2018all short\\u2019 -> \\u2018fall short\\u2019\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work focuses on learning equivariant representations and functions over input/output words for the purposes of SCAN task. Basically, the focus is on local equivariances (over vocabulary) such that the effect of replacing and action verb like RUN in the input with the verb JUMP causes a similar change in the output. However, effects requiring global equivariances like learning relationship between \\\"twice\\\" and \\\"thrice\\\", or learning relationships between different kinds of conjunctions are not handled in this work. For learning equivariant functions over vocabulary, group convolutions are used at each step over vocabulary items in both the sequence encoder and decoder. The results on SCAN task are impressive for verb replacement based experiments and improve over other relevant baselines. Also, improvement is shown on another word replacement task (\\\"around right\\\"), which requires learning corresponding substitutions in output based on the word changes in the input. As expected, for experiments that require global equivariances or no equivariance (simple, length), the difference ion performance is not very pronounced over other baselines.\\nWhile this paper does show that modelling effects of word substitution can be handled by the locally equivariant functions, it still cannot account for more complex generalization phenomena which are likely to be much more prevalent especially for domains dealing with natural language that are other than SCAN. Therefor, I think the applicability of the proposed equivariant architectures is rather limited if interesting.\"}"
]
} |
BJg4NgBKvH | Training binary neural networks with real-to-binary convolutions | [
"Brais Martinez",
"Jing Yang",
"Adrian Bulat",
"Georgios Tzimiropoulos"
] | This paper shows how to train binary networks to within a few percentage points (~3-5%) of the full-precision counterpart. We first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, for re-scaling the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at https://github.com/brais-martinez/real2binary | [
"binary networks"
] | Accept (Poster) | https://openreview.net/pdf?id=BJg4NgBKvH | https://openreview.net/forum?id=BJg4NgBKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JbQRtKvp0",
"rkedHE72sS",
"B1eQf7Whor",
"HygFsW-hir",
"rJe8vWZ2iH",
"Skl1k-bniB",
"B1eNcAxhsS",
"S1lgK6m-oB",
"S1eork6liS",
"SygUKEvt5S",
"HyeRiKbtcH",
"B1lW7dAwqH",
"B1lneTXwcH",
"Hkls16xPqS",
"HJx2ejZS5r",
"rygS_Rv15B"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744176,
1573823551544,
1573815051468,
1573814689121,
1573814621964,
1573814487330,
1573813899782,
1573105015645,
1573076802941,
1572594814052,
1572571558240,
1572493337168,
1572449523690,
1572437219309,
1572309748371,
1571941996919
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2244/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2244/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2244/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2244/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2244/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2244/Authors"
],
[
"~da_quexian1"
],
[
"ICLR.cc/2020/Conference/Paper2244/Authors"
],
[
"~Zhaohui_Yang1"
],
[
"ICLR.cc/2020/Conference/Paper2244/Authors"
],
[
"~Zhaohui_Yang1"
],
[
"ICLR.cc/2020/Conference/Paper2244/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2244/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2244/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2244/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes methodology to train binary neural networks.\\n\\nThe reviewers and authors engaged in a constructive discussion. All the reviewers like the contributions of the paper.\\n\\nAcceptance is therefore recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"All my questions have been answered.\", \"comment\": \"R4.1: yes, 2bits NNs and 1bit NNs are not straightforwardly comparable. R4.4 this makes the comparison between models difficult as the overall performance (e.g. latency) highly depends on the implementation (see the answer from Da Quexian: about latency on real devices). R4.3 as data augmentation and mix-up are common practice, you are indeed bound to use them.\\n\\nBased on the above answer and the overall discussion, it seems that the authors should be able to resolve most of the problems mentioned by the reviewers. Furthermore, the feedback should allow improving the writing, which remains one important downside of the paper.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"R1.1 \\u201cMy biggest concern is that ResNet is itself a very wasteful architecture in terms of compute and parameter count. If the goal is to develop a compute- and memory-efficient architecture, it would be good to also consider real-valued-network baselines that were proposed with computational and/or memory efficiency as a design goal.\\u201d\\n\\nTraining a highly accurate binary ResNet has been an open problem for many years and to our knowledge, our work is the first one that bridges most of the accuracy gap. We believe we are taking a very important step forward considering the actual current state of research on the topic. Moving into more compute- and memory-efficient architectures is definitely one of our next goals. To our knowledge we are not aware of any existing work aiming at solving this very challenging problem.\\n\\n\\nR1.2 On specific choices for student-teacher, and new scaling network, being adhoc\\n\\nWe believe that the loss of Eq. (2) is relatively straightforward: its objective is to transfer normalized attention maps from the teacher to the student. See also R4.2. \\n\\nFurthermore, please see R3.7 for a more detailed discussion on 2-stage vs. multi-stage teacher-student optimization strategy. Moreover, the architecture for the scaling block works as follows: it computes a global spatial average (since we\\u2019re interested in computing a single factor per channel) which is then processed by 2 FC layers implementing a simple bottleneck. Thank you for this comment we will further clarify in the paper.\\n\\n\\nR1.3 On \\\"this implies a reduction of 32\\u00d7 in memory usage\\\", assuming the parameter count is held constant:\\n\\nWe hold the parameter count constant, except for the scaling function parameters. which account to around 200k extra parameters, less than 2% those of a a ResNet18, which has 11.7M.\\n\\n\\nR1.4: On \\u201cFig 1 right: This is motivated in terms of preserving scaling factors that are lost by the binarization, but the functional form for this makes it look a lot like a learned gating operation. If the sigmoid is dropped from the architecture, does performance worsen? It would be nice to see some discussion of the degree to which this is helpful because it reverses information loss due to binarization, vs. introduces a new architectural feature which is itself helpful.\\u201d\\n\\nThis is an interesting suggestion, and we thank the reviewer for taking the time to think about the methodology. We have tried without the sigmoid at the end and found performance to degrade by a large margin for the final model (stage 2), while stage 1 works only slightly worse (0.5% worse). \\n\\n\\nR1.5 On what \\\"double skip connections\\\" are. \\n\\nApologies, we will add the following to the manuscript to clarify:\\n\\n\\u201cA double skip connection modifies the ResNet basic block to have a skip connection per convolution rather than one per block. This is illustrated in Fig.1, left.\\u201d\\n\\n\\nR1.6 On the functional form of Eq. 2:\\n\\nQ^j(h,w) captures the energy of the activations at location h,w. The important thing to note here is that Q^j is normalized in Eq. 2 by its norm. This is required so that the scaling of the activations does not change the representation. 
Given that they are normalized, the L2 norm is a way of comparing them - irrespective of how they were computed.\\n\\n\\nR1.7: On using the term \\\"Inference\\\": \\n\\nChanged to \\u201care determined by data\\u201d, thank you!\\n\\n\\nR1.8: On using fractional vs. percentage figures.\\n \\nWe agree that, although they give complementary information, changing between the two is not good practice. We\\u2019ve given the paper a pass to homogenize this.\\n\\n\\nR1.9: On comparing with compute- and memory-efficient architectures\\n\\nIn our ResNet-18 implementation, we used a very expensive 7x7 convolution for the first layer (118M FLOPs). Note here that the first convolution is always real-valued even for binary networks. This was done in order to make a fair comparison with previous work on binary networks, which also uses the same ResNet-18 implementation. However, the first convolution can be replaced, with a negligible drop in performance, by more efficient layers (e.g. like the ones in the daBNN paper), which reduces the complexity of the first layer to 42M FLOPs. Considering this, the comparison with MobileNet-V2, one of the state-of-the-art efficient architectures, is:\\n\\nMobileNet-V2: -- BOPs, 300M FLOPs\\nOurs: 1676M BOPs, 150M FLOPs\\nOurs with stem as in daBNN: 1676M BOPs, 80M FLOPs\\n\\nWe note that on a modern x64 CPU, using bit-packing and SIMD, the theoretical throughput is around 64 binary instructions per clock, making a FLOP roughly equivalent to 64 BOPs. This is in line with the theoretical speedups reported in XNOR-Net (Rastegari et al.). Finally, we note that the model we trained for our paper produced an accuracy of 65.4%, while a fully real-valued MobileNet-V2 gets 72.0%.\"}",
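Based on the description in R1.2 and R1.4 above (global spatial average, a two-layer FC bottleneck, and a final sigmoid), a minimal PyTorch sketch of the data-driven scaling block might look as follows. This is our reading of the rebuttal, not the authors' released code; the class name and exact placement are assumptions, and the reduction ratio of 8 is taken from the authors' reply to Zhaohui Yang later in this thread.

```python
import torch
import torch.nn as nn

class ScalingBlock(nn.Module):
    """Predicts per-channel scaling factors from the real-valued activations
    that enter the binary convolution (i.e. before the binarization step)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # factors in (0, 1)
        )

    def forward(self, x_real, binary_conv_out):
        b, c, _, _ = x_real.shape
        s = self.fc(self.pool(x_real).view(b, c)).view(b, c, 1, 1)
        return binary_conv_out * s                   # channel-wise re-scaling
```

Because the factors are computed from each input, they differ per example at test time, which is the distinction the authors draw against the fixed learned scalars of prior work.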
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"R2.1: On comparing with other compression techniques, and exploring their combination with binarization:\\n\\nIn general, the 32x compression provided by binary models is hard to beat. However, we opted not to include comparison with non-binary methods because we felt that providing a fair comparison between different approaches would be extremely complex. The suggestion for a combination is very interesting but it requires a comprehensive investigation which could fit into a new paper.\\n\\n\\nR2.2: On latency and real-timing results:\\n\\nPlease see R4.4: practical performance depends dramatically on hardware implementation which is not available to us at the moment.\"}",
"{\"title\": \"Response to Reviewer #3 (part 2/2)\", \"comment\": \"R3.4: On not being clear how the data-driven way helps:\\n\\nThe scale function is the only difference between the full Real-to-Bin and the strong baseline + att trans. + kd entries on Table 3. Below are the results for ImageNet (the binary downsample results were not included in the original submission):\\n\\nImageNet - Binary downsample top1/top5\\nStrong baseline (SB) 57.746 / 80.406\\nSB + Att trans + KD (TS) 59.386 / 81.854\\nReal-to-Bin (SB + TS + Scale Gating function) 62.106 / 83.996\\n\\nImageNet - Real downsample top1/top5\\nStrong Baseline (SB) 60.9 / 83.0\\nSB + Att trans + KD (TS) 63.1 / 84.8\\nReal-to-Bin (SB + TS + Scale Gating function) 65.4 / 86.2\\n\\nThis shows a 2.7% gain for the binary downsample gain in terms of top-1 accuracy, and a 2.3% for the real downsample case. This is a very large performance gain, especially considering that the baseline is already extremely well optimized.\\n\\nSimilarly, for CIFAR-100:\\n\\nCIFAR - Binary downsample top1/top5\\nStrong baseline (SB) 68.03 / 88.28\\nSB + Att trans + KD (TS) 71.93 / 90.93\\nReal-to-Bin (SB + TS + Scale Gating function) 73.49 / 91.55\\n\\n\\nCIFAR - Real downsample top1/top5\\nStrong baseline (SB) 70.49 / 88.81\\nSB + Att trans + KD (TS) 74.62 / 91.79\\nReal-to-Bin (SB + TS + Scale Gating function) 76.15 / 92.67\\n\\nWe found however that optimization of the scaling function is hard without the extra help from the teacher student. That is shown in Table 3 as SB + G (strong baseline + Gating function). We believe that we were not clear enough there, as this same comment was raised in a previous post by Zhaohui Yang. We apologize for the confusion.\\n\\n\\nR3.5: On improving the writing of the paper:\\n\\nWe are giving the paper a thorough pass and will improve the clarity of the text.\\n\\n\\nR3.6: On real-valued and binary maps, and their visualizations:\\n\\nWe thank the reviewer for the suggestion regarding their visualization, we will include a figure in the supplementary material. \\n\\nYes, maps are more similar. The difference with real-valued attention maps for the validation set goes down from 0.0073 to 0.0012 - more than 6 times lower.\\n\\nWe compare the activation maps at the end of each stage. Thus, they both have the same shapes and encode information at corresponding points of the network.\\n\\nPlease see also R3.2.\\n\\n\\nR3.7: On 2-stage vs. multi-stage teacher-student optimization strategy, and how they are combined:\\n\\nWe will clarify this, as we agree that this could be explained better in the text:\\nThe idea of the 2-stage optimization strategy is to first train a model with binary activations/real-valued weights, and then train a model with both activations and weights binarized using the stage 1 model as initialization. When using teacher-student, we do train a sequence of real-valued teacher-student networks so that the teacher used for stage 1 is more adequate.\"}",
"{\"title\": \"Response to Reviewer #3 (part 1/2)\", \"comment\": \"R3.1: On paper novelty, and teacher-student \\n\\nTo our knowledge, we are 1) the first to construct and train appropriately a very strong baseline which outperforms all previous work, and then 2) propose the ideas of Sections 4.2 and 4.3 that show how to surpass this baseline by a large margin (~4.5% on ImageNet, ~5.5% on CIFAR-100). \\n\\nAs pointed out by the reviewer there are several papers on student-teacher networks, but to our knowledge none of them has been successfully applied for training a binary network reporting such high accuracy gains as in our work. To this end, we have devised the multi-stage training described in Section 4.2, which results in much improved performance. \\n\\nMoreover, in Section 4.3 we propose a novel architecture (the input-dependent scaling function) that improves performance by a large margin for both CIFAR-100 and ImageNet, on top of an already well-optimized network (2.7% and 2.3% extra top-1 performance on ImageNet for binary and real-valued downsample respectively).\\n\\n\\nR3.2: On why a real-valued network would be able to teach a binary network, since they have quite different information flow:\\n\\nThank you for providing this comment. It is not true that the binary and real networks have such different flow. When analyzed, the features from a binary network follow the same typical expected structure: the low level features represent edges while the higher ones more abstract, class specific, concepts. As such it is not surprising that by having an explicit reference to the desired distribution of a real-valued model we can further boost the performance.\\n\\nThat said, we agree with the reviewer that there is a gap between the two that renders a direct application of standard teacher-student difficult. That is exactly why we devised the multi-stage optimization. Through this procedure, the teacher network used to train the stage 1 network (binary activations, real-valued weights) will have a much closer flow and thus provide better improvement. Similarly, for stage 2, we use stage 1 network as teacher as this has a closer flow than a full precision network.\\n\\nWe will show feature maps from the real and binary network in the supplementary material, also providing discussion based on your question.\\n\\n\\nR3.3: On re-scaling, and comparison of our approach with previous work:\\n\\nRecent papers have shown that learning scaling values through backpropagation improves performance (see Xu & Cheung 2019 and Bulat & Tzimiropoulos 2019 on our submission). Our baseline does indeed use such scaling variables. When we show improvements due to the input-dependent scaling function, we are comparing against the best-performing scaling strategy to date.\", \"the_following_paragraph_from_our_paper_describes_the_differences_between_the_scaling_factors_used_in_prior_work_and_in_our_paper\": \"\\u201cPrevious works have shown the effectiveness of re-scaling binary convolutions with the goal of better approximating real convolutions and in turn achieving large accuracy gains. XNOR-Net (Rastegari et al., 2016) proposed to compute these scale factors analytically while (Bulat & Tzimiropoulos,2019; Xu & Cheung, 2019) proposed to learn them discriminatively in an end-to-end manner, showing additional accuracy gains. For the latter case, during training, the optimization aims to find a set of fixed scaling factors that minimize the average expected loss for the training set. 
We propose instead to go beyond this and obtain input-dependent scaling factors \\u2013 thus, at test time, these scaling factors will not be fixed but rather inferred from data\\u201d\\n\\nFinally, the following paragraph from our paper provides intuition about why our method is needed and works:\\n\\n\\u201cLet us first recall what the signal flow is when going through a binary block. The activations entering a binary block are actually real-valued. Batch normalization centers the activations, which are then binarized, thus losing a large amount of information. Binary convolution, re-scaling and eventually PReLU follow. We propose to use the full-precision activation signal, prior to the large information loss incurred by the binarization operation, to predict the scaling factors used to re-scale the output of the binary convolution channel-wise.\\u201d\\n\\n\\u201cWhy is this important?: An optimal mechanism to modulate the output of the binary convolution clearly should not be the same for all examples as in Bulat & Tzimiropoulos (2019) or Xu & Cheung(2019). Note that in Rastegari et al. (2016) the computation of the scale factors depends on the input activations. However the analytic calculation is sub-optimal with respect to the task at hand. To circumvent the aforementioned problems, our method learns, via backpropagation and for the task at hand, to predict the modulating factors using the real-valued input activations. By doing so, more than 1/3rd of the remaining gap with the real-valued network is bridged.\\u201d\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"R4.1: On comparison with TTQ:\", \"it_is_not_straightforward_to_compare_ttq_with_our_method_because_of_the_following_reasons\": \"1) Our setting is significantly more challenging: TTQ uses 2 bits for the weights and full precision for the activations. We use 1-bit for both weights and activations. \\n2) TTQ uses a bigger network, called ResNet-18B, for their ImageNet experiments, which has 50% more channels than the standard ResNet18 used in our work.\\n \\nIn any case, the main methodological novelty of TTQ, learning the quantization points, is also standard on binary networks since the XNorNet paper (Rasteragi et al, ECCV 2016).\\n\\n\\nR4.2: On equation 2, \\\"transfer points\\\", and normalization:\\n\\nA transfer point is the location within the network at which the attention maps are computed, and compared between the teacher and the student. We used 4 transfer points each at the end of a stage (spatial resolution). Normalization is used so that the attention maps do not depend on the magnitude of the response maps. These magnitudes are different throughout the network. Thank you for this comment, we will clarify this in the paper. \\n\\n\\nR4.3: On Data augmentation and Mix-up:\\n\\nWe believe it is necessary to use them to show how far binary networks can get in terms of accuracy. If previous methods didn\\u2019t use them, we believe it is something worth reporting in our paper. Furthermore, all data augmentations used are standard for the full precision case.\\n\\nMoreover, we used these techniques to train our strong baseline upon which we improve later on by 4-5%. So overall, we believe comparison is fair. Exactly the same mixup and data augmentation techniques are considered for binary and real-valued networks. \\n\\n\\nR4.4: On including TTQ + adding a 4th column that measures the overall performance:\\n\\nTTQ has no BOPS (xnor + popcount ), as TTQ has real-valued activations, and their flop count would be the same as for the real-valued base architecture used. Please see also R4.1.\\n\\nUnfortunately we cannot provide speed-up related figures because we don\\u2019t have a hardware implementation. This is crucial to achieving the speed-ups, which depend on the implementation. Existing deep learning frameworks don\\u2019t support this kind of quantization (they support up to 8 bits). \\n\\n\\nR4.5: On table 3 and abbreviations:\\nWe are currently revising the manuscript, thank you for pointing this out.\\n\\nR4.6: On comparison with TTQ:\\nPlease see R4.1 and R4.4.\"}",
"{\"title\": \"About latency on real devices\", \"comment\": \"I'm the author of daBNN [1], a highly optimized BNNs inference framework for mobile.\\n\\nIt is not so easy to measure the latency of BNNs on real devices. The latency heavily depends on the implementation. One needs to re-implement convolutions using bit-wise operations by assembly if he/she wants them to be fast. For example, on BMXNet [2], which only uses pure c/c++, BNNs are even slower than full-precision networks in most cases, despite their high theoretical speedup.\\n\\nAs a result, it is normal that the authors of this paper didn't report the real latency -- there is no highly optimized implementation for BNNs until daBNN.\\n\\nHowever, I'd like to recommend daBNN if the authors are also curious about the real latency of their BNNs. daBNN is many times faster than BMXNet and full-precision TF Lite on mobile phones. I will provide the necessary help on GitHub if someone wants to use daBNN and has trouble. \\n\\n[1] Jianhao Zhang, et.al. daBNN: A Super Fast Inference Framework for Binary Neural Networks on ARM devices, 2019.\\n[2] Haojin Yang, et.al. BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet, 2017\"}",
"{\"title\": \"the provided paper was uploaded to Arxiv after ICLR 2020 deadline + our block is different from gated residual\", \"comment\": \"1) The paper mentioned was uploaded to arXiv after the ICLR deadline and, to the best of our knowledge, it is not published in a peer-reviewed venue.\\n\\n2) The method described in the provided paper is different to our method: \\n\\nTheir method learns scaling factors \\u2014 applied to the data going through the skip connection \\u2014 that DO NOT DEPEND on the input data: they learn one scalar per channel, which is the same irrespective of the input. \\n\\nOur scaling factors \\u2014 applied to the output of the binary convolution \\u2014 depend directly on the input data, so they will be different for each example. There is an actual function (an encoder-decoder mechanism) that computes them as a function of the batch-normalized input.\"}",
"{\"title\": \"I'm clear now, thanks!\", \"comment\": \"I'm clear now, thanks!\"}",
"{\"title\": \"Thank you for the comments\", \"comment\": \"Hi Zhaohui,\\n\\nthank you for taking the time to read our paper, for your positive words, and for the good questions. Please do let us know if some point is not sufficiently clarified.\", \"q1\": \"Ablation on ImageNet\", \"r1\": \"The ablation study is done on CIFAR100 for computational reasons. We test two settings (binary downsample and real downsample) and each model involves (at least) two training rounds. That said, we have some results on ImageNet on what we believe are the most relevant configurations of the ablation study. They however are not as comprehensive as for CIFAR100 due to the aforementioned reasons:\\n\\nBinary downsample (top1/top5)\\nStrong baseline 57.746 / 80.406\\nStrong baseline + Att trans + KD 59.386 / 81.854\\nReal-to-Bin (full model) 62.106 / 83.996\\n\\nReal downsample (top1/top5)\\nStrong Baseline 60.9 / 83.0\\nStrong baseline + Att trans + KD 63.1 / 84.8\\nReal-to-Bin 65.4 / 86.2\\n\\nThe difference between the full model (referred to as \\\"Real-to-Bin\\\") and the \\\"Strong baseline + Att trans + KD\\\" is the use of the gating function to re-scale the convolution output. This gives approx 2.7 and 2.3 extra top1 for the binary/real downsample cases. \\nAlso, it is possible to see that the binary downsample follows a similar trend to the real downsample, with the baseline also being already SOTA and each configuration improving results by a large margin\", \"q2\": \"Binarization function used\", \"r2\": \"We simply use the sign function. We are aware that several works have reported improvements when using approximations, and that might help our method even further. However, we haven't experimented with these options.\", \"q3\": \"result of ImageNet w/o the re-scale factor\", \"r3\": \"the results included above (on R1) clarify the crucial importance of the re-scale branch. Specifically:\\n\\nbinary downsample without/with: \\n 59.4 vs 62.1 (2.7 top1 improvement)\\n\\nreal downsample without/with: \\n 63.1/65.4 (2.3 top1 improvement)\\n\\nIt is true that table 3 might give the impression that the gating function does not work on some cases due to the \\\"SB+G\\\" entry. We should have made this clearer on the text. What we wanted to show with the \\\"SB+G\\\" entry is that attention transfer is fundamental so that the re-scale branch has healthy gradients and a clear target. Otherwise, the optimization is not successful.\\n\\nTable 3 shows that the re-scale branch adds 1.5% on top of the best-performing CIFAR-100 model (74.62 without, 76.15 with). This is consistent roughly with the performance on imagenet - it is just that the gap with respect to the full precision network on CIFAR is smaller\", \"q4\": \"hyper-parameter $r$ and parameters increase\", \"r4\": \"The parameter increase is shown in table 1, where operations are split between full precision operations and binary operations. The difference when using the gating function vs. not using it is 1.544*10^8 vs 1.564*10^8 FLOPs (binary operations stay the same at 1.676*10^9). Thus, it is a very marginal increase on computational cost in exchange for >2 top1 performance increase\\n\\nThe reduction ratio is set to 8\", \"q5\": \"The re-scale branch used for the downsampling layers' shortcut or all the shortcuts?\", \"r5\": \"Downsampling layers do not have the gating (re-scaling) layer. It is used on all of the 3x3 convolutions and only on those.\"}",
"{\"title\": \"Interesting paper\", \"comment\": \"Hi,\\n\\nI found this paper to be very interesting, and the results are very strong. This paper proposed a strong baseline, a multi-stage distillation strategy, and a re-scale branch on the shortcut. Some concerns about the paper and experiments are as follow,\", \"about_baseline\": \"1. The ablation study of the results on the ImageNet using ResNet18? \\n1.1 Strong Baseline - 2 stage optimization strategy\\n1.2 Strong Baseline - 2 stage optimization strategy and replace bn-conv-prelu by conv-bn\\n2. What is the Binarization function? The sign function?\", \"about_re_scale_branch\": \"1. From Tab. 3, it seems that the re-scale does not guarantee improvements on accuracy. What is the result of ImageNet w/o the re-scale factor?\\n2. What is the hyper-parameter $r$ used in the re-scale branch, and what about the number of parameters increase?\\n3. The re-scale branch used for the downsampling layers' shortcut or all the shortcuts?\\n\\nThanks!\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"A. Summary\", \"problem\": \"Binary NNs promise to make neural networks compatible with devices that only have access to limited computational resources (fundamental to embed NNs in mobiles or IoT devices). However, the loss of computational accuracy comes with a great loss of performances. Designing the right learning algorithm and the right binary architecture remains an open issue.\", \"contributions\": \"1. this paper reviews the current literature in Binary Neural Networks and the authors compile the existing methods to build a strong baseline. The strong baseline outperforms existing methods, which is impressive in itself.\\n2. the authors introduce a novel layer-wise objective that pushes the binary activations to match the real activations, which are given by a teacher model (real-to-binary). This is simpler and more efficient than existing alternatives.\\n3. they propose to train the real-to-binary model using a multi-stage teacher-student procedure. \\n4. the authors introduce a data-dependent re-scaling term for the binary activations.\", \"experiments\": \"1. SOTA on ImageNet for BinaryNNs: The combined methods allow bridging the gap between real-valued and binary-valued classifiers on ImageNet (65.4% vs 69.3% top-1 acc).\\n2. Comparisons with existing methods: On ImageNet, the model is compared with a complete list of alternative methods (low-bit quantization, larger binary nets, binary nets and real-valued nets). This is however unclear how the method compares with TTQ.\\n3. An ablation study is performed on CIFAR-100. It tests the gains that come with the attention matching, the data-dependent gating mechanism and the multi-stage teacher-student mechanism.\\n\\nB. Decision\", \"6\": \"Weak Accept.\\n\\nC. Argumentation\\nThe paper clearly states the problem and what are the contributions. The solution is mostly iterative but clearly brings the binary NNs one step forward. The claims are supported by a comparison with a great variety of baselines and an ablation study. Furthermore, it is laudable that great efforts were put into designing such a strong baseline. \\n\\nHowever, the paper could be easier to read and some points remain unclear:\\n1. Is it possible to compare TTQ on the same scale? this is difficult to precisely asses how real-to-binary convolutions compete with this method in the paper.\\n2. The equation 2. is intuitive yet not perfectly clear. What are the \\\"transfer points\\\"? Why using such a normalization?\\n\\n\\nD. Feedback\\n1. Data augmentation and Mix-up: is it necessary to use them here as they should yield improvements for all methods? `\\n2. table 1: is it possible to include TTQ? suggestion: is it possible to add a 4th column that measures the overall performance (estimated runtime, speedup?)\\n3. table 3: please define all the abbreviations.\\n\\nE. Question\\n1. Could you please draw a more precise comparison between TTQ and your method?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the problem of training binary neural networks. The authors first provide a strong baseline by assembling a group of training techniques that appeared in recent work that achieves state-of-the-art performance. Then the authors proposed two methods to further boost the performance gain. The first method is to use a teacher-student mechanism that uses a fully real-valued network to teach a binary network. The process is divided into three stages involving two intermediate models to reduce the gap within each teacher-student pair. The second method is to learn a re-scale factor for binary activations using real-valued activations from the previous block. Experiments show that the proposed methods improves the performance on ImageNet and CIFAR-100.\\nThe experimental results seem promising. The proposed model reduces the gap to real-valued network to within 3-5%. However, the novelty of the paper is limited and why the proposed methods would help increase the performance gain is not well demonstrated. The teacher-student model is a well-known technique for vision tasks. The authors observed in Section 4.2 that it is very important for the teacher and student to have similar architectures, but did not explain the more important question that why a real-valued network would be able to teach a binary network, since they have quite different information flow. For re-scaling, the authors did not give a detailed comparison between their approach and previous work, and it is not clear how the data-driven way helps. As the ablation study shows the gating function actually hurts for binary down-sampling layers.\\nThe writing of the paper needs improvement. A workflow/framework/algorithm description is helpful to better understand the whole framework, and the methodology part in Section 4 requires more details. Some notations need to be defined or clarified. For instance, in Figure 1 Left, what is A? The definition is given only in Section 4.2, where it is not stated in detail either. In Figure 1 Right, what is r? In Table 3, what do the abbreviations mean respectively?\", \"some_specific_questions\": [\"Why the real-valued teacher can help train the binary network while they have different information flow? What is the intuition behind the consistency assumption?\", \"The authors did not visually show the maps of real-valued and binary activations. How are they aligned in the proposed framework? And are they more similar with each other compared with previous approaches?\", \"In Section 4.1 for Initialization a 2-stage optimization strategy is used, while in Section 4.2 a multi-stage teacher-student optimization strategy is used. How are the two strategies combined?\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper is on building binary network. The steps for building binary network takes several components: traditional strategy to binary/optimize a model (like data augmentation, binary initialization using 2-stage optimization, etc), real-to-binary attention matching that tries to match the output of real values and binarized model, and data-driven channel rescaling to better approximate real convolutions. All these components together makes a strong binary network.\\n\\nAlthough there are so many steps/tricks mentioned in the paper, I think the explanation and reason for each step is easy to understand. The outcome of the model is quite impressive-- 5% improvement over the best binary model. \\n\\nIt would be interesting to compare with some other compression techniques, like low-rank, sparsity, weight sharing, etc. Or it will be also interesting to see how these techniques can combine with binary model to further compress the model.\\n\\nAlso it would be interesting to see how the latency changes using the proposed binary model. As I can see from Table 1, which outlines the FLOPS and BOPS, to my understanding BOPS is much faster than FLOPS, so in the latency wise, the proposed model will be much faster than the original model for inference. Therefore I am looking forward to the real-timing results.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper greatly reduces the gap between binarized and real valued imagenet, using a variety of techniques. The most significant contributions of this paper are engineering based, and the careful combination and integration of approaches from previous papers. I believe that this is of significant practical importance to the field. I particularly appreciate the effort put into developing a very strong baseline that combined ideas from many previous papers!\\n\\nMy biggest concern is that ResNet is itself a very wasteful architecture in terms of compute and parameter count. If the goal is to develop a compute- and memory-efficient architecture, it would be good to also consider real-valued-network baselines that were proposed with computational and/or memory efficiency as a design goal.\\n\\nAdditionally, the specific choices for the new student-teacher loss, and new scaling network architecture, seem fairly ad-hoc.\", \"detailed_comments\": \"\\\"this implies a reduction of 32\\u00d7 in memory usage\\\" , assuming the parameter count is held constant.\", \"fig_1_right\": \"This is motivated in terms of preserving scaling factors that are lost by the binarization, but the functional form for this makes it look a lot like a learned gating operation. If the sigmoid is dropped from the architecture, does performance worsen? It would be nice to see some discussion of the degree to which this is helpful because it reverses information loss due to binarization, vs. introduces a new architectural feature which is itself helpful.\\n\\nAdd a sentence describing what \\\"double skip connections\\\" are. I wasn't familiar with this phrase.\\n\\neq. 2:\\nThis functional form is pretty weird.\\nWhy is Q a square norm rather than a norm? Square error on an already-squared property is an unusual choice.\\nWhy is the denominator itself a norm? Taking the norm of a square norm is similarly an unusual choice. (eg, why not just take an average or sum over Q) \\nSay what Q_s and Q_t are (student and teacher network from context)\\n\\n\\\"thus, at test time, these scaling factors will not be fixed but rather inferred from data\\\" nit: Would not generally call this an inference process. \\\"Inference\\\" typically refers to values that are computed indirectly (eg by Bayesian reasoning), while in this case the values are computed directly. Would rather say that scaling factors are a function of data, or are determined by data, or similar.\\n\\n\\\"By doing so, more than 1/3 of the remaining gap with the real-valued network is bridged.\\\" text is shifting back and forth between using % and fractional gap to describe benefits. Would just use one measure consistently.\", \"computational_cost_analysis\": \"This is very useful.\\nNote though that ResNet is a very wasteful architecture in terms of compute! It would be good to include a comparison to imagenet architectures that have computational and memory efficiency as a design goal. (eg, MobileNet comes to mind)\\n\\nVery nice on the ablation studies.\"}"
]
} |
r1e7NgrYvH | DO-AutoEncoder: Learning and Intervening Bivariate Causal Mechanisms in Images | [
"Tianshuo Cong",
"Dan Peng",
"Furui Liu",
"Zhitang Chen"
] | Some fundamental limitations of deep learning have been exposed, such as a lack of generalizability and vulnerability to adversarial attack. Researchers have instead realized that causation is much more stable than association relationships in data. In this paper, we propose a new framework called do-calculus AutoEncoder (DO-AE) for deep representation learning that fully captures bivariate causal relationships in images, which allows us to intervene in the image generation process. DO-AE consists of two key ingredients: causal relationship mining in images and intervention-enabling deep causal structured representation learning. The goal here is to learn deep representations that correspond to the concepts in the physical world as well as their causal structure. To verify the proposed method, we create a dataset named PHY2D, which contains abstract graphic descriptions in accordance with the laws of physics. Our experiments demonstrate that our method is able to correctly identify the bivariate causal relationship between concepts in images and that the learned representation enables do-calculus manipulation of images, which generates artificial images that might break physical laws depending on where we intervene in the causal system. | [
"Causality discovery",
"AutoEncoder",
"Deep representation learning",
"Do-calculus"
] | Reject | https://openreview.net/pdf?id=r1e7NgrYvH | https://openreview.net/forum?id=r1e7NgrYvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"rpuGy8Snh9",
"HyxLcHJnsB",
"BJxLxHJ2sr",
"ByePp4yniH",
"SkgN4O6TFB",
"SyxvfpcntB",
"HyeKwG52tH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744145,
1573807501975,
1573807341834,
1573807294643,
1571833899583,
1571757326725,
1571754592778
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2243/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2243/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2243/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2243/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2243/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2243/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The idea of integrating causality into an auto-encoder is interesting and very timely. While the reviewers find this paper to contain some interesting ideas, the technical contributions and mathematical rigor, scope of the method, and the presentation of results would need to be significantly improved in order for this work to reach the quality bar of ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"We thank reviewer #3 for his review.\\n\\n-We will enhance the writing ability to make the narrative of the paper more professional and rigorous.\\n\\n-There are a number of variables and complicated causal graph in natural images, we want to use the artificial data to conduct a preliminary exploration. Our next work will focus on discovering causal relationship in real world. \\n\\nThank you again for your time sincerely.\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"We are grateful for R2's constructive suggestions and we believe they could greatly improve our paper.\\n\\n-As your proposition said, though for our dataset the two variables are given, it is quite different from causal discovery from measurement data. Our model fits for high dimensional visual data, which is a main contribution of this work. \\n\\n-We separate the image into two part for two reasons:\\n 1.Reduce the mutual interference between variables. \\n 2.Our paper is mainly based on the following assumption: The Kolmogorov complexity of conditional and marginal distributions is smaller in causal direction than that in anti-causal direction. In Figure 2, the part I of DO-AE is to estimate K(P_x), the part II is to estimate K(P_{y|x}). The outputs of these two part make up the whole image, we want to intervene in whole images generation process, so two parts of the DO-AE are indispensable.\\n\\n-We will access related work you recommend\\uff0cand cite the related work about causality with VAE.\\n\\nThank you again for your detail comments.\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"We thank reviewer #1 for his comments of our paper.\\n\\n-First of all, we are sorry about the grammatical errors in this paper, we will fix them to increase the readability of the paper.\\n\\n-Our aim is to construct the causal graph from the given images. This kind of exploration is challenging, we provide a physics dataset to explore the possibilities that this problem can solve. Our model functionality is built on the presence of known arrows\\uff0cand the causal graph reflected by the images in the dataset has an arrow. DO-AE focus on learning the direction of the arrow(No arrow situation is not within our consideration). By the way, the right causal graph for the spring example is: A <-> B.\\n\\n-We decide the net is rich or not by determining the quality of the generated images visually and intuitively. We agree that increasing the statistical experiments and setting quantitative estimate index could make the results more convincing. We will improve this part in next version. \\n\\nThank you again for your feedback.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Thank you for your submission.\\n\\n- What is the specific question/problem tackled by the paper?\\n\\nThe paper proposes a VAE architecture to learn causal relations and allow for interventions. The architecture requires knowledge of the causal graph, and the direction of the causal arrows are inferred by comparing the log-likelihoods of generated images. The architecture may also require knowledge that an arrow exists between two vertices. This relies on the principle that \\\"low-capacity\\\" neural networks can predict better along the causal arrows (with the cause as input and the effect as the output) than in the opposite direction (with the effect as input and the cause as the output).\\n\\nThe paper focuses on the graph (A, B) where one wants to understand whether A causes B, or B causes A. The paper also discusses intervening in this graph.\\n\\nThe paper uses a new dataset for evaluating the approach, based on simple Newtonian systems. \\n\\n- Is the approach well motivated, including being well-placed in the literature?\\n\\nI think the motivation is adequate, but the review of the literature glosses over related work (or the absence thereof) in predicting the direction of arrows in causal graphs. The comparison of the proposed dataset against existing ones is missing.\\n\\n- Does the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous.\\n\\nThe procedure for determining whether A causes B (or B causes A) is qualitative. The paper demonstrates that the performance gap between the correct and incorrect explanations is consistently distinguishable across multiple experiments.\\n\\nVisual inspection of the generated images is also used for assessing the quality of the models.\\n\\nBecause the results are qualitative, the support for the claims is not as strong as it could be (with quantitative results).\\n\\n- Summarize what the paper claims to do/contribute. Be positive and generous.\", \"the_paper_has_two_main_contributions\": \"* Evidence to the Independent Mechanism principle (in a setting different from Bengio et al.'s transfer setup).\\n* A new dataset for evaluating learning causal arrows (with accessible ground-truths).\\n\\nI think these are interesting contributions.\\n\\n- Is the paper clearly written?\\n\\nThe paper has a number of grammatical errors that should be fixed.\\n\\nThe explanation of how the latent interventions are made is important and should be included.\\n\\n- Clearly state your decision (accept or reject) with one or two key reasons for this choice.\\n\\nI vote for a weak accept.\\n\\n- Provide supporting arguments for the reasons for the decision.\\n\\nI trust that the writing issues will be addressed in due course, but I am also concerned about the fact that evaluations are qualitative. The qualitative results provide support for the contributions that could be strengthened. \\n\\nThe dataset is also an interesting contribution and it is a good idea to give it visibility. For this, though, it is important that the paper assess its strengths and limitations in comparison to alternative datasets.\\n\\n- Provide additional feedback with the aim to improve the paper. 
Make it clear that these points are here to help, and not necessarily part of your decision assessment.\\n\\nI am not convinced that mentioning Kolmogorov complexity is an efficient use of the space. I think the content could be improved by making the motivation section more concise and adding a few more experimental results or discussion.\\n\\nWhich discussions would be good to have? I think it should be noted that the intervention on effect should behave as demonstrated (creating implausible scenarios). Also some more development on the spring example: What is the right causal graph for it, and can the arrows in that graph be learned?\\n\\nQuantitative results would also improve the paper. Maybe decide between A->B or A<-B based on a statistical test?\\n\\nYou give an example about elephant-grassland association. Please cite a source for that.\\n\\nSuppose that both likelihoods for A->B and B<-A are about the same. How do you decide if your model is too rich, or if there's no relationship? (This is an important question to understand if the method requires knowledge that an (A,B) arrow exists or not.)\\n\\nThe panels in Figure 5 do not support the claim. The simple net gets better at the cause, but in some cases the rich representation does a better job at the effect.\\n\\nI think the physics dataset is also a contribution, so its originality & impact should be discussed in comparison to related work. Why is this an adequate benchmark? How does it address limitations of other benchmarks that could be used to evaluate proposed solutions for the problem in question?\\n\\nIn summary, my suggestions for improving the paper are:\\n1) Make sure & demonstrate (by adequate discussion of related work) the originality of the contributions:\\n1.1) The method for detecting the direction of causal arrows.\\n1.2) The dataset as a benchmark for the problem being studied.\\n2) Report quantitative results across the dataset and maybe across multiple setups for each name/physical law, with good coverage. You may consider a test set where the parameters are within the sampling range of your training set, and also outside the sampling range (where success of the method would be even more interesting).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In summary: This paper is not ready for publication. The paper contains some potentially interesting ideas, but the presentation quality is not sufficient for publication. The paper should be substantially improved before re-submission.\", \"strengths\": [\"Causality is an important and established research area, and papers on this topic would be timely.\", \"Paper contains some interesting ideas to integrate causality into an auto-encoder (but see weaknesses below)\", \"Paper proposes a new dataset for evaluating causal mechanisms (but the approach is not evaluated)\"], \"weaknesses\": [\"The quality of the writing is inappropriate for a scientific venue. Language throughout the paper is loose, eg \\\"physics is a hot topic\\\" or \\\"People have studied causality for a long time\\\" or \\\"Causality is a bridge between science and philosophy\\\" The paper should be re-written so that it is precise and clear.\", \"The technical approach has several typos and lacks discussion of the approach. Instead, several high-level statements are made, with long equations. This makes appreciating the contribution of the paper difficult.\", \"The dataset is potentially interesting, but it is artificial. A much more exciting dataset would be realistic data.\", \"The experiments only evaluate the likelihood, but it is not clear whether this is on a training or testing set.\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presented an image data that are generated from two variables using some physics law. It also proposed a model to identify the causal relationship between the two variables using the image dataset. The method, in general, utilize the general idea that the causal direction is easier for the model to describe than the anti-causal direction. So the image is fad into a VAE based model in two different ways. The one with lower loses represents the correct causal direction.\", \"pros\": \"1. Causal discovery is, in general, an interesting problem and causal discovery based on representation learning are of great importance. \\n2. The dataset presented can be used for generic causal discovery evaluation which can be useful for the community.\", \"cons_and_other_details\": \"1. The method assumes that A and B are known and given which is very unrealistic in natural images. Also with this assumption, the problem is not much different from causal discovery from measurement data rather than image data. \\n2. Based on the previous point, the method, in general, does not match the motivation in the introduction where a causal representation needs to be learned as the images are already separated into different components. \\n3. The method cannot be scaled to more than two variables even with all components given as it requires exponentially many trials of the method. This setting is not so interesting anymore with image input. \\n4. There is much-related work with causality and representation learning also causality with NN or VAE. None of these related work has been discussed. for example Leon Bottou https://arxiv.org/pdf/1907.02893.pdf; Many works from Mingming Gong etc\\n5. The math is not very rigorous in general. For example, Eq(2) s a valid-loss but not likelihood. Also, the work did not say what likelihood under what distribution. This is propositional to Gaussian likelihood which may work fine in practice but the math presentation is not rigorous. \\n6. For the method (see figure 2), I did not see why the first part needs to be there as the second part takes the ground truth A as input. Using only the second part of the model which tries to see whether A->B is easier or B->A is easier is sufficient for the aim of identifying the relationship between given A and B. \\n7. The dataset may be more useful to the causality community if it is released as a simulator rather than the images.\"}"
]
} |
BJgQ4lSFPH | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | [
"Wei Wang",
"Bin Bi",
"Ming Yan",
"Chen Wu",
"Jiangnan Xia",
"Zuyi Bao",
"Liwei Peng",
"Luo Si"
] | Recently, the pre-trained language model, BERT (and its robustly optimized version RoBERTa), has attracted a lot of attention in natural language understanding (NLU), and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification, natural language inference, semantic textual similarity and question answering. Inspired by the linearization exploration work of Elman, we extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. As a result, the new model is adapted to different levels of language understanding required by downstream tasks.
The StructBERT with structural pre-training gives surprisingly good empirical results on a variety of downstream tasks, including pushing the state-of-the-art on the GLUE benchmark to 89.0 (outperforming all published models at the time of model submission), the F1 score on SQuAD v1.1 question answering to 93.0, the accuracy on SNLI to 91.7. | [
"structbert",
"incorporating language",
"bert",
"accuracy",
"new model",
"language structures",
"downstream tasks",
"deep language",
"language model"
] | Accept (Poster) | https://openreview.net/pdf?id=BJgQ4lSFPH | https://openreview.net/forum?id=BJgQ4lSFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4-Ry-UlJY",
"rJxVMwHhsH",
"H1eXLxr2oB",
"S1gP2REhjH",
"H1lYNb_e5S",
"rJer0FEJqB",
"HJgNRaZ0tr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744114,
1573832460092,
1573830731211,
1573830318949,
1572008241041,
1571928525076,
1571851723988
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2242/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2242/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2242/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2242/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2242/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2242/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a pair of complementary word- and sentence-level pretraining objectives for BERT-style models, and shows that they are empirically effective, especially when used with an already-pretrained RoBERTa model.\\n\\nWork of this kind has been extremely impactful in NLP, and so I'm somewhat biased toward acceptance: If this isn't published, it seems likely that other groups will go to the trouble to replicate roughly these experiments. However, I think the paper is borderline. Reviewers were impressed by the results, but not convinced that the ablations and analyses were sufficient to motivate the proposed methods, suggesting that some variants of the proposed methods could likely be substantially better. In addition, I agree strongly with R3 that framing this work around 'language structure' is disingenuous, and actively misleads readers about the contribution to the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you so much for going through the paper carefully and providing positive and useful feedback to our work!\", \"please_see_our_responses_below\": \"C1. Thanks for the suggestion. In StructBERT, the new sentence objective is designed to replace the Next Sentence Prediction (NSP) objective in original BERT, while the new word objective is a supplement to the masked LM objective. We will pretrain only with the new sentence and word objectives (without masked LM) to study how well it performs compared with BERT.\\n\\nC2. Following BERT's methodology, we experimented with different configurations of shuffled N-grams and sampling rates, and found out the best setting of trigrams and 5% sampling rate. Analysis of the experiments will be detailed in the final version to justify our configuration choices.\\n\\nC3. Although original BERT can capture some syntactic information from text, we believe that our new structural objectives can inject more capability into the language model: 1) The new word objective forces the model to correct locally shuffled trigrams, and thus enhances its capability in modeling local syntax. Besides, the new word objective used for word ordering can be effective in controlling local fluency, which is also indicated in [1]. 2) The new sentence objective, on the other hand, enables the model to capture discourse-level coherence properties between sentences (e.g., strengthening, contrast and causality). Similar findings are also reported in [2, 3].\\n\\n[1] Discriminative Syntax-Based Word Ordering for Text Generation, Zhang and Clark 2015\\n[2] Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning, Jernite et al. 2017\\n[3] ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, Lan et al. 2019\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you so much for going through the paper carefully and providing such a positive feedback. Please see below our response to your comments:\\n\\nC1. The intuition of word ordering task is from the task of Grammatical error correction, while the intuition of sentence ordering task is inspired from the discourse-level coherence property and causal relationship between the natural sentences. Yes, we have also tested some other tasks: 1) mask only entities or nouns, 2) increase the mask rate, 3) predict the next sentence, nonadjacent sentence in the same document, and random sentence from another document. But we did not observe more improvement.\\n\\nC2. We agree with the reviewer about this and it has been fixed accordingly.\\n\\nC3. We have been in touch with SQuAD's administrator to evaluate our submitted model and update its score on the leaderboard. This process involves much manual effort from both us and the administrator. We have not got the updated score from the administrator yet. We will update our results on SQuAD upon receipt.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you so much for going through the paper carefully and providing positive and useful feedback to our work!\", \"please_see_our_responses_below\": \"C1. Thanks for your valuable suggestion. In our paper, structure refers to the word and sentence ordering inherent in natural language. Word trigrams are unscrambled to uncover the word ordering structure. We understand that the references to Elman 1990 are not very effective in clarification, and will remove them as suggested.\\n\\nC2. Thanks for the comment. We will add the missing citations accordingly.\\n\\nC3. The pretraining objective in XLNet belongs to the autoregressive (AR) language modeling, where they use all permutations of the factorization order to approximate the AR objective. By contrast, our new word objective is still an autoencoding (AE) one, which is designed to model local language structures by incorporating word ordering into pretraining. Moreover, our word objective also differs from XLNet's in that ours permutates word order while XLNet's objective permutates factorization order. Specifically, the order of words in XLNet does not change given their fixed positional embeddings. In contrast, StructBERT shuffles words by changing their positions in text. We will include the elaboration in our final version.\\n\\nC4. We did try bigram and four-gram shuffling orders, but did not observe further improvement over trigram shuffling. We speculate that shuffling less words (bigrams) cannot take full advantage of the word ordering structure, while shuffling more words (4+-grams) can introduce more noise and harm the robustness of the model.\\n\\nC5. The Binary Ordering of Sentences in [2] models the ordering of two consecutive sentences. Despite the similarity, our new objective differs in two ways: 1) It is defined on textual segments rather than natural sentences. 2) It is 3-way classification of segments while the objective in [2] determines binary ordering of sentences. We will add this work and its difference from ours in our final version.\\n\\nC6. It has been fixed.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces two new tasks for large scale language model pretraining: trigram word unscrambling and contextual sentence ordering. Using these tasks to pretrain on top of masked language modelling shows improvements when the resulting model is finetuned on downstream tasks. The proposed tasks are simple to implement, and particularly the sentence ordering task is an improvement over the original BERT next sentence task, which is widely regarded as too simple to drive learning good representations. For this reason, I recommend acceptance of this paper.\", \"some_minor_quibbles\": \"1) Structure in language usually means syntactic structure. How does unscrambling word trigrams help uncover syntactic structure? The references to Elman 1990 also don't serve to clarify anything, I suggest that they are removed.\\n2) Some prior work on word ordering (e.g. [1] and older papers cited therein) is missing.\\n3) The permutation objective seems very similar to the XLNet objective. Could the authors elaborate more on this in the paper?\\n4) Did the authors try with other n-gram shuffling orders?\\n5) The sentence ordering task has been used previously (e.g. [2]).\\n6) Table 1 overhangs the right margin.\", \"references\": \"[1] Discriminative Syntax-Based Word Ordering for Text Generation, Zhang and Clark 2015\\n[2] Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning, Jernite et al. 2017\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposed a new pre-trained language model based on BERT, called StructBERT. The key contributions are the two new pre-train objectives, (1) word structural objective, where the goal is to reconstruct the right order of intentionally shuffled word tokens, and (2) sentence structural objective, a three-class sentence-pair prediction, either the 2nd sentence precedes the 1st, the 2nd succeeds the 1st, or the 2nd is randomly selected. Unlike the original NSP (next sentence prediction) task, which is simple but tends out to be not so helpful in many downstream tasks, both proposed pre-train objectives seem to be rather useful in benchmarks tested in the paper, including GLUE, SNLI, and SQuAD.\", \"The paper is well written and understandable for anyone who has a basic background about BERT or pre-train. The experimental results are impressive. Some of my questions / suggestions:\", \"The two auxiliary tasks are evidently helpful. I wonder what intuition/theory leads to the selection of these two tasks? If the authors have test multiple other tasks that were not as helpful, it is also interesting to know them.\", \"The wording of the text should be revised to reflect the up-to-date leaderboard results. Personally, I don't think the leaderboard results are that critical, but just want to make sure the writing is accurate at the time of publishing.\", \"Please also update the results from SQuAD 1.1 CodaLab.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes to use additional structures within and between sentences for pre-training BERT. The basic idea is to shuffle either some n-grams within sentences or the sentences in texts, then train the model to predict the correct orders. Experiments in this work show that, with this additional training objective, the proposed pre-trained model, StructBERT, obtains good performance on the tasks including natural language understanding and question answering.\", \"Overall, I think the experiments and results in this work are not sufficient enough to support the claim:\", \"It is necessary to show the performance of BERT only trained with the proposed word and sentence objectives. Otherwise, it is not clear how much benefit the model can get from them and the work is basically incremental.\", \"Some justification is needed about why choosing trigrams and why 5% is a good number of sampling trigrams from texts\", \"Besides, there are some recent work on analyzing why BERT encodes any linguistic properties of texts, for example\", \"Goldberg. Assessing BERT's syntactic abilities. 2019\", \"Tenny et al. BERT Rediscovers the Classical NLP Pipeline. ACL 2019\", \"Tenny et al. What do you learn from context? ICLR 2019\", \"All of them show positive results on BERT can capture some syntactic information from text automatically. Which makes me wonder why the simple additional training objective proposed in this work can still lead to performance improvement. Is there an explanation?\"]}"
]
} |
r1xQNlBYPS | Multichannel Generative Language Models | [
"Harris Chan",
"Jamie Kiros",
"William Chan"
] | A channel corresponds to a viewpoint or transformation of an underlying meaning. A pair of parallel sentences in English and French expresses the same underlying meaning but through two separate channels corresponding to their languages. In this work, we present Multichannel Generative Language Models (MGLM), which models the joint distribution over multiple channels, and all its decompositions using a single neural network. MGLM can be trained by feeding it k-way parallel data, bilingual data, or monolingual data across pre-determined channels. MGLM is capable of both conditional generation and unconditional sampling. For conditional generation, the model is given a fully observed channel, and generates the k-1 channels in parallel. In the case of machine translation, this is akin to giving it one source, and the model generates k-1 targets. MGLM can also do partial conditional sampling, where the channels are seeded with prespecified words, and the model is asked to infill the rest. Finally, we can sample from MGLM unconditionally over all k channels. Our experiments on the Multi30K dataset containing English, French, Czech, and German languages suggest that the multitask training with the joint objective leads to improvements in bilingual translations. We provide a quantitative analysis of the quality-diversity trade-offs for different variants of the multichannel model for conditional generation, and a measurement of self-consistency during unconditional generation. We provide qualitative examples for parallel greedy decoding across languages and sampling from the joint distribution of the 4 languages. | [
"text generation",
"generative language models",
"natural language processing"
] | Reject | https://openreview.net/pdf?id=r1xQNlBYPS | https://openreview.net/forum?id=r1xQNlBYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"GOaCZmAxuZ",
"rJeGc5YniH",
"r1xTOcthsH",
"Hye_Pqt3oS",
"SJlT6ASJcS",
"S1gYvmxpFB",
"H1g0adr3Kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798744084,
1573849737801,
1573849716811,
1573849695719,
1571933892679,
1571779424803,
1571735749587
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2241/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2241/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2241/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2241/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2241/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2241/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a multi-view generative model which is applied to multilingual text generation. Although all reviewers find the overall approach is important and some results are interesting, the main concern is about the novelty. At the technical level, the proposed method is the extension of the original two-view KERMIT to multiviews, which I have to say incremental. At a higher level, multi-lingual language generation itself is not a very novel idea, and the contribution of the proposed method should be better positioned comparing to related studies. (for example, Dong et al, ACL 2015 as suggested by R#3). Also, some reviewers pointed out the problems in presentation and unconvincing experimental setup. I support the reviewers\\u2019 opinions and would like to recommend rejection this time.\\nI recommend authors to take in the reviewers\\u2019 comments and polish the work for the next chance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for taking the time to review our paper. We address your questions below:\\n\\n1. Indeed, we agree that Multi30k is not a typical large scale machine translation dataset. However, we chose the Multi30k dataset because it provided us with multiple high quality channels that expresses the same underlying meaning (i.e. the image) under different viewpoint (languages in this case), in order to highlight our approach. While we have not performed experiments with datasets that have partially (m<k) parallel data (such as bilingual pairs or monolingual data as suggested), we believe that our approach should also be able to take advantage of partially parallel data. The partially parallel data should also be weighted by the prior on how often we believe that combination of channels is encountered. \\n\\n2. While the Multilingual KERMIT approach can be considered incremental conceptually, we believe that we contributed the novelty in terms of the empirical investigations and characterizing the (unconditional and (partially) conditional) samples from the model. In addition, multilingual KERMIT is only one possible implementation of the proposed MGLM framework, with the hopes that this helps encourage others in the community to pursue this line of generative modeling of multichannel texts. \\n\\n3. We will clarify our interpretation of these results in the main paper with more details about the test sets and our hypothesis of the model\\u2019s performance. The Flickr test sets are considered \\u201cin-domain\\u201d, while the MSCOCO are the harder \\u201cout-of-domain\\u201d where the captions were selected to contain ambiguous verbs, which makes the translation task harder. In those cases, we found that training the model on the more difficult task (i.e. multi-target (any language -> rest) helped with generalization to the MSCOCO test set in the case of English -> German. In the case of English -> French, we hypothesize that the bilingual model performed the best because there is a high mutual information between English and French, such that training on additional languages do not help the model generalize, but rather even distracts the model.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for taking the time to review our paper, especially for bringing to our attention an important related work by Dong et al. We will revise the paper to include their work in the related work section and discussion.\\n\\nOne difference with Dong et al. to our approach is that during the multi-target generation, our model\\u2019s output at each time step can be conditioned on both the source sentence and the partial translations of all the target languages, while Dong et al.\\u2019s model only conditions on the input and the partial translation of the particular target language (in parallel). Our experiment compares the effects of conditioning only on one partial target versus all partial targets inference time.\", \"on_novelty\": \"while our specific implementation can be considered incremental from KERMIT, we believe that the task of learning a generative model over several channels is an underexplored direction that is worthwhile to pursue. Despite the vast interest in unconditional generative modeling in images (GANs, VAEs, etc.), we have seen much less interest in the text domain. We also believe that our empirical contributions will help increase interest in this direction by showing what is possible even with using a relatively simple model.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for taking the time to review our paper, especially for the constructive feedback on how to improve the clarity of the presentation. We will revise section 2 and 3 to be more clear and self-contained in the future revision, without relying too much on the diagram. We will also expand on the discussion related to Table 1.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This submission belongs to the area of multi-view modelling. In particular, the submission describes construction of multi-view language models that (i) can generate text simultaneously in multiple languages, (ii) can generate text in one or more languages conditioned on text from another language. This submission extends previously proposed KERMIT from two views to more than two views. I believe this paper could be of interest to multi-view modelling/learning community.\\n\\nThough the original KERMIT approach is very interesting and you application of it to more than two views is also interesting I find the presentation to be poor. In particular I find section 2 to be hard if not impossible to understand without referring to the original paper where the story, equations, nomenclature are much more clearly explained. Even though your extension from two views to multiple is simple I find reliance on a diagram to be a mistake as I find your description not to be very clear. Given that there are no equations to support the reader and that the original equations are not adequate I find it hard to understand Sections 2 and 3. The key experimental result in Table 1 is only briefly commented on despite featuring multiple models with different strength and weaknesses, multiple types of inference. If space is of concern I would suggest removing Figure 2 (or changing input from non-English to English and removing or removing another qualitative table).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a multichannel generative language model (MGLM), which models the joint distribution p(channel_1, ..., channel_k) over k channels. MGLM can be used for both conditional generation (e.g., machine translation) and unconditional sampling. In the experiments, MGLM uses the Multi30k dataset where multiple high quality channels are available, in the form of multilingual translations.\", \"i_feel_that_this_paper_is_not_ready_for_publication_at_iclr_due_to_the_following_major_issues\": [\"Missing important related work: This paper seems unaware of an important related work \\\"Multi-Task Learning for Multiple Language Translation\\\" by Dong et al, ACL 2015. In fact, Dong et al. investigated the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Although machine translation is just an example of MGLM, Dong et al. is highly relevant to the conditional generation with MGLM, needless to say that they share the same multi-language translation problem domain. Thus, this paper will be much stronger if comparison with important baseline methods is provided.\", \"Limited novelty: This paper extends Chan et al.'s KERMIT by applying its objective on tasks with more than 2 sequences, in order to learn the joint distribution p(channel_1, ..., channel_k) over k channel sequences. Most of the math in this paper can be found in the original Chan et al.'s paper. The extension to the multichannel case is incremental as it is hard to justify the challenge of such extensions.\", \"Besides, as minor suggestions, it would help readers if more illustrations of Figure 1 (especially the inference part) can be provided.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"[Paper summary]\\nThis work is an extension of KERMIT (Chan et al., 2019) to multiple languages and the proposed model is called \\u201cmultichannel generative language models\\u201d. KERMIT is an extension of \\u201cInsertion Transformer\\u201d (Stern et. al, 2019), a non-autoregressive model that can jointly determine which word and which place the translated words should be inserted. KERMIT shares the encoder and decoder of insertion Transformer, and the source sentence and target sentence are concatenated to train a generative model (also, various loss functions are included). In this work, parallel sentences from more than two languages are concatenated together and fed into KERMIT. Each language is associated with a language embedding. This work demonstrates that a joint distribution p(x1, . . . , xk) over k channels/languages can be properly modeled through a single model. The authors carry out experiments on multi30k dataset.\\n\\n[Pros] Some discoveries of this work are interesting, including: (1) It is possible to use a single model to translate a sentence into different languages in a non-autoregressive way. (2) The unconditional multilingual generation in Section 4.5 is interesting, especially, the generation order is determined by the model rather than left-to-right.\\n\\n[Questions]\\n1.\\tThe authors work on multi30k dataset, which is not a typical dataset for machine translation. \\n(A)\\tThe dataset and the corresponding information is at https://github.com/multi30k/dataset. The number of words in a sentence is smaller than 15, which is too short for a machine translation. Also, the pattern of sentences is relatively simple.\\n(B)\\tFor real world application, I am not sure whether it is possible to collect a large amount of k-parallel data where $k>2$. Therefore, the application scenario is limited. What if we have a large amount of bilingual data instead of k-parallel data? How should we leverage the large amount of monolingual data?\\n2.\\tFor novelty, this is an extension of KERMIT to a multilingual version, which limits the novelty of this wok.\\n3.\\tThe best results on En->De in Table 1 are inconsistent. On tst16, bilingual en<->de is the best; on tst17, en<->{rest} is the best; on mscoco, any<->rest is the best. In Table 2, seems using bilingual data only is the best choice. This makes me confuse about how to use your proposed method. However,\"}"
]
} |
B1xMEerYvB | Smooth markets: A basic mechanism for organizing gradient-based learners | [
"David Balduzzi",
"Wojciech M. Czarnecki",
"Tom Anthony",
"Ian Gemp",
"Edward Hughes",
"Joel Leibo",
"Georgios Piliouras",
"Thore Graepel"
] | With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games. We therefore introduce smooth markets (SM-games), a class of n-player games with pairwise zero sum interactions. SM-games codify a common design pattern in machine learning that includes some GANs, adversarial training, and other recent algorithms. We show that SM-games are amenable to analysis and optimization using first-order methods. | [
"game theory",
"optimization",
"gradient descent",
"adversarial learning"
] | Accept (Poster) | https://openreview.net/pdf?id=B1xMEerYvB | https://openreview.net/forum?id=B1xMEerYvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4kseDFG3Ub",
"S1x3tao5iB",
"SJljlpi9sr",
"H1gVKATRFS",
"SJepamwKYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798744055,
1573727619663,
1573727475072,
1571901052311,
1571546053456
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2240/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2240/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2240/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2240/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper discusses smooth market games and demonstrate the merit of the approach. The reviewers agree on the quality of the paper, and the comments have been addressed well by the authors.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for their time and detailed feedback.\\n\\n1. More background material. \\nThe reviewer is correct, the paper covers a wide range of topics quite rapidly. We will provide more discussion in the related work section and also the Appendix to help orient readers.\\n\\n2. Move Lemma 1 before Definition 2? \\nYes, will do.\\n\\n3. Use \\u201cclassic definition of Nash\\u201d. \\nYes\\n\\n4. Define potential games. \\nWe will put this in the appendix to save space.\\n\\n5. How can stop_grad cause problems? \\n\\nStop_grad can be used to construct essentially any smooth game, see discussion in Appendix C. Stop_grad thus opens the door to a huge variety of pathological behaviors and intractable dynamics. We will provide a more detailed explanation in the final version.\\n\\n6. Near zero-sum. \\n\\nAlthough GANs are adversarial, they are not always zero-sum games. A concrete example is discussed in sections 3.2.2 and 3.2.3 of Goodfellow\\u2019s tutorial (https://arxiv.org/abs/1701.00160). In short, it turns out that gradients tend to saturate in the original minmax setting, so Goodfellow introduced a \\u201cheuristic, non-saturating game\\u201d. By now there is a huge number of GANs in the literature, and it is difficult to find any mathematical property that is common to all of them. In particular, we do not have a precise definition of \\u201cnear zero-sum\\u201d. Nevertheless, GANs do share adversarial dynamics as a unifying theme. Appendix B of the paper contains a brief discussion of some implications of loosening the pairwise zero-sum constraint.I\\n\\n7. Why are forecasts always positive? \\n\\nSection 5 imposes the condition that \\u201cproduction updates\\u201d are either gradients or gradients rescaled by positive learning rates. Firms that update their production vectors using gradients will therefore always forecast profits to infinitesimally increase -- precisely because the updates are in the direction of steepest ascent. However, in reality, profits may of course go down due to the actions of other firms.\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for their time and feedback. We have two responses:\\n\\n1. Pompous writing. \\n\\nWe agree with R2 and will tone down the writing -- see the next point. \\n\\nOn the other hand, it\\u2019s worth defending the James Scott quote. As we see it, overly focusing on (Nash) equilibria has been a major blocker for research on n-player games. We find the Scott quote inspiring because it suggests to let go of equilibria and instead search for measurements that can be pieced together to provide a synoptic view (a map) of a game. That is, we don\\u2019t need to find Nash equilibria to understand what is happening or predict what will happen in an interacting population. \\n\\n2. R2 points out that we used a confusing mix of terminology. We agree. In particular, the analogy with GDP adds nothing to the paper. We will edit the paper to make the terminology more clear and consistent.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper draws from concepts and patterns of game theory and economics to re-interpret common machine learning algorithms, and introduces a paradigm of n-player games such that there exists a simple way to study such games where each player is individually optimized by machine learning algorithms.\\n\\nOverall, this is a very interesting and inspiring paper with its interdisciplinary touch, but at the same time doesn\\u2019t lose readability for audience with mostly machine learning background. The paper is clear, concise, and well-written, and hence I do not have any overarching comments. There are several suggestions and questions, though, that I\\u2019d like to propose.\\n\\n1. Since many concepts are not from the machine learning domain, further and more detailed touch on related work are very beneficial. The current section of related work (Section 1.2) is more succinct than laying the right background information. Authors could consider giving some examples of \\u201cmarket mechanism\\u201d, and similarly for \\u201cdesign pattern\\u201d. This can give readers some ideas on how this present work\\u2019s use of \\u201cexisting design pattern\\u201d is different from prior works\\u2019 proposals of \\u201cmarket mechanism\\u201d. Again for the second paragraph, although these concepts are more familiar to the audience, a short description for each mentioned concept is good to include (e.g. monotone games, etc)\\n\\n2. Move Lemma 1 before Definition 2? Since Lemma 1 is most related to Definition 1 and not related to Definition 2.\\n\\n3. For Definition 2, authors could consider adding \\u201cthe classic definition [of Nash equilibrium]\\u201d to the list to clarify the difference. This classic Nash equilibrium definition will be referred to again in later sections. \\n\\n4. In Section 2.1, consider giving a formal definition of \\u201cpotential game\\u201d.\\n\\n5. In Section 2.3, it is not immediately clear after the text why stop_gradient becomes a problem.\\n\\n6. In Section 3.1, could we elucidate a bit on what is \\u201cnear zero sum\\u201d?\\n\\n7. In Section 5, why the equation following the text \\u201cfirm i\\u2019s forecast\\u201d is always positive? Since by definition, forecast is meant to represent the change of profit, which can be negative?\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces smooth market games (SM-games), a class of smooth games characterized by pairwise zero sum interactions, and show SM-games possess a number of appealing properties:\\n- A fixed point is a local Nash equilibrium iff it is stable\\n- Local convergence to stable fixed points\\n- Unstable fixed points are repellent\\n- Dynamics are bounded, assuming diminishing returns-to-scale for sufficiently large vectors.\\nThe need for a class of games with such properties is motivated by a discussion of the pathologies of smooth games under simultaneous gradient ascent. \\n\\nThe paper has an extensive literature review, addresses an important problem, and has informative discussions. It is well written and there are nice examples to illustrate the central ideas. However, I do have some criticisms:\\n\\n1. The writing in this paper is sometimes pompous. Quoting James Scott has no added value to this work. Reference to Adam Smith\\u2019s invisible hand not only has no added value but is also confusing and detracts from the paper \\u2014 SM-games are not purporting to be real economic models.\\n2. Throughout the paper there is a confusing mix between ideas from economics and ideas from accounting. For example, it is not correct to say that aggregate revenue is the same as GDP. Revenue is an accounting term to show how much benefit you record in a year. GDP measures the value of\\u00a0production. It would be more accurate to write that aggregate output and GDP are the same. But if SM-games are reflecting the perspective of an accountant, as is stated, why is GDP being discussed at all? Similarly, the idea of a dummy player with off the book costs is not consistent with an accounting perspective. I think that the paper would be clearest if it did not attempt to use terminology from other fields. But if it is going to do so, it should make a more substantial effort to do so consistently.\"}"
]
} |
B1xfElrKPr | Enhancing the Transformer with explicit relational encoding for math problem solving | [
"Imanol Schlag",
"Paul Smolensky",
"Roland Fernandez",
"Nebojsa Jojic",
"Jürgen Schmidhuber",
"Jianfeng Gao"
] | We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure.
Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently-introduced Mathematics Dataset containing 56 categories of free-form math word-problems.
The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of regular attention.
The TP-Transformer's attention maps give better insights into how it is capable of solving the Mathematics Dataset's challenging problems.
Pretrained models and code will be made available after publication. | [
"Tensor Product Representation",
"Transformer",
"Mathematics Dataset",
"Attention"
] | Reject | https://openreview.net/pdf?id=B1xfElrKPr | https://openreview.net/forum?id=B1xfElrKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JdgFOOBdA7",
"BylEoglDsS",
"r1lwBglviS",
"r1gnfxeDsH",
"rJezC1lDoH",
"Hyx0sLGk5S",
"HJeOaw0CKr",
"rkx9a33nKS",
"rkxFHLa2dH",
"Syx3-8jBOH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798744026,
1573482652322,
1573482559333,
1573482516511,
1573482441935,
1571919526398,
1571903424052,
1571765441994,
1570719297014,
1570252292317
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2239/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2239/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2239/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2239/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2239/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2239/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2239/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2239/Authors"
],
[
"~Hyunjae_Kim1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a change in the attention mechanism of Transformers yielding the so-called \\\"Tensor-Product Transformer\\\" (TP-Transformer). The main idea is to capture filler-role relationships by incorporating a Hadamard product of each value vector representation (after attention) with a relation vector, for every attention head at every layer. The resulting model achieves SOTA on the Mathematics Dataset. Attention maps are shown in the analysis to give insights into how TP-Transformer is capable of solving the Mathematics Dataset's challenging problems.\\n\\nWhile the modified attention mechanism is interesting and the analysis is insightful (and improved with the addition of an experiment in NMT after the rebuttal), the reviewers expressed some concerns in the discussion stage:\\n\\n1. The comparison to baseline is not fair (not to mention the 8.24% claim in conclusion). The proposed approach adds 5 million parameters to a normal transformer (table 1, 5M is a lot!), but in terms of interpolation, it only improves 3% (extrapolation improves 0.5%) at 700k steps. The rebuttal claimed that it is fair as long as the hidden size is comparable, but I don't think that's a fair argument. I suspect that increasing the feedforward hidden size (d_ff) of a normal transformer to match parameters (and add #training steps to match #train steps) might change the conclusion.\\n2. The new experiment on WMT further convinces me that the theoretical motivation does not hold in practice. Even with the added few million more parameters, it only improved BLEU by 0.05 (we usually consider >0.5 as significant or non-random). This might be because the feedforward and non-linearity can disambiguate as well. \\n\\nI also found the name TP-Transformer a bit misleading, since what is proposed and tested here is the Hadamard product (i.e. only the diagonal part of the tensor product). \\n\\nI recommend resubmitting an improved version of this paper with stronger empirical evidence of outperformance of regular Transformers with comparable number of parameters.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your enthusiastic review. We appreciate that you share our excitement with regards to our progress on this extremely challenging dataset. One of the great features of this dataset is the opportunity to investigate deeply compositional problems. Due to its careful creation and design, it does not suffer from good-performing but shallow heuristics as is often the case in natural-language-based datasets. This makes it a great testbed to develop models that can represent and reason over more symbolic structures. We are happy to answer your questions:\\n\\n1.) Please see point 1 of our response to AnonReviewer3 who raised a very similar question. Please let us know if you then still consider the conclusion misleading as it is. \\n\\n2.) The dataset is designed such that the answers are virtually impossible to guess. For an answer to be correct, all characters of the answer sequence must be predicted accurately. We observed that our trained model often only gets a single character wrong which renders the whole answer wrong. This results in a performance drop throughout all tasks -- we did not analyse this in detail. The four tasks on which our model performs worst are tasks such as generating the prime factors of a number. Consider this character sequence input: \\\"What are the prime factors of 2104900?\\\" from which the TP-Transformer has to predict the sequence \\\"2, 5, 7, 31, 97\\\". The other tasks are similar in difficulty; this is best explained through an actual sample.\", \"div_remainder\": \"\\\"What is the remainder when 61720 is divided by 183?\\\" Answer: \\\"49\\\"\", \"simplify_power\": \"\\\"Simplify n**(-31)*n**(2/73)*(n/(n/(n/n**(1/3))))/n**(2/19)*n**21 assuming n is positive.\\\" Answer: \\\"n**(-39160/4161)\\\"\\n\\nWe added an anonymous colab notebook enabling you to experiment on your own in your browser. You\\u2019ll find the link in our general comment which summarizes all our revisions. \\n\\n3.) We followed your advice and included a neural machine translation experiment. Throughout training our TP-Transformer model achieves a lower negative log-likelihood when compared to the regular Transformer with the same hyper-parameters. \\n\\nWe hope you find our modifications sufficient for the revised version of this paper to be accepted, and welcome any further questions or suggestions.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your review and your questions. We are pleased to hear that you appreciated our manuscript and consider it well written.\\n\\n1.) Your first question is understandable. In section 4 of our manuscript, we provide a controlled comparison to justify our modifications to the Transformer architecture. Note the boldface on the 700k step version. However, the claim of a new state of the art in our conclusion is different, as it does not assert the source of the improvement. Here, as done throughout the literature, a claim of a new state of the art with x% improvement is a simple comparison of the best results before vs after our work. Such claims are routinely made when the \\\"before\\\" and \\\"after\\\" models differ substantially in training data and time and parameter count. That said, we are willing to change this statement if the reviewers consider it misleading. \\n\\n2.) This result is no longer missing. \\n\\nWe understand that you are not entirely familiar with this line of work. Nevertheless, we hope that you find our arguments convincing enough to accept the revised version. We are happy to answer any further questions you might have.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your detailed review.\\n\\nPlease note that our motivation is to better incorporate structure into the hidden representations of the transformer. We propose a relation vector to explicitly label the information-flow from one cell to another. We argue in the manuscript that such a representational space is best represented as a Tensor Product Representation (TPR). Unfortunately, a full Tensor Product is not feasible for models of this size which is why we instead make use of an optimal approximation: the Hadamard product of the relation vector and the attention-weighted value vectors. We added two additional sections to the appendix that justify our decisions in great detail. We\\u2019d appreciate your feedback on those. \\n\\nWe'll now address your comments: \\n\\n1. We considered fairness in the sense that both architectures have the same hidden state size. For this type of model, it is not possible to satisfy this constraint and equal number of parameters reasonably. Also, keeping the hidden state size the same maintains the closest match to the regular transformer model. \\n\\n2. We followed your advice and added a preliminary Machine Translation experiment to the appendix of the manuscript. In our experiment, our TP-Transformer achieved a lower negative log-likelihood compared to the regular Transformer. \\n\\n3. As you have pointed out: this section is merely a theoretical exercise that highlights the drawbacks of multiple layers of attention. We want to stress that this is not our main motivation but a closely related insight. Even though a neural network with a non-linear activation function is a general function approximator, that doesn't mean that the network is likely going to learn to disambiguate representations in the right way. Therefore, we argue to incorporate a simple inductive bias in the form of our TPR approximation between the relation vector and the linear combination of value vectors. \\n\\n4. This is a valid concern. We agree and decided to defer a detailed analysis of the learned structure for future work. We removed this section from our manuscript. \\n\\nWe believe that the arguments and experiments are now sufficient evidence for the generalizability of our contribution. In future work, we plan to apply the TP-Transformer to language modelling, and we believe our empirical and theoretical results so far give us good reasons to believe that this will be a promising direction. \\n\\nWe are looking forward to your response.\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your valuable time. We have revised our manuscript. We will summarize our changes below and address the details of each review directly. \\n\\n1.) We have improved the readability of our text and equations throughout the manuscript. \\n2.) We have added the previously missing extrapolation results for our 500k step TP-Transformer model (as pointed out by R3) \\n3.) We made sure the comparison in Figure 2 is fair by adding the per-task performances of the 700k step TP-Transformer (this was a concern raised by R1). \\n4.) We have removed section 6 \\\"Interpretation of a TP-Transformer column\\\". We believe that an analysis of the attention weights of our TP-Transformer is valuable if done adequately. During our revision, we decided that this section not on par with the quality of the other sections. As such, we decided to leave such an analysis for future work. R3 had a similar comment. \\n5.) We have added two sections to the Appendix to clarify the relations between the Hadamard- and Tensor-Product-Binding (A.1) and the appropriateness of the Hadamard product as a compression of the Tensor product (A.2). We kindly ask the reviewers to examine these two additional sections. They address comments made by all reviewers. \\n6.) We have added an experiment to the appendix (A.3) to attest the generality of the TP-Attention. We compare the regular Transformer and our TP-Transformer (using the hyperparameters of the regular Transformer) on the WMT\\u201914 en-de translation dataset with additional data from WMT\\u201917. Throughout training we find that the TP-Transformer achieves a lower negative log-likelihood when compared with the regular Transformer. This addresses comments made by R1 and R2. \\n\\nAdditionally, we\\u2019d like to announce the following: \\n\\n7.) We have prepared an anonymous Google colab notebook for anyone to experiment with our best TP-Transformer: https://colab.research.google.com/drive/1hXUmTXkN2mXF07mcv9uim14BhjpPpqiY \\n8.) We have prepared our source code, pretrained models, and preprocessed data for publication. We\\u2019ll include the respective links to the final deanonymized version of this manuscript.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Motivated by the fact that the attention mechanism in transformers is symmetric which might not be able to disambiguate different orders, this work proposes to use a subject vector (in addition to query, key states) for each attention head, and multiply it elementwise with the context vector for each head before merging the heads. Experiments on a mathematics dataset shows superior performance compared to the normal transformer. Qualitatively, the proposed model exhibits attentions that are more interpretable, and clustering by the subject vector gives some insights into how the model solved this problem.\", \"pros\": \"1. This work shows better performance than baseline transformer.\\n2. The clustering of the subject vectors gives some insights into model's behavior .\", \"cons\": \"1. In terms of experiments, the proposed approach adds a few million parameters to normal transformer (table 1), but in terms of interpolation it only improves 3% (extrapolation improves 0.5%) at 700k steps. The comparison would be fairer if the normal transformer can be given more parameters.\\n2. In terms of experiments, this approach is only evaluated on the mathematics dataset, but the argument for relational encoding is pretty general. It would be nice if experiments on other tasks are shown in addition to the math dataset.\\n3. In terms of motivation, the claim that there're ambiguities introduced by multiple layers of regular attention needs to be supported by evidence. I think (which authors also pointed out) the feedforward network and non-linearties can disambiguate as well.\\n4. In terms of interpretablity, there's claim that the learned attention maps more interpretable than transformer. Can there be more quantitative measures? It appears to me that both are hard to interpret.\\n\\nWhile this work shows superior performance on the mathematics dataset, I have a few concerns about the generalizability of this proposed architectural change to other problems, as well as the fairness of comparison to baseline. Therefore, I am inclined to reject this paper.\\n\\n----updates after reading rebuttal----\\nThanks for adding the new NMT experiment in Appendix A3. My concern is that the proposed TP-Transformer is not very effective on NMT. Therefore, I'm keeping my score.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors incorporated tensor-product representations within the Transformer. By creating an attention mechanism called TP-Attention, they explicitly encode the relations between each Transformer cell and the other cells, whose values are retrieved by attention. By introducing tensor products, the proposed algorithm can empirically perform well for noncommutative operations with multiple arguments, such as division. The authors trained models with the proposed algorithm on the Mathematics Dataset and compared the performances with two baselines (simple LSTM and the original Transformer). At last, several model snapshots are provided to help interpret several key elements of the model: the learned roles, the attention maps, the TP-transformer columns and so on.\\n\\nOverall, the paper is well-written. The experimental results generally support the high-level intuition behind the introduction of tensor-product representation. I would recommend accepting this paper.\", \"some_quick_questions\": \"1. It was claimed in the Conclusion section that the performance of the proposed algorithm beats the previously published state of the art by 8.24%. I guess the number comes from the 2nd and the last row of interpolation accuracy in Table 1. However, these two results are obviously trained for different numbers of iterations: The baseline algorithm was trained for 500k steps, while the proposed algorithm is trained for 1.7M steps. Is it a fair comparison? If the proposed algorithm is also trained for 500k steps, the improvement is around 2.3%. \\n\\n2. Why is the extrapolation accuracy results for TP-Transformer missing in Table 1?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper illustrates the TP-Transformer architecture on the challenging mathematics dataset. The TP-Transformer combines the transformer architecture with tensor-product representations. The experiments show a dramatic improvement of accuracies compared with SOTA models. Moreover, the paper also explains the reason why the TP-Transformer can learn the structural position and relation to other symbols with a detailed math proof.\\n \\nOverall, this paper is nice as it makes a milestone for math problem solving from unique perspectives. To be specific, the paper makes the following contributions:\\n\\n1. Demonstrate a novel architecture TP-Transformer in details;\\n2. Achieve a better accuracies in the challenging mathematics dataset than the SOTA transformer models;\\n3. Illustrate in fundamental math that why TP-Transformer can learn the structural position and relation, and solve the binding problems of stacked attention layers.\", \"here_are_a_few_minor_questions_that_may_further_improve_the_paper\": \"1. The conclusion states that TP-Transformer beats the previously published SOTA by 8.24%. However, it does not match to the experiment results (see section 4).\\n\\n2. In figure 5, there are 4 tasks in the bottom with accuracies lower than 0.5. It would be nice to provide more insights on this.\\n \\n3. It would be interesting to see whether it transferable to the other downstream tasks (such as natural language understanding) besides the experiments on the challenging mathematics dataset.\"}",
"{\"comment\": \"Thank you for your comment. What is used in the version of the TP-Transformer studied in this paper is not actually the inner product, which returns a single scalar, but the Hadamard or element-wise product, which returns a vector: there is no summation of products in the Hadamard product as there is in the inner product.\\n\\nA crucial property of the tensor product for its use in vector representations of structure is that a structure like a/b is not confusable with b/a, unlike the frequently-used bag-of-words encoding: in the BOW encoding of a/b, the pair of arguments to the operator are encoded as A + B, where A and B are the vector encodings of a and b respectively. Obviously, this cannot be distinguished from the BOW encoding of the argument pair in b/a, B + A. (Hence the name symbol \\u201cbag\\u201d, as opposed to symbol \\u201cstructure\\u201d.)\\n\\nIn a tensor-product representation of the argument pair in a/b, we have A * N + B * D, where N and D are respectively distinct vector embeddings of the numerator (or first-argument) and denominator (or second-argument) roles, and * denotes the tensor product. This is distinct from A * D + B * N, the embedding of the argument-pair in b/a. (In Sec. 6.2 of the paper, an aspect of this general property, in the context of attention models, is discussed. In Sec. 5, visualization of the roles and the per-role-attention show that this particular distinction, between the numerator and denominator roles, is learned and used by the trained TP-Transformer model.)\\n\\nThis crucial property of the tensor product, that A * N + B * D =! A * D + B * N, is shared by the Hadamard product, so if we now take * to represent the Hadamard product, the inequality remains true. To achieve this important property, the full tensor product is not required: the Hadamard product is the diagonal of the tensor product, which retains much of the product structure of the tensor product. In any application, it is an empirical question how much of the full tensor product is required to successfully encode distinctions between bindings of symbols to roles; in the TP-Transformer, it turns out that the diagonal of the tensor product is sufficient to get improvement in performance over having no symbol-role-product structure at all. Unfortunately, the compute requirements of training on the Mathematics Dataset made using the full tensor product infeasible, unless the vector representations of symbols and roles were reduced to dimensions that proved to be too small for the task. When future compute makes it possible, we expect that expanding from the diagonal to the full tensor product will provide further improvement in performance and interpretability.\", \"title\": \"We use the tensor product property of the Hadamard product. Not the inner product.\"}",
"{\"comment\": \"Thank you for the interesting work.\\n\\nYou highlighted the tensor-product operation in the introduction of the paper,\\nbut in practice, the inner product was used.\\n\\nI wonder if any mathematical property of the tensor-product was used in the paper.\", \"title\": \"tensor-product ?\"}"
]
} |
HkxZVlHYvH | Ergodic Inference: Accelerate Convergence by Optimisation | [
"Yichuan Zhang",
"José Miguel Hernández-Lobato"
] | Statistical inference methods are fundamentally important in machine learning. Most state-of-the-art inference algorithms are
variants of Markov chain Monte Carlo (MCMC) or variational inference (VI). However, both methods struggle with limitations in practice: MCMC methods can be computationally demanding; VI methods may have large bias.
In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation. The proposed method can generate low-biased samples by increasing the length of MCMC simulation and optimising the MCMC hyper-parameters, which offers attractive balance between approximation bias and computational efficiency. We show that our method produces promising results on popular benchmarks when compared to recent hybrid methods of MCMC and VI. | [
"MCMC",
"variational inference",
"statistical inference"
] | Reject | https://openreview.net/pdf?id=HkxZVlHYvH | https://openreview.net/forum?id=HkxZVlHYvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"uCZlwGXaz",
"R7m1AAby6N",
"yNvVbsNy0f",
"HJgOa5gnoS",
"SJlVrce2ir",
"Bkx2W9xhir",
"HJlkXwkAtS",
"Bkgsjnv_Yr",
"BJxNl7DdOH"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576873962882,
1576867337785,
1576798743997,
1573812927602,
1573812795968,
1573812740332,
1571841814833,
1571482787424,
1570431724038
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2238/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2238/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2238/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2238/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2238/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2238/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2238/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2238/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Request for the justification on the *seemingly* situations where the algorithm could \\\"cheat\\\" to failure\", \"comment\": \"I believe the program chair would agree with that it is not fair to reject a paper based on some *seemingly* failure situations. If so, I would like to request the program chair to provide further justification on how to *easily* find those *seemingly* situations our algorithm can fail to converge considering the well-known monotone convergence of MCMC chains.\\n\\n*In the paper, we have verified empirically it is unlikely to find such situation our algorithm leads straight downhill to an atypically high-density region.*\\n\\nFirst, our algorithm is essentially optimising the convergence speed of ergodic Markov chains. It is well known that ergodic Markov/MCMC chains converge *monotonically* towards the target distribution and the target distribution *only*. This is the most known result in MCMC literature and the foundation of all MCMC methods. Given an initial approximation distribution to the target that is not in the high density area as the optimisation objective constraint Eq. 5, it is simply impossible for the ergodic Markov chain marginal distribution p_T to converge to a delta/nearly delta distribution in the high density area of the target distribution.\\n\\nSecond, hypothetically, it is possible to construct such *seemingly* situations where the algorithm could fail as pointed out the comment above, it is clearly not easy to find such simulations. We have verified our algorithm converges to most common benchmark target distributions, like highly correlated Gaussian in Section 3.3. In our experiment session, the valid convergence of our algorithm is verified on 6 2-d benchmarks without single failure and our algorithm significantly improves the training speed of deep convolutional generative models on MNIST.\"}",
"{\"title\": \"Request for Further Clarification on Differentiating Discontinuous Functions\", \"comment\": \"I appreciate the opinion of the program chair and the reviewers on differentiating discontinuous function, but I have the following questions if would be great if the program chair and reviewers could answer.\\n\\nFirst, differentiating through discontinuous MH step is nothing new! It has been used in previous works on gradient based adaptive/auto-tuning MCMC methods, like one of our baseline method, Generalised HMC (GHMC). Generalised HMC was published in the paper \\\"Generalizing Hamiltonian Monte Carlo with Neural Networks\\\" from Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein, *accepted by ICLR 2018*. GHMC uses gradient of loss function based on HMC samples to tune HMC parameters, which requires differentiation through HMC samples involves discontinuous MH accept-reject step. The question about discontinuity of MH accept-reject in the gradient computation in GHMC appeared as https://openreview.net/forum?id=B1n8LexRZ¬eId=ryeffj94z\\n\\nConsidering existing literature like Levy et al. accepted by ICLR 2018, may I ask what the program chair want to imply by the claim \\\"This (differentiate through a discontinuous function) is a big part of why adaptive HMC is hard.\\\"? It is reasonable to assume the program chair of ICLR should be aware of the paper of Levy et al. accepted by ICLR 2018. Then, I cannot help to ask why *our paper is rejected on the ground of no sense to differentiate discontinuous MH accept-reject step*, but the exact same differentiation is used in the GHMC of Levy et al. was accepted by ICLR 2018?\\n\\nNow let's see the sense of differentiating discontinuous function with random input variables in a formal mathematical point of view.\\n\\nFirst, indicator function (the source of discontinuity in the MH step) is equivalent to the heaviside step function, which is differentiable as stated on the wikipedia page https://en.wikipedia.org/wiki/Heaviside_step_function\\n\\nAs mentioned on the wikipedia page, \\\"This (Heaviside step function) was originally developed in operational calculus for the solution of differential equations, ..\\\". So, I can't help to ask why the program chair claim differentiating the indicator function makes no sense, given the fact it is a solution of differential equations in well known literature.\\n\\nIf the words from wikipedia are not convincing/precise enough, I would like to point the program chair to the concept of \\\"almost everywhere\\\". (See the technical explanation from https://en.wikipedia.org/wiki/Almost_everywhere or any textbook on measure theory.)\\nYou can search \\\"differentiable almost everywhere\\\" for the criteria of functions that is differentiable almost everywhere on the wikipedia page. 
It is clear that indicator function is differentiable almost everywhere.\\n\\nThe authors of \\\"Generalizing Hamiltonian Monte Carlo with Neural Networks\\\" also mentioned the validity of their differentiation through MH step due to *differentiable almost everywhere* in the discussion of ICLR 2018 see the link https://openreview.net/forum?id=B1n8LexRZ¬eId=ryeffj94z\\n\\nFinally, let's look into the mathematical nitty-gritty of how *differentiable almost everywhere* gives the validity of differentiating through the discontinuous MH accept-reject step (I have discussed about this in my rebuttal to Reviewer 1, I assume the program chair ignored this):\\nIndicator function is not continuous but differentiable *almost everywhere*.\\nLet I(x) be an indicator function in Equation 8, that is I(x) = 1 if p_MH(x) > u otherwise I(x) = 0. \\nThen, we can try to differentiate I(x) w.r.t. x as following three cases: \\nCase 1 p_MH(x) > u: I'(x) = \\\\partial_x 1 = 0; (the derivative of constant 1 w.r.t. *any* variable is 0)\\nCase 2 p_MH(x) < u: I'(x) = \\\\partial_x 0 = 0; (the derivative of constant 0 w.r.t. *any* variable is 0)\\nCase 3 p_MH(x) = u: I'(x) is not defined. (I(x) is not differentiable in this case)\\nTherefore, the derivative of g_{\\\\phi}(x) is well defined a.e. except the case p_MH(x) = u.\\nHowever, the probability of Case 3 where p_MH(x) happens to be *exactly* equal to u is 0, because u is a uniform *continuous* variable between 0 and 1. This explains why g_{\\\\phi} in Equation 8 is differentiable in practice.\"}",
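To make the three-case analysis above concrete, here is a minimal, hypothetical PyTorch sketch (not code from the paper under discussion) of an MH accept-reject step that autograd differentiates almost everywhere; the toy Gaussian target and the proposal parameter theta are our own illustrative choices:

```python
import torch

def mh_step(x, x_prop, log_pi):
    """One Metropolis-Hastings accept-reject step (symmetric proposal assumed),
    written so that autograd differentiates it almost everywhere."""
    # Acceptance probability p_MH = min(1, pi(x_prop) / pi(x)).
    p_mh = torch.exp(log_pi(x_prop) - log_pi(x)).clamp(max=1.0)
    u = torch.rand_like(p_mh)            # continuous uniform variable
    accept = (p_mh > u).float()          # the indicator I(x); its gradient is 0 a.e.
    # g = I * x_prop + (1 - I) * x: gradients flow through the selected branch,
    # while the indicator itself contributes zero gradient (Cases 1 and 2 above);
    # the event p_MH(x) == u has probability zero (Case 3).
    return accept * x_prop + (1.0 - accept) * x

# Toy check: the gradient w.r.t. a proposal parameter is well defined.
theta = torch.tensor(0.5, requires_grad=True)
x = torch.zeros(1000)
x_prop = x + theta * torch.randn(1000)   # proposal scale depends on theta
log_pi = lambda z: -0.5 * z ** 2         # standard Gaussian target (unnormalised)
loss = mh_step(x, x_prop, log_pi).pow(2).mean()
loss.backward()
print(theta.grad)                        # finite, despite the discontinuous indicator
```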
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a way of adapting an HMC-based posterior inference algorithm. It's based on two approximations: replacing the entropy of the final state with the entropy of the initial state, and differentiating through the MH acceptance step. Experiments show it is able to sample from some toy distributions and achieves slightly higher log-likelihood on binarized MNIST than competing approaches.\\n\\nThe paper is well-written, and the experiments seem pretty reasonable.\\n\\nI don't find the motivations for the aforementioned approximations very convincing. It's claimed that encouraging entropy of P_0 has a similar effect to encouraging entropy of P_T, but it seems easy to come up with situations where the algorithm could \\\"cheat\\\" by finding a high-entropy P_0 which leads straight downhill to an atypically high-density region. Similarly, there was some reviewer discussion about whether it's OK to differentiate through the indicator function; while we differentiate through nondifferentiable functions all the time, it makes no sense to differentiate through a discontinuous function. (This is a big part of why adaptive HMC is hard.)\\n\\nThis paper has some promising ideas, but overall the reviewers and I don't think this is quite ready.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your valuable feedback. We would like to address concerns and questions in the review as following:\\n\\n1)\\\"First, for Equation 4, the explanation behind \\\"replacing\\\" H(P_{T}) with ELBO w.r.t. P_{0} is confusing...\\\": \\n\\nYes, this statement could be confusing, we will rephrase it in a better word if the paper is accepted. However, it is important to clarify that we explained in the paper that including the ELBO w.r.t. P_{0} in ergodic objective (Equation 4) is motivated by the similar effect as including H(P_{T}) in the ELBO of P_{T}, that is stopping P_T collapsing to the mode of \\\\pi. This is briefly explained in the paper \\\" we instead replace this\\nterm with ...This also prevents PT from collapsing to the mode ... and prevent PT from collapsing to a delta.\\\" on Page 4. \\n\\nIt is very important to know the *fact* that, if the ELBO of P_T is finite, H(P_T) cannot be minus infinity (which is equivalent to P_T collapsing to delta at the model of \\\\pi). As mentioned in the paper, maximising ELBO of P_0 by guarantees maximising the ELBO of P_T. Therefore, maximising ELBO of P_0 guarantees that the ELBO of P_T is finite, that is H(P_T) > ELBO(P_0) - E_{P_T}[\\\\log \\\\pi] > -\\\\infty.\\n\\n\\n2) \\\"nothing is stated about the reason on why P_{T} gets closer to \\\\pi when Equation (5) is satisfied.\\\": \\n\\nThe convergence of MCMC chain is the *reason* to why P_{T} gets closer to \\\\pi *no matter* Equation (5) is satisfied or not. This is well known in MCMC literature [1, 2]. In particular, we mentioned the two important literature [1] and [2] in the submitted paper to clarify this. We have explicitly clarified this in the revised version.\\n\\nThe reason to including Equation (5) is to avoid a pathological case in optimising the ergodic objective (Equation 4). The pathology has been explained in our paper, see the paragraph: \\\"The constraint in Equation 5 is necessary to eliminate the following pathology...\\\". To clarify this important pathology further, we dedicated Section 3.3 to demonstrate the pathological case in a correlated 2-d Gaussian example when Equation 4 is optimised without Equation 5 and showed this pathological case can be fixed by including Equation 5. This is illustrated as Figure 1 and 2.\\n\\n[1] Iain Murray and Ruslan Salakhutdinov. Notes on the KL-divergence between a Markov chain and its\\nequilibrium distribution. preprint, 2008.\\n\\n[2] Christian P. Robert and George Casella. Monte Carlo Statistical Methods (Springer Texts in Statistics).\\nSpringer-Verlag New York, Inc., Secaucus, NJ, USA, 2005. ISBN 0387212396.\\n\\n3)\\\"I was unable to understand why the algorithm is named \\\"ergodic\\\" inference. Both HVI and the proposed EI rely on the ergodic property of Markov chain for improving the variational distribution. I hope the authors could better illustrate on this point. I also think the term \\\"ergodic approximation\\\" in page 3. is hard to understand.\\\":\\n\\nThe ergodicity is about convergence of MCMC chain to target distributions *as the length of the chain increases*. Ergodic inference is more than just using a few MCMC steps to reduce bias, but EI provides to *accelerate* the convergence with finite number of MCMC steps. 
This is why we demonstrate optimising the proposed Ergodic Objective can significantly improve the decay of bias in log likelihood and maximum mean discrepancy (MMD) score in Figure 5.\\n\\nWe explained the key difference in the loss function between EI and HVI in the paragraph \\\"It is interesting to compare the EMLBO with the objective function optimised by Salimans et al. (2015),...\\\". It is important to mention that HVI and HVAE only use 1 HMC step to reduce some bias in their experiments. In contrast, EI work better in general with multiple MCMC steps. To demonstrate the advantage of Ergodic Objective over the ELBO used in HVI, in Section 4.1 we showed that EI outperforms HVI in sample bias in all 2-d benchmarks, even with exactly the same setting of HMC chains with multiple HMC steps.\"}",
"{\"title\": \"Part 2\", \"comment\": \"Following Part 1....\\n\\n3) \\\"...P_T will decorrelate with P_0. How to prevent P_T from collapsing to a delta function? Also intuitively, there should be a weight balancing the two terms of the loss; why a weight of 1 is used?\\\":\", \"there_are_two_important_details_prevent_p_t_from_collapsing_to_a_delta\": \"1) The convergence of MCMC chain: Well-known in literature like [1] and [2] as mentioned in the paper, the total variation (Theorem 6.53 in [2]) and the KL divergence [1] of P_T to the target distribution \\\\pi decrease *monotonically* after every MCMC transition. Therefore, as long as the ELBO of P_0 is finite, it is guaranteed that the ELBO of P_T is also finite. Therefore, P_T is *impossible* to be a delta distribution (the KL between any non-delta target and a delta distribution is negative infinity), because it is contradictory with the monotonic decay of the KL proved in [2] under the assumption of P_0 with finite ELBO.\\n\\n2) Maximising the ELBO of P_0 in Ergodic Objective (Equation 4): maximising the ELBO of P_0 prevents P_0 to be a delta. Any P_0 that is a delta distribution has the ELBO of P_0 equal to negative infinity. Therefore, maximising the ELBO of P_0 guarantees that our assumption of P_0 with finite ELBO is true.\\n\\nCombine the two details together, it is not possible under *any circumstances (unless the target is a delta)* for P_T to be a delta distribution.\\n\\nThere can be a weight balancing in the loss, but it is most likely to be not useful: \\n1) with sufficient MCMC steps, the first term has strong dependency with MCMC parameters but very weak dependency with the P_0 parameters\\n2) In contrast, the second term only depends on the parameters of P_0 but not MCMC parameters at all. \\n\\nBecause these two ergodic objective terms depend on two independent sets of parameters separately, the weight balancing strategy does not have much impact in practice. We have verified this by experiments and we are happy to add this in the paper if the paper is accepted.\\n\\n[1] Iain Murray and Ruslan Salakhutdinov. Notes on the KL-divergence between a Markov chain and its\\nequilibrium distribution. preprint, 2008.\\n\\n[2] Christian P. Robert and George Casella. Monte Carlo Statistical Methods (Springer Texts in Statistics).\\nSpringer-Verlag New York, Inc., Secaucus, NJ, USA, 2005. ISBN 0387212396.\\n\\n4) \\\"In equation 8, the function g_{phi} is not continuous because of the indicator function 1()\\\": \\n\\n\\\"Continuous\\\" and \\\"differentiable\\\" are not the same concept. Indicator function is not continuous but differentiable *almost everywhere* (a.e.).\\nLet I(x) be an indicator function in Equation 8, that is I(x) = 1 if p_MH(x) > u otherwise I(x) = 0. \\nThen, we can try to differentiate I(x) w.r.t. x as following three cases: \\nif p_MH(x) > u: I'(x) = \\\\partial_x 1 = 0; (the derivative of constant 1 w.r.t. *any* variable is 0)\\nif p_MH(x) < u: I'(x) = \\\\partial_x 0 = 0; (the derivative of constant 0 w.r.t. *any* variable is 0)\\nif p_MH(x) = u: I'(x) is not defined. (I(x) is not differentiable in this case)\\nTherefore, the derivative of g_{\\\\phi}(x) is well defined a.e. except the case p_MH(x) = u.\\nHowever, the probability of the case p_MH(x) happens to be *exactly* equal to u is 0, because u is a uniform *continuous* variable between 0 and 1. 
This explains why g_{\\\\phi} in Equation 8 is differentiable in practice.\\n\\n5) \\\"how to choose the hyperparameter h?\\\": \\n\\nAs motivated in Section 3.1, the hyperparameter h is introduced to avoid pathological cases in optimising MCMC parameter in the case variational approximation P_0 of \\\\pi significantly underestimates the target entropy. This pathology is demonstrated in toy example of correlated Gaussian in Section 3.3. Therefore, the entropy hyperparameter h can be chosen by grid search or any other standard hyperparameter tuning strategy and relatively high value of h should be preferred.\\n\\nAs Reviewer 3 mentioned \\\"as the entropy of the prior, as in practice prior and posterior might be different dramatically.\\\" This is true. But, priors p(x) are mostly likely to have higher entropy than the posteriors p(x | y) due to the likelihood terms p(y | x). Therefore, the entropy of priors is likely to be a good heuristic choice of h.\"}",
"{\"title\": \"Rebuttal Part 1\", \"comment\": \"Thanks for your valuable feedback. We would like to address concerns and questions in the review as following:\\n\\n1) Notation of \\\\pi and \\\\pi^*: The definition of \\\\pi and \\\\pi^* are explicitly stated in the Section 2.1 \\\"MONTE CARLO STATISTICAL INFERENCE\\\". \\\\pi denotes the target distribution and \\\\pi^* is the unnormalised density function of \\\\pi (as mentioned in line 4 in Section 2.1). In the case of Bayesian inference for sampling posterior p(x | y), the target distribution \\\\pi is p(x | y) = p(x, y) / p(y), where p(y) is the normalising constant given observed y, then the unnormalised target density \\\\pi^* is p(x, y) = p(x)p(y | x).\\n\\nThe marginal distribution of the last state of the MCMC chain is denoted by p_T, which is explicitly defined in Equation 3.\\n\\n2) Advantages over other combined MCMC and VI: \\n*How to optimising MCMC parameters is the main contribution of our ergodic inference (EI) method.*\\n\\nWe compared EI with two relevant hybrid methods of MCMC and VI that *also optimise MCMC parameters*, Hamiltonian variational Inference (HVI)[2] and Hamiltonian Variational Autoencoder (HVAE)[3]. Our experiment results show the advantage of EI over HVI and HVAE in the both the training efficiency (test log likelihood under the same training time) and the best test log likelihood (See the Table 4 on page 9).\\n\\nWe did not consider other works on MCMC and VI combination, like \\\"A Contrastive Divergence for Combining Variational Inference and MCMC\\\" [1] as the reviewer mentioned, because *the MCMC parameters are not tuned in these methods*. To verify that the MCMC parameters is not optimised in [1], one can simply check on the parameter of variational approximation defined (Equation 3), which does not include any MCMC parameters, and the loss function (Equation 7) in [1]. *For this reason, other works like [1] are less relevant to our EI compared to [2] and [3].*\\n\\n[1] Francisco J. R. Ruiz and Michalis K. Titsias., 2019.A Contrastive Divergence for Combining Variational Inference and MCMC. International Conference on Machine Learning (ICML).\\n\\n[2] Salimans, T., Kingma, D.P. and Welling, M., 2015. MCMC and Variational Inference: Bridging the Gap.\\n\\n[3] Caterini, A.L., Doucet, A. and Sejdinovic, D., 2018. Hamiltonian variational auto-encoder. In Advances in Neural Information Processing Systems (pp. 8167-8177).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents a new hybrid method to unify MCMC and VI. The key idea is to interpret a \\ufb01nite-length MCMC/HMC chain as a parametric procedure, whose parameters can be optimized via a VI-motivated objective. Specifically, the authors propose to modify the well-known ELBO (which is now non-trivial due to the intractable entropy) to form a new constrained and tractable objective. The presented techniques are tested on synthetic datasets and with the experiments of a VAE on MNIST.\\n\\nThe presented technique is interesting. However, there are several concerns of mine that should be addressed, as detailed below.\\n\\nThe notations of \\\\pi and \\\\pi^* are very confusing. I guess \\\\pi represents the marginal distribution of the last state of the MCMC chain, while \\\\pi^* is the target distribution. Is that right? Please clarify their meanings. \\n\\nThere are related works that combine MCMC and VI, such as [1]. What are the advantages of the proposed method compared to that method? \\n[1] Francisco J. R. Ruiz and Michalis K. Titsias. A Contrastive Divergence for Combining Variational Inference and MCMC. International Conference on Machine Learning (ICML). 2019.\\n\\nIn equation 4, given fixed P_0 and a long enough MCMC chain, P_T will decorrelate with P_0. How to prevent P_T from collapsing to a delta function? Also intuitively, there should be a weight balancing the two terms of the loss; why a weight of 1 is used?\\n\\nIn equation 8, the function g_{phi} is not continuous because of the indicator function 1(). How do you back-propagate through that function? In the paragraph before Section 3.3, how would you defend the adopted stop-gradient trick?\\n\\nIn the paragraph before Figure 1, how to choose the hyperparameter h? It might not be suitable to set h as the entropy of the prior, as in practice prior and posterior might be different dramatically.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The presented method is very useful to deep learning in the era of uncertainty modelling, which requires the use of Bayesian inference arguments. It's a valuable improvement upon variational inference, it's novel, and the derivations are correct. The presentation is elaborate and covers all expected aspects. The literature review is up to date.\\nThe experimental results are diverse enough and convincing. The authors have considered both proof of concept experiments and deep learning architectures. The comparisons are valid.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new combination of Markov chain Monte Carlo (MCMC) and variational inference (VI) for improving approximate inference. The main contribution is the optimization objective that allows improving the quality of samples obtained from the combination of VI and MCMC. Specifically, the authors minimize the \\\"approximate\\\" version of the Kullback-Leibler (KL) divergence between the distribution of MCMC + VI and the true distribution. The authors validate the effectiveness of their formulation through experiments on 6 synthetic benchmarks and generative modeling of MNIST (experiments on Bayesian neural networks are also provided in the appendix).\\n\\nOverall, I think the paper provides a solid contribution towards combining MCMC and VI by proposing a way to optimize the MCMC part. The experiments validate the method by showing consistent improvement over existing methods. However, I believe the justification behind the proposed formulation, i.e., Equation (4) and (5), needs to be improved before being published at the conference.\\n\\nFirst, for Equation (4), the explanation behind \\\"replacing\\\" H(P_{T}) with ELBO w.r.t. P_{0} is confusing. Specifically, it is reasoned that ELBO w.r.t. P_{t} only increase after MCMC steps. This statement is misleading since the replacement was done for H(P_{T}), not the ELBO w.r.t. P_{T}. \\n\\nI also think the Equation (5) is not properly justified. it is stated that the constraint is needed for preventing P_{T} to be closer to P_{0}. However, nothing is stated about the reason on why P_{T} gets closer to \\\\pi when Equation (5) is satisfied. Note that even if the expected log-likelihood of the distribution is high, it does not necessarily mean that the distribution is more similar.\", \"minor_comments\": [\"I was unable to understand why the algorithm is named \\\"ergodic\\\" inference. Both HVI and the proposed EI rely on the ergodic property of Markov chain for improving the variational distribution. I hope the authors could better illustrate on this point. I also think the term \\\"ergodic approximation\\\" in page 3. is hard to understand.\", \"I (weakly) suggest changing y-axis of Figure 5. to log-scale for better readability. It almost seems that the brown plot does not converge in Fig 5-(a).\", \"The paper could have been strengthened by performing experiments on more challenging datasets, e.g., CIFAR-10 or CIFAR-100.\"]}"
]
} |
r1l-VeSKwS | SemanticAdv: Generating Adversarial Examples via Attribute-Conditional Image Editing | [
"Haonan Qiu",
"Chaowei Xiao",
"Lei Yang",
"Xinchen Yan",
"HongLak Lee",
"Bo Li"
] | Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples which are manipulated instances targeting to mislead DNNs to make incorrect predictions. Currently, most such adversarial examples try to guarantee “subtle perturbation" by limiting the Lp norm of the perturbation. In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate “unrestricted adversarial examples". Such semantic based perturbation is more practical compared with the Lp bounded perturbation. In particular, we propose an algorithm SemanticAdv which leverages disentangled semantic factors to generate adversarial perturbation by altering controlled semantic attributes to fool the learner towards various “adversarial" targets. We conduct extensive experiments to show that the semantic based adversarial examples can not only fool different learning tasks such as face verification and landmark detection, but also achieve high targeted attack success rate against real-world black-box services such as Azure face verification service based on transferability. To further demonstrate the applicability of SemanticAdv beyond face recognition domain, we also generate semantic perturbations on street-view images. Such adversarial examples with controlled semantic manipulation can shed light on further understanding about vulnerabilities of DNNs as well as potential defensive approaches. | [
"adversarial examples",
"semantic attack"
] | Reject | https://openreview.net/pdf?id=r1l-VeSKwS | https://openreview.net/forum?id=r1l-VeSKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ctV0cFEYcE",
"rJxyQVhiiB",
"rkg5HFI5sS",
"rkxZTuIcoH",
"Skx6qOI9iH",
"rke1SHLqiB",
"ryx3grU9iS",
"rJlh8NU5sB",
"S1e7sXjM5H",
"Hke-61rAFH",
"H1lWBP4oFH",
"SkxDlf-ttr",
"rkxCcRlYtH",
"SJeZgu5juS",
"ByeBgbotOr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798743969,
1573794839472,
1573706049772,
1573705913400,
1573705876731,
1573705015038,
1573704947908,
1573704788355,
1572152218681,
1571864504609,
1571665720735,
1571521007248,
1571520150320,
1570641896727,
1570513132910
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"~Zhenhua_Chen1"
],
[
"ICLR.cc/2020/Conference/Paper2237/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2237/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2237/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2237/Authors"
],
[
"~Anthony_Wittmer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"I had a little bit of difficulty with my recommendation here, but in the end I don't feel confident in recommending this paper for acceptance, with my concerns largely boiling down to the lack of clear description of the overall motivation.\\n\\nStandard adversarial attacks are meant to be *imperceptible* changes that do not change the underlying semantics of the input to the human eye. In other words, the goal of the current work, generating \\\"semantically meaningful\\\" perturbations goes against the standard definition of adversarial attacks. This left me with two questions:\\n\\n1. Under the definition of semantic adversarial attacks, what is to prevent someone from swapping out the current image with an entirely different image? From what I saw in the evaluation measures utilized in the paper, such a method would be judged as having performed a successful attack, and given no constraints there is nothing stopping this.\\n\\n2. In what situation would such an attack method would be practically useful?\\n\\nEven the reviewers who reviewed the paper favorably were not able to provide answers to these questions, and I was not able to resolve this from my reading of the paper as well. I do understand that there is a challenge on this by Google. In my opinion, even this contest is somewhat ill-defined, but it also features extensive human evaluation to evaluate the validity of the perturbations, which is not featured in the experimental evaluation here.\\n\\nWhile I think this work is potentially interesting, it seems that there are too many open questions that are not resolved yet to recommend acceptance at this time, but I would encourage the authors to tighten up the argumentation/evaluation in this regard and revise the paper to be better accordingly!\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Responses and Revisions\", \"comment\": \"We thank all reviewers for their valuable comments and suggestions. We appreciate the reviewers recognizing our work interesting (R1, R2, R3), technically sound with concrete experiment results (R2, R3), broadening the study of adversarial examples and encouraging a good deal of follow-up research (R3). Based on the reviewers\\u2019 suggestions, we have made the following changes in our revision. \\n 1. Adding StawnMan baseline proposed by R3 in Table H and I. \\n 2. Selecting additional different layer\\u2019s feature map for interpolation and evaluating the results. (Table F)\\n 3. Changing the notations of the equations in Section 3.\\n 4. Fixing some typos.\"}",
"{\"title\": \"Continue #2\", \"comment\": \"Q5. Notations.\", \"q5a\": \"The difference between $x^{tgt}$ and $x^{adv}$, or between $x^{new}$ and $x^{*}$.\", \"a5a\": \"We consider the adversarial attack in the targeted setting, where $x$ is the original image and $x^{tgt}$ is the image with the target label we aim at misclassifying $x^{adv}$ to.\\n$x^{new}$ is the intermediate image we produced by the attribute-conditional image editing from the original $x$ (without adversarial attack).\\n$x^{*}$ represents the intermediate image in the optimization step (e.g., $x^{*}$ equals to $x^{adv}$ at the end of optimization).\", \"q5b\": \"Equation 3.\", \"a5b\": \"Thanks for the suggestion. We will improve the equation 3 denoting with optimization variable alpha.\", \"q5c\": \"$M(x^{tgt}) = y^{tgt}$\", \"a5c\": \"Thanks for pointing this out and we realize it causes confusion and will remove this notation in the revision. Basically, this notation is used to guarantee the unperturbed instances evaluated by our algorithms can be predicted correctly by $M$ otherwise it would be a challenge to distinguish the source of the error of the generated instances.\", \"q5d\": \"Position of y.\", \"a5d\": \"Thanks for pointing it out! It is a typo, it should be $y^{*}$. We will fix it in the revision.\", \"q5e\": \"Missing argument of $L_{smooth}$.\", \"a5e\": \"We admit that this is an abbreviated form of $L_{smooth}(\\\\alpha)$, which has been defined in Equation (3). We will update this in our revision.\\n\\nReferences\\n[a1] \\u201cSpatially transformed adversarial examples.\\u201d Xiao et al. In ICLR 2018.\\n[a2] \\u201cTesting robustness against unforeseen adversaries.\\u201d Kang et al. arXiv preprint arXiv:1908.08016.\\n[a3] \\u201cWasserstein Adversarial Examples via Projected Sinkhorn Iterations.\\u201d Wong et al. In ICML 2019\\n[a4] \\u201cUnrestricted adversarial examples.\\u201d Brown et al. arXiv preprint arXiv:1809.08352.\"}",
"{\"title\": \"Continue #1\", \"comment\": \"Q2. Is the argument that this is a more powerful attack surface, so adversaries should take note (and defenders should figure out how to defend against this)?\", \"a2\": \"Thanks for the interesting question. There are several definitions for \\u201cmore powerful attack\\u201d. For instance, in terms of the magnitude of perturbation, SemanticAdv is more powerful as it is able to tolerate a larger perturbation. In terms of attack success rate, both SemanticAdv and other $L_p$ norm based pixel level attacks can achieve almost 100% attack success rate. Therefore, we believe semantic based adversarial examples are important mainly because it explores different properties that traditional $L_p$ based ones have missed and provide diverse adversarial attacks.\\n\\nQ3. Capture a more realistic part of the data distribution over all natural images.\", \"a3\": \"Thanks for the very interesting point. At this point, we can only show that such semantic based attacks do provide diverse adversarial examples in addition to existing ones, but whether it actually captures more realistic part of the data distribution is challenging to verify and we will definitely explore it as the future work by proposing different evaluation process and metrics for the benign and adversarial data distributions.\\n\\nBased on the reviewer\\u2019s suggestion, we conduct the StawnMan baseline. We generate adversarial examples by using StawnMan baseline. It shows 100% attack success rate under the white-box setting. We further evaluate its performance of query-free black-box API attacks and transferability. The results are shown below. We can observe there is a noticeable gap between our proposed SemanticAdv and the StawnMan baseline in terms of performance. 
This result justifies the argument that our SemanticAdv is able to produce novel adversarial examples that cannot be simply achieved by combining attribute-conditional image editing model with $L_p$ bounded perturbation.\", \"table_r3t1\": \"Transferability of StawnMan\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n| M_test / M_opt | R-101-S |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n| R-50-S | 0.035 (0.108) |\\n| R-101-S | 1.000 (1.000) |\\n| R-50-C | 0.145 (0.202) |\\n| R-101-C | 0.085 (0.236) |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n(G-FPR = $10^{-3}$, T-FPR = $10^{-3}$, SemanticAdv in blankets)\\n\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n| M_test / M_opt | R-101-S |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n| R-50-S | 0.615 (0.862) |\\n| R-101-S | 1.000 (1.000) |\\n| R-50-C | 0.570 (0.837) |\\n| R-101-C | 0.695 (0.888) |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n(G-FPR = $10^{-4}$, T-FPR = $10^{-3}$, SemanticAdv in blankets)\", \"table_r3t2\": \"Quantitative analysis on query-free black-box attack of StawMan\\n\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n| API name | Face++ | AliYun |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n| Attacker / Evaluation Metric | T-FPR = $10^{-3}$ | T-FPR = $10^{-4}$ | T-FPR = $10^{-3}$ | T-FPR = $10^{-4}$ |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n| StawnMan (G-FPR = 1e-3) | 10.71 | 4.08 | 3.00 | 0.00 |\\n| SemanticAdv (G-FPR = 1e\\u22123) | 27.32 | 9.79 | 7.50 | 2.00 |\\n| StawnMan (G-FPR = 3e-4) | 21.32 | 9.14 | 7.50 | 1.50 |\\n| SemanticAdv (G-FPR = 3e\\u22124) | 57.22 | 38.66 | 29.50 | 17.50 |\\n| StawnMan (G-FPR < 1e-4) | 27.69 | 15.38 | 10.00 | 3.00 |\\n| SemanticAdv (G-FPR < 1e\\u22124) | 64.63 | 42.69 | 35.50 | 22.17 | \\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\n\\n\\n\\n\\nQ4. It is just good to be able to generate examples that models get wrong? 
If so, why, and why is this method better than other methods?\", \"a4\": \"According to the evaluation in our paper and the addition StawnMan baseline results, it shows that SemanticAdv is not only effective to generate adversarial examples different with $L_p$ based attacks but also indeed contain unique properties (e.g. different from applying $L_p$ perturbation on the manipulated image guided by semantic attributes). One of the potential reasons for the good performance is that SemanticAdv is able to explore the stronger adversarial space which can achieve higher transferability. For more detailed motivation of the semantidAdv please refer to A1.\\n\\n\\n(To be continued.)\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We appreciate the reviewer\\u2019s precious comments and suggestions. We thank the reviewer for recognizing our work as helpful to broaden the study of adversarial examples and encourage a good deal of follow-up research. We will first provide the high-level motivation of why we need to generate adversarial examples and then answer the individual questions. We have revised the notations and equations in our updated manuscript.\\n\\nQ1. Why it is important to generate adversarial examples in the way they do?\", \"a1\": \"Thanks for the question, the reasons/motivations are described below.\\n\\nDeep Neural Networks (DNNs) have achieved great success in a variety of applications. However, various security threats are emerging with the deployment of machine learning models. Without a deep understanding of how neural networks fail under attacks, it would be concerning to apply them in security-critical systems such as face verification and autonomous driving systems. Additionally, learning systems are usually required to be immune to *reasonable variations* of the input. \\n\\nSo far, such *variations* have been focused on imperceptible perturbation added to the given inputs whose magnitude is bounded by pixel-space $L_p$-norm. Some works have discussed the limitations of only measuring and evaluating the $L_p$ bounded perturbation [a1,a3,a4]. Therefore, it is important to explore other non-$L_p$ bounded perturbation, especially semantically meaningful perturbation, and more detailed reasons are listed below.\\n\\nFirst, the semantic based perturbation is new and interesting, which contains different intrinsic properties compared with the traditional $L_p$ bounded attacks. For instance, the semantic perturbation could be very large to cover the other side of $L_p$ bounded perturbation.\\n\\nSecond, in our proposed semantic based adversarial examples, we can explicitly control the desired editing attribute (e.g. hair color), and successfully preserve the high perceptual quality of the generated images as shown in Figure 4. This would help to explore the vulnerability/sensitivity of different semantic attributes. \\n\\nThird, various methods have been proposed to defend against adversarial attacks. Adversarial training based methods are currently the most efficient. Currently most adversarial training methods are only effective against a small set of seen attacks [a1], and researchers (e.g., Kang, et. al. [a2]) have shown that generating diverse attacks can help improve adversarial training performance against unseen attacks. Therefore, we believe that our semantic adversarial examples can potentially benefit adversarial training to improve model robustness by providing diverse unseen adversarial examples. \\n\\nIn addition, partially based on the reasons above, Brown, et. al.[a4] proposed the unrestricted adversarial example challenge to encourage the community to explore the adversarial space beyond $L_p$, which would potentially benefit the adversarial learning research, and we do hope SemanticAdv can contribute as well. \\n\\n\\n(To be continued.)\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for the constructive suggestions and comments, and we have conducted additional experiments based on the comments.\\n\\nQ1. Effectiveness of using other layers?\\n\\nWe have tested another two feature maps ($f_{1}$, $f_{2}$) after the first/second up-sampling operations as shown in Table E (see Section D in our appendix) in the submitted paper; and we also conducted additional experiments on two extra feature maps ($f_{-2}$, $f_{-1}$) based on the suggestions. $f_{-2}$ indicates the first feature map after the last down-sampling operations and $f_{-1}$ represents the feature map after $f_{-2}$. The full results are shown in the revision Table E and F.\\nWe also present the results as below. The result shows that samples generated by interpolating on our selected layer ($f_0$) achieve the highest attack success rate. \\n\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\n| T-FPR(G-FPR) | $10^{\\u22123}(10^{\\u22123})$ | $3\\\\times10^{\\u22123}(3\\\\times10^{\\u22123})$ | $10^{\\u22124}(10^{\\u22124})$ |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\n| Layer(f) | $f_{-2}$ | $f_{-1}$ | $f_0$ | $f_{-2}$ | $f_{-1}$ | $f_{0}$ | $f_{-2}$ | $f_{-1}$ | $f_{0}$ |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\n| Attack Success Rate | 49.4 | 92.09 | 99.29 | 30.44 | 81.87 | 97.35 | 6.66 | 45.46 | 76.64 |\\n+\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\\u2500\\u2500\\u2500+\"}",
"{\"title\": \"Continue\", \"comment\": \"\", \"q3a\": \"Is there a way to evaluate the merits of semantic modification (beyond attack success) in addition to \\u201cdoes it look reasonable\\u201d?\", \"a3a\": \"As far as we know, user study has been widely used in the literature when it comes to the qualitative measurement of adversarial examples [a1]. Other measurement such as $L_p$ bound has been known drawbacks and limitations as discussed in [a1, a2, a3]. We admit it is non-trivial to devise perceptual metrics to measure the perceptual quality of adversarial samples in a systematic manner, and it is truly a challenging open problem in the vision and learning community.\", \"q3b\": \"The authors mention attribute-based modifications are more practical, how can this be evaluated?\", \"a3b\": \"Thanks for the question and sorry for the confusion. In our scenario, \\u201cmore practical\\u201d means it is relatively easier for someone to realize semantic attributes in practice than perform $L_p$ based perturbation. For instance, one can realize the semantic attribute editing to the faceID system by wearing a pair of glasses or have the hair dyed with a different color.\", \"q3c\": \"If attribute-based attacks are better, is there a cost to this?\", \"a3c\": \"Thanks for the interesting question! The extra costs of SemanticAdv are from two sources: (1) we need the corresponding attribute annotation for each image or a pre-trained attribute classifier to predict the attribute labels; and (2) we need to train a generative model to conduct attribute-conditional image editing.\\nThese two problems happen to be popular research topics in the vision and learning community with tremendous progress in the past few years, which we can leverage.\", \"q3d\": \"How easy is it to make attribute-based attacks compared to low-level ones?\", \"a3d\": \"First, we observe that generating the attribute-based attacks is as efficient as the low-level ones. We conduct additional experiments to evaluate the running time. The detailed setting can be found in Section A. It takes on average 0.30s for CW to generate a single adversarial example on single GTX 1080Ti while the running time is 0.32s for our SemanticAdv. Besides the efficiency, we believe the model optimization of SemanticAdv is as easy as CW attack, except that SemanticAdv requires a pre-trained attribute-conditional image generation model available (see Q3c).\", \"q4\": \"Impact of \\u201cselecting successfully attacked example\\u201d to evaluate the transferability.\", \"a4\": \"This is the standard setting in the literature [a1] when it comes to attack transferability evaluation. We will make this clear in the revision.\", \"q5\": \"The advantage of the proposed method.\", \"a5\": \"Thanks for pointing this out. Our proposed method has three major advantages:\\n(1) SemanticAdv helps identify specific semantic-based adversarial examples for a machine learning model (e.g., face verification network, scene segmentation network) to further explore corner cases in the representation; (2) as the reviewer points out, such semantic-based attacks can enlarge the diversity of seen adversarial examples and therefore help improve model robustness by training with them against unseen ones as discussed in [a4]; and (3) analyzing the defense effectiveness with SemanticAdv by modifying different attributes could help better understand the model vulnerabilities from semantic perspective. 
\\n\\nReferences\\n[a1] \\u201cSpatially transformed adversarial examples.\\u201d Xiao et al, In ICLR 2018.\\n[a2] \\u201cWasserstein Adversarial Examples via Projected Sinkhorn Iterations.\\u201d Wong et al. In ICML 2019\\n[a3] \\u201cUnrestricted adversarial examples.\\u201d Brown et al. arXiv preprint arXiv:1809.08352 2018.\\n[a4] \\u201cTesting robustness against unforeseen adversaries.\\u201d Kang et al. arXiv preprint arXiv:1908.08016 2019.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We really appreciate the reviewer\\u2019s precious comments. Sorry for the potential confusion. We would like to answer your questions as follows and we have added them in our revision.\", \"q1\": \"assume M is an oracle-- what is the impact of this?\", \"a1\": \"Thanks for pointing this out and we will remove this notation in the revision to avoid confusion. Basically, M here is used to obtain the corresponding label related to data x, and we actually do not need to use this assumption in our experiments (we can assume the ground-truth label is given). But we see this assumption introducing the confusion and we will remove this statement by using the ground truth label y directly.\", \"q2\": \"\\u201cThe results in Table C don't look good.\\u201d\", \"a2\": \"We believe the \\u201cTable C don\\u2019t look good\\u201d refers to the results with \\u201cworst\\u201d and \\u201caverage\\u201d metrics. In Table C, the \\u201cbest\\u201d metric of SemanticAdv should be served as a fair comparison to CW, where both methods achieve 100% attack success rate. Therefore, our result is good. The detailed reasons are as follows.\\n\\nFor each victim image, our SemanticAdv generates a total of 17 adversarial images by augmenting one semantic attribute each time (e.g., we have 17 attributes to manipulate). However, CW generates a single adversarial example regardless of attributes, which can be viewed as instance-level generation. Therefore, we compare CW with our SemanticAdv on the instance-level which corresponds to the \\u201cbest\\u201d metric. \\n\\nIn addition, we report the performance using the \\u201caverage\\u201d and \\u201cworst\\u201d metric, which actually provides additional insights into the robustness of face verification models across different attributes. Combining the results from Table C in our appendix and Figure 3, we understand that the face verification models used in our experiments have different levels of robustness across attributes. For example, face verification models are more robust against local shape variations than color variations, e.g., pale skin has higher attack success rate than mouth open. We believe these discoveries will help the community further understand the properties of face verification models.\\n\\nTo summarize, our *semantic* adversarial examples not only achieves attack success rate comparable to traditional $L_p$-norm bounded CW attacks, but also enables us to investigate the model robustness under different semantic attributes. We will make the description of Table C clearer in the revised manuscript.\\n\\n\\n(To be continued.)\"}",
"{\"title\": \"Reply to \\\"generalization of semantic perturbations\\\".\", \"comment\": \"Thank you for your interest!\\n\\nIn our experiments, we observe that our proposed semantic perturbation is generalizable to some extent. Specifically, the same type of semantic perturbation can be applied to attack different samples (faces). Also, feel free to check our anonymous website where we show good examples of applying the same type of perturbations (e.g., young --> senior, regular skin --> pale skin) to synthesize adversarial face images against the verification model (see Figure 2 on the anonymous website).\", \"https\": \"//sites.google.com/view/generate-semantic-adv-example\"}",
"{\"title\": \"Interesting work\", \"comment\": \"Hi,\\n\\nThe idea of semantic manipulation is pretty interesting. I wonder how these perturbations generalize? For example in Figure 1 Right, the perturbation is supposed to make a face pale and fool a face verification system. Would the same perturbation still work for a different face?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes adversarial attacks by modifying semantic properties of the image. Rather than modifying low-level pixels, it modifies mid-level attributes. The authors show that the proposed method is effective and achieves stronger results than the pixel-level attack method (CW) in terms of attacking capability transferring to other architectures. Importantly, the authors show results on a variety of tasks, e.g. landmark detection and segmentation in addition to classification/identification. The most related work is Joshi 2019 and the authors show that the method used in that work (modification in attribute space) is inferior to modification in feature space still via attributes, as the authors proposed. However, I have a few comments and concerns:\\n1) The authors mention on page 3 they assume M is an oracle-- what is the impact of this?\\n2) The results in Table C don't look good-- the proposed method can *at best* (in a generous setup) equal the results of CW-- maybe I missed something but more discussion would be helpful.\\n3) Is there a way to evaluate the merits of semantic modification (beyond attack success) in addition to \\\"does it look reasonable\\\"? The authors mention attribute-based modifications are more practical, how can this be evaluated? If attribute-based attacks are better, is there a cost to this? How easy is it to make attribute-based attacks compared to low-level ones?\\n4) The authors mention that for their transferrability results, they \\\"select the successfully attacked...\\\" (page 7). What is the impact of this, as opposed to selecting non-successfully attacked samples?\\n5) Re: behavior with defense methods, is the advantage of the proposed method a matter of training the defense methods in a tailored way, so they're aware of attribute-based attacks?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors describe a method for adversarially modifying a given (test) example that 1) still retains the correct label on the example, but 2) causes a model to make an incorrect prediction on it. The novelty of their proposed method is that their adversarial modifications are along a provided semantic axis (e.g., changing the color of someone's skin in a face recognition task) instead of the standard $L_p$ perturbations that the existing literature has focused on (e.g., making a very small change to each individual pixel). The adversarial examples that the authors construct, experimentally, are impressive and striking. I'd especially like to acknowledge the work that the authors put in to construct an anonymous link where they showcase results from their experiments. Thank you!\\n\\nOverall, I think that this is interesting work that can help to broaden the study of adversarial examples and make them more applicable even in non-adversarial settings (e.g., by making models more robust to the changes in semantic attributes that the authors consider). There has been quite a bit of interest in the community in adversarial examples that are not just $L_p$ perturbations, and I believe that the authors' approach will encourage a good deal of follow-up research. \\n\\nHowever, my main concern with the paper is that in my opinion, it does not sufficiently address why it is important to generate adversarial examples in the way they do. For example:\\n\\n1) Is the argument that this is a more powerful attack surface, so adversaries should take note (and defenders should figure out how to defend against this)? If that is the case, what is the attack model under which these attacks are realistic? For example, the original $L_\\\\infty$ attacks are motivated in the sense that the adversarial examples are visually imperceptible, so they might not be noticed by the end-user. What is the equivalent argument for these semantic attacks?\\n\\n2) Is the argument that these semantic attacks somehow capture a more realistic part of the data distribution over all natural images, and therefore it is good to have models that perform well on these semantic adversarial examples even if we're not concerned about an adversary (e.g., because the model might generalize better to other tasks or be more causally correct)? If that's the case, then I think this needs to be explored more. For example, what about the following straw man baseline: use a controllable semantic-attribute-based generator to generate semantically different images without any notion of an adversarial attack, and then do standard $L_p$ attacks on that generated image? How would that be better or worse than the proposed method?\\n\\n3) Or is the argument that it is just good to be able to generate examples that models get wrong? If so, why, and why is this method better than other methods?\\n\\nI think the paper would be significantly stronger if the importance and implications of their work were explicated along the above lines. 
For this reason, my current assessment is a weak reject, though I'd be open to changing this assessment.\\n\\n=== Less critical comments, no need to respond or fix right away ===\\n\\nWhile the overall concept and approach were clear, I generally found the notation and mathematical exposition difficult to follow. Please be more precise. Here is a non-exhaustive list of examples from section 3:\\n\\na) I'm not sure what the difference is between $x^\\\\text{tgt}$ and $x^\\\\text{adv}$, or between $x^\\\\text{new}$ and $x^*$. These seem to be used somewhat interchangeably?\\n\\nb) Equation 3 is the central optimization problem in the paper, and should be written out explicitly using $\\\\alpha$ as the optimization variable, instead of referring to equations 1 and 2 (in which $x^*$ doesn't even appear).\\n\\nc) I didn't understand equation 4. What does assuming $M(x^\\\\text{tgt}) = y^\\\\text{tgt}$ mean? What happens when that is not true?\\n\\nd) Equation 5: Why is $y$ on the right-hand side but not on the left?\\n\\ne) Equation 6: $L_\\\\text{smooth}$ is missing an argument.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper proposes to generate \\\"unrestricted adversarial examples\\\" via attribute-conditional image editing. Their method, SemanticAdv, leverages disentangled semantic factors and interpolates feature-map with higher freedom than attribute-space. Their adversarial optimization objectives combine both attack effectiveness and interpolation smoothness. They conduct extensive experiments for several tasks compared with CW-attack, showing broad applicability of the proposed method.\\n\\nThe paper is well written and technically sound with concrete experimental results. I'm glad to suggest accepting the paper.\\n\\nWith the help of attribute-conditional StarGAN, SemanticAdv generates adversarial examples by interpolating feature-maps conditioned on attributes. They design adversarial optimization objectives with specific attack objectives for identity verification and structured prediction tasks. They provide experiments showing the effectiveness of SemanticAdv; analysis on attributes, attack transferability, black-box attack, and robustness against defenses; as well as user study with subjective. The qualitative results also look nice and the code base is open-sourced.\\n\\nA question out of curiosity, the last conv layer in the generator is used as the feature-map. How is the attack effectiveness of using other layers?\"}",
"{\"comment\": \"Thanks for the nice reference!\\nWe mainly valued the mentioned paper based on its physical attack effectiveness, but we will definitely add this interesting discussion about it!\", \"title\": \"reply to \\\"A closely related paper\\\"\"}",
"{\"comment\": \"Great work and I really enjoy reading it.\\n\\nHowever, previous work has also studied the semantic attack to fool models. Please check out this paper [1]. For the attack of semantic attributes, to my knowledge, [1] is the first work to perform the semantic attack to fool DNNs by designing specific eyeglasses.\\n\\nIn my opinion, a discussion/comparison seems due.\\n\\n[1] Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition\", \"title\": \"A closely related paper\"}"
]
} |
SkglVlSFPS | Uncertainty-sensitive learning and planning with ensembles | [
"Piotr Miłoś",
"Łukasz Kuciński",
"Konrad Czechowski",
"Piotr Kozakowski",
"Maciej Klimek"
] | We propose a reinforcement learning framework for discrete environments in which an agent optimizes its behavior on two timescales. For the short one, it uses tree search methods to make tactical decisions. The long, strategic level is handled with an ensemble of value functions learned using $TD$-like backups. Combining these two techniques brings synergies. The planning module performs \textit{what-if} analysis, allowing the agent to avoid short-term pitfalls and boosting backups of the value function. Notably, our method performs well in environments with sparse rewards where standard $TD(1)$ backups fail. On the other hand, the value functions compensate for the inherent short-sightedness of planning. Importantly, we use ensembles to measure the epistemic uncertainty of the value functions. This serves two purposes: a) it stabilizes planning, and b) it guides exploration.
We evaluate our methods on discrete environments with sparse rewards: the Deep-sea chain environment, toy Montezuma's Revenge, and Sokoban. In all cases, we obtain a speed-up in learning and a boost to the final performance. | [
"deep reinfocement learning",
"mcts",
"ensembles",
"uncertainty"
] | Reject | https://openreview.net/pdf?id=SkglVlSFPS | https://openreview.net/forum?id=SkglVlSFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BJzN0S-PQ",
"Hyl3nCQnoH",
"HJxM0T73oS",
"SJlz4TX3jH",
"HJlP92j69H",
"H1lRg1qtcS",
"HkgcGAHAFH",
"SJlvczd5ur",
"Byea19BhDS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798743941,
1573826228419,
1573825994037,
1573825833549,
1572875406656,
1572605686009,
1571868177960,
1570566798987,
1569638885204
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2236/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2236/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2236/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2236/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2236/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2236/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2236/Authors"
],
[
"~Anthony_Wittmer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors study planning problems with sparse rewards.\\nThey propose a tree search algorithm together with an ensemble of value \\nfunctions to guide exploration in this setting. \\nThe value predictions from the ensemble are combined in a risk sensitive way, \\ntherefore biasing the search towards states with high uncertainty in value \\nprediction. \\nThe approach is applied to several grid-world environments. \\n \\nThe reviewers mostly criticized the presentation of the material, in particular \\nthat the paper provided insufficient details on the proposed \\nmethod. Furthermore, the comparison to model-free RL methods was deemed somewhat \\nlacking, as the proposed algorithm has access to the ground truth model. \\nThe authors improved the manuscript in the rebuttal. \\n \\nBased on the reviews and my own reading I think that the paper in it's current \\nform is below acceptance threshold. However, with further improved presentation \\nand baselines for the experiments, this has potential to be an important contribution.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer to AnonReviewer1\", \"comment\": \"We thank for the review.\"}",
"{\"title\": \"Answer to AnonReviewer4\", \"comment\": \"We thank the reviewer for a detailed review. We admit major deficiencies in the presentation of our work, which, we believe, improved significantly in the new uploaded version.\", \"answering_detailed_comments\": [\"the presentation of the method is rewritten. We hope that it is much clearer and addresses the reviewers concerns. In particular, we explain MCTS, mask, sampling mechanism either in Section 2 or Appendix.\", \"As to the reviewer concerns regarding 'soft-penalization'. We penalize loops on two levels, mcts planner and the episodes. This is now explained in Section 2 and Appendix A (the relevant parameters are penalty_p and penalty_e).\", \"Regarding assumptions: In our work we assume access to the prefect model (realised by the simulator), which, hopefully clearly, is now stated in the introduction. Having a model enables to avoid loops (independently of the fact how the model is obtained).\", \"We admit that access to a model is a substantial assumption, however this is somewhat orthogonal to our main focus. In the future work, we would like to address learning models. There are a number of domains, like Sokoban, in which, we believe, learning of the model is much simpler than planning. In fact our very preliminary experiments indicate that learning model of Sokoban is is indeed likely to be doable.\", \"We respectfully disagree with the statement that that having fixed ratio of \\u201csolved\\u201d and \\u201cunsolved\\u201d episodes is a strong assumption. For environments with a single (sparse) reward for completing a task (like the environments used in our experiments) it is a very natural concept. Furthermore, there is a growing body of literature concerning prioritised usage of \\u201cgood\\u201d episodes, see e.g. [1,2,3].\", \"Motivation behind particular choices of \\\\phi_a are now presented in Section 2.\", \"We admit that having ablations would be nice to have and we will prepare them for the camera ready version. We reckon that this does not diminish the value of the method itself.\", \"We like the idea of including the exploration bonus into model-free training and state it in the future work section.\", \"[1] Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. ICML 2018\", \"[2] Kaixiang Lin, Jiayu Zhou, Ranking Policy Gradient. arXiv:1906.09674, 2019\", \"[3] Yijie Guo, Jongwook Choi, Marcin Moczulski, Samy Bengio, Mohammad Norouzi, Honglak Lee, Efficient Exploration with Self-Imitation Learning via Trajectory-Conditioned Policy. arXiv:1907.10247 (2019)\"]}",
"{\"title\": \"Answer to AnonReviewer3\", \"comment\": \"We thank for the review and comments. We admit various shortcomings especially in the text clarity. We have uploaded a new overhauled version of the paper, which we hope makes our work more accessible.\", \"below_we_address_particular_concerns_of_the_reviewer\": [\"\\u201cmain algorithm 1, it really seems like more of a sketch\\u201d - the description of the method has been rewritten. We provide pseudo-code for all the components of the method. This is done in Section 2 and Appendix A.\", \"Figures descriptions, in particular Figures 1 and Figure 2, have been clarified.\", \"We assume access to the perfect model (simulator so to speak), which is now, hopefully clearly, stated in the introduction. One could argue that in some cases, like Sokoban, the model is easy to learn, what is hard is planning. Having said that, learning models and using imperfect models is an exciting research direction.\", \"As to the code, we have made some further clean-ups and improved the README file. Meanwhile, we have also been working on a completely new version of code, redesigned from scratch. We hope that it will also be of use for the community.\", \"We made our best to clearly state our contribution as well as match our claims with the corresponding evidence.\"]}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Uncertainty-Sensitive Learning and Planning with Ensembles\\n=====================================================\\n\\nThis paper investigates the use of uncertainty-aware estimates in solving planning problems (RL with access to simulator).\\nThe proposed algorithm combines a learned model-free value estimate with MCTS planning.\\nAn ensemble of neural networks is used to model posterior uncertainty in the value estimate and drive efficient exploration.\", \"there_are_several_things_to_like_about_this_paper\": [\"The paper takes on several core issues in RL/planning research, most notably the synthesis of dealing with model-based and model-free uncertainty in RL.\", \"The general flavour of the paper + algorithm seems to be reasonable. The proposal to use ensemble uncertainty estimates to drive model-based MCTS is interesting, natural, and I think it's a good one.\", \"The proposed structure of the paper is quite nice, there is mostly a linear and logical progression of complexity in the experiments. This is nice to see clear benefits of the approach on the simplest possible settings and build up from there.\", \"The effort to open source code + implementation details is laudable.\", \"However, there are several places where this paper falls short:\", \"In general, the claims and results of the paper are far too vague to be fully understood and replicated. Take the main algorithm 1, it really seems like more of a \\\"sketch\\\" of a very general family of algorithms, rather than a specific description of a clear algorithm.\", \"This vagueness is spread throughout the plots and figures as well... note that Figure 1 has no indication of how many steps have been evaluated, and Figure 2 has no indication for what value K > 0 was actually used. The clarity does not improve in Sections 3.2 and 3.3 where quite inconsistent performance metrics and presentations are presented.\", \"Generally, the writing could be tightened quite a lot. In particular I would encourage you to think about whether each statement you make is clearly supported by some theorem, experiment or plot in your paper. For example, on page 3 \\\"We found this mechanism to be beneficial... see Section 3.3\\\" but then it's not clear exactly what statement shows that particular part of the mechanism was helpful, versus other issues associated with ensemble learning. There are more than a few typos... the on(e) in Osband... akin to ??... might be obtained by choosing from (the) ensemble...\", \"It would be very helpful to clarify that the agent is given access to a simulator... so that this is not exactly the typical RL setting of sequential decision making. This should appear early in the paper.\", \"The code that is released with the paper is also quite confusing, it is not structured with a clear README and includes many sections of dead/commented code. 
I was hoping the code might rescue some of the clarity, but I think that still needs work.\", \"Overall, I do think there is some interesting material here...\", \"It's an important problem, and the core building blocks of combining model, value and uncertainty for better exploration is interesting.\", \"However, I just think the actual paper is not clear enough on the details.\", \"My belief is that going through this paper very methodically and carefully to make sure that every single detail + claim is rigorously supported would help this paper immensely.\", \"For that reason I have to say that I think it's a \\\"reject\\\" in its current form.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors propose to combine planning methods like MCTS with an ensemble of value functions to a) estimate the value of leaf nodes of the search tree and b) use the ensemble estimate of uncertainty to guide exploration during MCTS search.\\nThe MCTS rollouts are also used as optimization targets for the value function.\\n\\nI believe this is a clear reject. On the one hand, the paper needs signficiantly more work on the writing and clarity. On the other hand I have several worries on the method and evaluation side.\", \"regarding_the_presentation_of_the_paper\": \"Overall, the paper seems quite rushed. This is not a strong reason for rejection but should be improved in a future version. For example, punctuation and sentence structure is often wrong, the paper has only slighlty over 7 pages, a citation is undefined on p.7 and images and whitespace is formatted wrongly on occasion (e.g. top of page 6).\\nMore importantly, on the content side, the experimental section is sufficiently clear and well written, however, the method description needs more detail and background information. The paper relies on several prior works which are referred to but not described (E.g. MCTS , the sampling mechanism by Osband et al. which they are using but not describing, the 'mask' from Osband et al which they are using but not describing).\\nFurthermore, the algorithm itself is not described in sufficient detail:\\n- How does the 'soft-penalization' work?\\n- How exactly does the mechanism \\\"similar in fashing to\\\" Thomson sampling work?\\n- Are you learning a model or do you have access to the true transition function?\", \"regarding_the_method\": [\"I can't say anything definitive about the method as I'm not entirely clear how exactly it works. However, I have several worries that might need addressing:\", \"It seems to me that the method relies on access to the _true_ transition and reward function and not on a learned model. This is a big difference to much of the prior work they compare against. This also makes the comparison against any pure model free method like PPO much less meaningful.\", \"Similarly, manually avoiding dead-ends and loops is a very strong assumption\", \"Also, being able to distinguish and use a fixed ratio of \\\"solved\\\" and \\\"unsolved\\\" episodes is a strong assumption.\", \"The one main contribution seems to be a new way of how \\\\phi_a(x) is defined. Their particular choice needs a clearer motivation. Furthermore, if there is more contribution and differences to prior work, highlighting them more would help the reader understand the contribution.\", \"As the work makes several strong assumptions regarding the environment and access to the model, significantly more work (e.g. ablation studies) is needed to clearly show which assumption and feature of the algorithm is important for performance (and ideally also why). For example (but that's just a first idea): To understand the impact of their choice of \\\\phi vs. their planning architecture, it would be be interesting to maybe train PPO using an exploration bonus based on \\\\phi. 
This would allow disentangling the contribution of: Access to the true model, \\\"discrete-environment-tricks\\\" like penalizing dead-ends, and exploration incentivication of \\\\phi.\"], \"edit\": \"Thank you for your response and the updated manuscript, which reads considerably better.\\nI also agree with your point regarding the strength of assumption regarding \\\"solved\\\" and \\\"unsolved\\\" episodes.\\n\\nConsequently, I will raise my score to a \\\"weak reject\\\" to express that I think this is promising work.\\n\\nI do believe that ablation studies would add a lot to the paper as they would allow one to see which of the (many) added components help how much, for example between the selection function $\\\\phi$ and the various penalizations used.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an approach blending model-based, model-free methods and utilizing risk-sensitivity information in ensembles as part of the value estimation and exploration process. The exploration is based on risk- sensitivity measures such as moments and relative majority vote. There is a lot of work currently trying to marry the model free with model based approaches for integrated planning and learning as the authors have mentioned in the related work section of the paper and also called out similar methods and techniques. The authors have provided evidence via experiments in three environments and shown good results of using this blended approach. Code is also provided for others to further carry out explorations in this research area.\"}",
"{\"comment\": \"You're right. It took us somewhat longer to make a repo, which we are sure that is fully anonymous. Now it is up and running.\", \"title\": \"repo working\"}",
"{\"comment\": \"Hi,\\n \\nNo code is present in the repo of the github link. It is not fair to provide a placeholder link for code submissions (which impact the review process) and submit code taking considerable buffer time after submission deadline.\", \"title\": \"No code in provided github link even after 60 hours of submission deadline\"}"
]
} |
ByexElSYDr | Fair Resource Allocation in Federated Learning | [
"Tian Li",
"Maziar Sanjabi",
"Ahmad Beirami",
"Virginia Smith"
] | Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency. | [
"federated learning",
"fairness",
"distributed optimization"
] | Accept (Poster) | https://openreview.net/pdf?id=ByexElSYDr | https://openreview.net/forum?id=ByexElSYDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LWP2v64SVX",
"r1gqMq5KiH",
"B1xcCt5Yor",
"SygvdKqYsH",
"H1eYY_9KoH",
"HJlUqN9Kor",
"SJxeQHzEqH",
"BJg7OyyM9r",
"rJeNTLRhFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743912,
1573657105699,
1573657042345,
1573656942633,
1573656705120,
1573655693775,
1572246808178,
1572101995052,
1571772092133
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2235/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2235/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2235/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2235/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2235/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2235/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2235/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2235/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This manuscript proposes and analyzes a federated learning procedure with more uniform performance across devices, motivated as resulting in a fairer performance distribution. The resulting algorithm is tunable in terms of the fairness-performance tradeoff and is evaluated on a variety of datasets.\\n\\nThe reviewers and AC agree that the problem studied is timely and interesting, as there is limited work on fairness in federated learning. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and clarity of the conceptual and empirical results. In reviews and discussion, the reviewers noted insufficient justification of the approach and results, particularly in terms of broad empirical evaluation, and sensitivity of the results to misestimation of various constants. In the opinion of the AC, while the paper can be much improved, it seems to be technically correct, and the results are of sufficiently broad interest to consider publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the positive evaluation of our work.\\n\\n[Sensitivity of L] The reviewer is correct that it is important to derive a good estimate of L; similar to other first-order methods (including FedAvg), our algorithms (q-FedAvg and q-FedSGD) are sensitive to the choice of local learning rate, and are therefore sensitive to the estimate of the local Lipschitz constant L. However, we note that when q=0, our method does not incur any additional hyperparameter tuning cost beyond what is necessary for other first-order methods (such as FedAvg). The benefit of our approach is that when q>0, we suggest using the Lipschitz constant estimated at q=0 to easily derive an appropriate learning rate for q>0. This removes the need for tuning the learning rate at q>0. To test the robustness of this approach, we explore the efficacy of this heuristic directly in Figure 3. In particular, we compare q-FedSGD (with an estimated L) to FedSGD using the best-tuned learning rate for the same q. As is evident from Figure 3, this simple heuristic does not result in any performance degradation compared to the best tuned step-size, which helps to motivate the use of our approach.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for their careful review of the paper. As per the reviewer\\u2019s key comment, please see the [Contributions] section in our response to all reviewers.\\n\\n[Differences with Newton\\u2019s method] Unlike Newton\\u2019s method, which uses the second-order derivative information, our algorithm is a first-order method which only relies on the gradient information. It is common to use first-order methods in large-scale optimization (particularly for applications in federated learning) as second-order methods can be costly. Please let us know if this does not answer your question.\\n\\n[q-FedAvg] Thanks for these questions regarding our method; we answer them here, and have also updated Section 3.3 in our revision to better explain our approach. \\n(1) FedAvg is applicable to objectives where each local loss function is an empirical average over the loss on all local data points. Unfortunately, for our q-FFL objective, when q>0, $F_k^{q+1}$ is not a simple empirical average of the loss of local samples due to the (q+1) exponent. We therefore propose to generalize FedAvg for non-zero q by using a more sophisticated dynamic weighted average scheme, which is explained in Alg. 1 (q-FedAvg). We note that at q=0, the dynamic weighting in q-FedAvg simplifies to simple averaging, which recovers FedAvg as a special case. \\n(2) The reviewer is correct that the communication improvements of q-FedAvg over q-FedSGD are due to the local updating scheme; we discuss this in the last paragraph of Section 3.3. The benefits of local updating approaches such as FedAvg have been shown in simpler settings, which is what motivated us to adapt this approach to our q-FFL objective.\"}",
"{\"title\": \"Response to Reviewer #3 (Part2)\", \"comment\": \"[Accuracy discrepancies with previous work] \\nWe note that the goal of our experiments is not to show superior accuracy on a certain benchmark, but rather to show that our q-FFL objective and our proposed algorithms provide a flexible tradeoff between performance (accuracy) and fairness on a range of datasets. The reviewer is correct that we have slight differences in accuracy relative to prior work, which we explain below.\\n\\n(1) For Shakespeare, we achieve 52% accuracy which is slightly lower than the numbers reported in FedAvg (54%). This is because we are using a randomly subsampled version of the dataset due to resource constraints (it takes, for example, 2 days to run and tune hyperparameters on this small dataset using 10 GPUs). It is important to note that having fewer clients tends to result in a more uniform accuracy distribution (as we showed in the previous experiments), which in fact makes it a more difficult baseline. However, we also provide results on a larger version with 260 devices to better match prior work (see table below). Similar to all of our results, q>0 leads to a more uniform accuracy distribution while maintaining similar average accuracy. Note that as we expect, the variance is larger for a larger number of devices due to the potentially increased heterogeneity.\\n\\nTable 2. Effects of q-FFL with q>0 on a larger Shakespeare dataset\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nDataset | objective | average | worst 10% | best 10% | variance\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nShakesepeare | q = 0 | 52.3 | 31.0 | 67.6 | 108\\n | q = .001 | 52.2 | 34.5 | 67.1 | 76\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\n(2) For Fashion MNIST, we follow the exact setting described in the AFL paper where only 3 classes are subsampled for evaluation using a logistic regression model. We have tried our best to accurately replicate their setup including using the same optimizer, but as the code is not open-sourced, we do not have access to AFL\\u2019s exact implementation (hyperparameters, data preprocessing, etc), and are therefore uncertain as to why this discrepancy exists. We note that we have open-sourced our own code to avoid such issues with our work.\\n\\n[q-FFL is less likely to overfit] Thanks for this question; we have updated the paper to explain this statement in more detail. Unlike static weighting in uniform sampling, q-FFL dynamically gives less importance to a device as soon as its loss reduces. Thus, it is less likely to overfit to one device at the expense of not reducing the loss for other devices.\"}",
"{\"title\": \"Response to Reviewer #3 (Part1)\", \"comment\": \"We greatly appreciate the reviewer\\u2019s detailed review and suggestions to improve the paper.\\n\\n[Estimating the Lipschitz constant and tuning q] For a discussion on how to estimate the Lipschitz constants, please see our response to all reviewers; we have also clarified this in Section 3.3 of our revision. The results for q=0 are repeated with the estimated L. Note that our heuristic does not add any out of the ordinary computational cost as the learning rate (for q=0 which corresponds to FedAvg) is typically tuned via grid search. The reviewer is correct that the flexibility of having a tunable q to allow for tradeoffs between fairness and average performance comes at a cost of additional communication rounds. However, our proposed method helps to significantly reduce the number of communication rounds by obviating the need for tuning a learning rate for additional values of q>0.\\n\\n[Experiment setup] We include three key experiments in the main paper. Figure 1 + Table 1 together demonstrate that our objective is more \\u2018fair\\u2019 compared with FedAvg (confirming our theory). Figure 2 and Table 2 further show that our objective is more fair/more flexible compared with other baselines that may also lead to fairness. Finally, Figure 3 shows the efficiency of our method. The experiments in the appendix include more minor results such as how our objective performs on the training data. We have added an overview at the beginning of the appendix to make it easier to navigate. As per your suggestion, we have also included new experiments that directly explore the impact of data heterogeneity and number of clients on q-FFL.\\n\\n[Impact of data heterogeneity and the number of clients on q-FFL] In our existing experiments, we present failure modes by showing high degrees of performance variance when using FedAvg on a number of real-world datasets, which naturally vary in terms of heterogeneity and the number of clients. However, to more directly evaluate the effect of data heterogeneity and number of devices, we have performed new experiments on synthetic data where the heterogeneity can be quantified more precisely. Below, we show the results of the test accuracy (%) distribution averaged across 5 random test-val-train partitions of each dataset. We have also updated the paper and report the results in Table 9 in Appendix F.2. We are happy to move them to the main text if needed.\\n\\nTable 1. 
Effects of data heterogeneity and number of devices on unfairness (test accuracy, %; standard deviations in parentheses).\\n\\n| Dataset | Devices | Objective | Average | Worst 10% | Best 10% | Variance |\\n| Synthetic (iid) | 100 | q = 0 | 89.2 (0.6) | 70.9 (3) | 100.0 (0) | 85 (15) |\\n| Synthetic (iid) | 100 | q = 1 | 89.0 (0.5) | 70.3 (3) | 100.0 (0) | 88 (19) |\\n| Synthetic (iid) | 50 | q = 0 | 87.1 (1.5) | 66.5 (3) | 100.0 (0) | 107 (14) |\\n| Synthetic (iid) | 50 | q = 1 | 86.8 (0.8) | 66.5 (2) | 100.0 (0) | 109 (13) |\\n| Synthetic (1,1) | 100 | q = 0 | 83.0 (0.9) | 36.8 (2) | 100.0 (0) | 452 (22) |\\n| Synthetic (1,1) | 100 | q = 1 | 82.7 (1.3) | 43.5 (5) | 100.0 (0) | 362 (58) |\\n| Synthetic (1,1) | 50 | q = 0 | 84.5 (0.3) | 43.3 (2) | 100.0 (0) | 370 (37) |\\n| Synthetic (1,1) | 50 | q = 1 | 85.1 (0.8) | 47.3 (3) | 100.0 (0) | 317 (41) |\\n| Synthetic (2,2) | 100 | q = 0 | 82.6 (1.1) | 25.5 (8) | 100.0 (0) | 618 (117) |\\n| Synthetic (2,2) | 100 | q = 1 | 82.2 (0.7) | 31.9 (6) | 100.0 (0) | 484 (79) |\\n| Synthetic (2,2) | 50 | q = 0 | 85.9 (1.0) | 36.8 (7) | 100.0 (0) | 421 (85) |\\n| Synthetic (2,2) | 50 | q = 1 | 85.9 (1.4) | 39.1 (6) | 100.0 (0) | 396 (76) |\\n\\nThe synthetic data are generated in a similar way as described in Appendix E.1. The degree of heterogeneity of the Synthetic (a, b) dataset grows as a and b get larger. We can see that, fixing the number of devices, as the data become more heterogeneous, the accuracy variance (for both q=0 and q>0) also increases. We also investigate the effects of the number of devices using the same set of synthetic datasets but with fewer devices (reduced from 100 to 50). We see that as the number of devices decreases, the accuracy distribution tends to be more uniform, with a smaller variance.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank all reviewers for their time and helpful comments. We first address shared concerns and then respond to specific comments below. We have updated the paper (with edits highlighted in red). In response to the reviewers, in our revision we have further clarified our proposed methods and included an additional experiment to directly explore the effects of heterogeneity and network size. \\n\\n[Contributions]\\nIn this paper, we propose q-FFL, a novel objective that encourages a fairer (more uniform) performance distribution in federated learning. This is the first work we are aware of to explore such an objective in the context of distributed machine learning. q-FFL, parameterized by q, enables a flexible fairness/accuracy tradeoff and generalizes prior work of FedAvg (q=0) and AFL ($q \\\\to \\\\infty$). We theoretically prove that q-FFL improves uniformity, and develop scalable methods to solve q-FFL efficiently in federated networks by dynamically combining the model updates on the central server. Our empirical evaluation on a set of real-world datasets in realistic federated scenarios demonstrates the fairness and flexibility of our objective and the efficiency of our methods.\\n\\n[Methods] We clarify two aspects of our proposed methods.\\n- [q-FedSGD and q-FedAvg] We propose two methods to solve our objective: q-FedSGD and q-FedAvg. q-FedAvg is a communication-efficient variant of q-FedSGD that allows for local updating by replacing the gradients with model updates. This modification is analogous to the difference between distributed SGD and FedAvg, and is a common strategy in large-scale optimization to reduce communication and improve convergence speed [1].\\n- [Estimating the Lipschitz constant and tuning q] We crudely estimate the local Lipschitz constant on q=0 as the inverse of the best tuned learning rate using grid search, which is a heuristic commonly used in practice for many first-order optimization methods [2, 3]. We then propose to use this estimated Lipschitz value to bypass the step-size tuning for any q>0. For a practitioner who might have to find the best fairness/accuracy trade-off from a number of objectives with varying values of q, such a shortcut can save time as there would be no need to tune the learning rate for different values of q>0. We empirically verify that this heuristic works well in practice in Section 4 (please also see our response to Reviewer #1 for additional details). \\n\\n[1] S. Stitch. Local SGD converges fast and communicates little. ICLR, 2019.\\n[2] S. Ghadimi and G. Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 2013.\\n[3] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Springer Science & Business Media, 2013.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The problem of fairness in federated learning (FL) is important given the popularity of the topic and its immediate impact on the society. Vanilla FL approaches may be subject to poor performance for clients whose data is under-represented across all participants. This paper proposes a new algorithm for federated learning to reduce variance in performance across clients. The inspiration for the algorithm comes from the problem of uniform resource allocation in wireless networks.\\n\\nWhile the problem and the motivation for the algorithm are interesting on the high level, I think this paper does not deliver the key ideas in sufficient detail and clarity.\\n\\nOn the algorithms side, I am still unclear on how the Lipschitz constant L is estimated on the first run with q=0. Are the results for q=0 in the experiments reported for this run or is it repeated with the learned L? Further, this procedure suggests that the number of communication rounds is at least doubled for the end-to-end training. Tuning q, which seems to be necessary, may require even more communication rounds.\\n\\nWhile there are a lot of experiments in the paper (across main text and supplementary), none seem to be carried out sufficiently well. Understanding the complete experimental setup for at least one of them is also quite hard due to numerous supplementary references throughout the experiments section. I would recommend to focus on fewer experiments, but present more thorough results. Below are some suggestions.\\n\\nThe importance of resource allocation in FL appears to me to be directly related to the key FL aspects such as degree of data heterogeneity and number of clients. This submission is lacking experiments comparing FedAvg to the proposed method under these settings (which can be simulated using available datasets). To argue in favor of the proposed approach it is important to demonstrate failure modes of the existing algorithms under some realistic scenarios and present a solution using new algorithm.\\n\\nAccuracies in Fashion MNIST and Shakespeare experiments seem quite poor suggesting some problems with the setup. FedAvg paper reports 54% on Shakespeare, whereas this paper reports 52%. It also appears that the number of considered \\\"devices\\\" on Shakespeare is significantly smaller than in the FedAvg paper (31 vs 1146) - what is the reason for this?\\nOn Fashion MNIST, AFL paper reports 80%+ accuracy while achieving 90%+ on the combined dataset seems relative easy based on the results mentioned on the Github repository of the dataset. This paper reports 78% for the proposed method and AFL. Why is there a discrepancy with AFL paper and what is the performance of FedAvg on this dataset (assuming some suitable CNN architecture)? 
Is there a reason to believe that this dataset is much harder for federated learning than MNIST, where FedAvg roughly matches full data training?\\n\\nThis statement is ambiguous \\\"uniform sampling is a static method and can easily overfit to devices with very few data points, whereas q-FFL has better generalization properties due to its dynamic nature.\\\" If there is a device with very few data points it is easy to overfit to it and q-FFL will essentially ignore that device since the loss on this device is very small. Why does this not lead to more severe overfitting behavior?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"[Summary]\\nThe authors propose a protocol to encourage a more fair distribution of the performance across devices in a federated setting. In contrast with previous work, which protects a specific attribute, this paper aims to achieve the uniformity of the accuracy distribution.\\n\\n[Key Comments]\\nThe paper is well-organized and clearly written. The claims are well-supported by theoretical analysis and experimental results. However, my main concern is that the paper offers an incremental improvement over the early work FedAvg (McMahan et al., 2017). It would be helpful for the authors to summarize their contributions if space permits.\\n\\n[Details]\\n[Pro 1] This paper provides insights into fairness (a more uniform accuracy distribution) in federated learning, which appears to be well-motivated.\\n\\n[Pro 2] This paper provides an instructive method to estimate the upper-bound of the Lipschitz constants for ??? the local objective function (the objective function with clients' data) ???. It is an interesting idea to choose dynamic step-size depending on the global Lipschitz constants and fairness parameter q.\\n\\n[Pro 3] The evaluation fully considers various uniformity metrics, sampling strategies, and the chosen of q.\\n\\n[Con 1] I am confused about the difference between the proposed method and Newton's method. It would be helpful for the authors to clarify the limitation of the objective function (for example, the objective function should be second-order derivable).\\n\\n[Con 2] The authors note that \\\"It is not straightforward to simply apply FedAvg to problem (2) when q>0, as the F_{k}^{q+1} term prevents the use of local SGD.\\\" I found it difficult for me to follow this argument. Is it relevant to the parameter q? Given the communication-efficiency improvement in Section 3.3, few explanations are provided for the main improvement over previous work. Is it because of the local updating? Otherwise, more details about the convergence rate will strengthen the submission.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors propose a new optimization objective for fair resource allocation. Furthermore, a new algorithm, q-FedAvg, based on the vanilla federated learning, is proposed to solve the new optimization in massive and heterogeneous networks. The paper is well written. Theoretical analysis is also provided to support the effectiveness of the proposed methods. The experiments show good performance.\\nIn overall, I think this paper solves an important problem in federated learning, and I vote for acceptance.\\nHowever, since my knowledge in fairness is very limitted, I think my review is an educated guess. If the other reviews vote for rejection, I will not champion this paper.\", \"i_have_question_to_the_authors\": \"Is the proposed algorithm robust to the estimation of the Lipschitz constant? In my opinion, the proposed algorithm highly relies on $L_q(w)$. Thus, the estimation of L will be very essential. It will be better if the authors can show some results where different estimations of L is used, and compare these results to show the sensitivity to the estimation of L.\"}"
]
} |
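To make the q-FFL update debated in the record above concrete, here is a minimal NumPy sketch of a q-FedSGD-style server round, following the description in the author response: each client's gradient is reweighted by its loss raised to the power q, and the step is normalized by an estimated Lipschitz term so that the same L works for any q. The function names, the least-squares model, and the two-client toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def client_loss_and_grad(w, X, y):
    """Least-squares loss F_k(w) and its gradient on one client's data."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

def q_fedsgd_round(w, clients, q, L):
    """One server round: client k contributes Delta_k = F_k^q * grad_k,
    and the step is normalized by h_k = q F_k^{q-1} ||grad_k||^2 + L F_k^q,
    so the same estimated L can be reused for any q."""
    deltas, hs = [], []
    for X, y in clients:
        F, g = client_loss_and_grad(w, X, y)
        deltas.append(F ** q * g)
        hs.append(q * F ** (q - 1) * np.sum(g ** 2) + L * F ** q)
    return w - sum(deltas) / sum(hs)

rng = np.random.default_rng(0)
# Two clients with very different amounts of data (a toy "unfair" setup).
clients = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (10, 200)]
w = np.zeros(5)
for _ in range(200):
    w = q_fedsgd_round(w, clients, q=1.0, L=10.0)  # L ~ 1 / tuned lr at q=0
print([round(client_loss_and_grad(w, X, y)[0], 3) for X, y in clients])
```

With q=0 this reduces to a plain distributed-SGD/FedAvg-style step; increasing q upweights the worst-off client, which is exactly the fairness/accuracy knob the reviews above question.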
SkxlElBYDS | Continual Learning via Principal Components Projection | [
"Gyuhak Kim",
"Bing Liu"
] | Continual learning in neural networks (NN) often suffers from catastrophic forgetting. That is, when learning a sequence of tasks on an NN, the learning of a new task will cause weight changes that may destroy the learned knowledge embedded in the weights for previous tasks. Without solving this problem, it is difficult to use an NN to perform continual or lifelong learning. Although researchers have attempted to solve the problem in many ways, it remains to be challenging. In this paper, we propose a new approach, called principal components projection (PCP). The idea is that in learning a new task, if we can ensure that the gradient updates will only occur in the orthogonal directions to the input vectors of the previous tasks, then the weight updates for learning the new task will not affect the previous tasks. We propose to compute the principal components of the input vectors and use them to transform the input and to project the gradient updates for learning each new task. PCP does not need to store any sampled data from previous tasks or to generate pseudo data of previous tasks and use them to help learn a new task. Empirical evaluation shows that the proposed method PCP markedly outperforms the state-of-the-art baseline methods. | [
"Neural network",
"continual learning",
"catastrophic forgetting",
"lifelong learning"
] | Reject | https://openreview.net/pdf?id=SkxlElBYDS | https://openreview.net/forum?id=SkxlElBYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"dXOs7kkw2",
"SyxXQhYAYr",
"S1gezXh3tB",
"r1lB7pU2FS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743884,
1571884059074,
1571762952489,
1571740957352
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2234/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2234/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2234/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There is no author response for this paper. The paper addresses the issue of catastrophic forgetting in continual learning. The authors build upon the idea from [Zheng,2019], namely finding gradient updates in the space perpendicular to the input vectors of the previous tasks resulting in less forgetting, and propose an improvement, namely to use principal component analysis to enable learning new tasks without restricting their solution space as in [Zheng,2019].\\nWhile the reviewers acknowledge the importance to study continual learning, they raised several concerns that were viewed by the AC as critical issues: (1) convincing experimental evaluation -- an analysis that clearly shows how and when the proposed method can solve the issue that [Zheng,2019] faces with (task similarity/dissimilarity scenario) would substantially strengthen the evaluation and would allow to assess the scope and contributions of this work; also see R3\\u2019s detailed concerns and questions on empirical evaluation, R2\\u2019s suggestion to follow the standard protocols, and R1\\u2019s suggestion to use PackNet and HAT as baselines for comparison; (2) lack of presentation clarity -- see R2\\u2019s concerns how to improve, and R1\\u2019s suggestions on how to better position the paper. \\nA general consensus among reviewers and AC suggests, in its current state the manuscript is not ready for a publication. It needs clarifications, more empirical studies and polish to achieve the desired goal.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper tries to solve the catastrophic forgetting issue in the continual learning problem. The authors propose a method based on principal components projection to tackle this issue. The authors conduct experiments on image classification tasks to show the performance of the proposed method and compare it with two other baselines EWC and OWM.\", \"strong_points\": \"1. This paper tries to solve an important problem.\\n2. The intuition of applying principal components projection is straightforward.\", \"weak_points\": \"1. The most concerned point about this paper is the experiment. It is not convincing. The authors claim that OWM is one of the strongest baselines, but actually it perform really badly on EMNIST-26 (5 tasks), EMNIST-47 (5 tasks) and EMNIST-47 (10 tasks). What is the reason? Is it because of insufficient parameter tuning? If different methods perform differently on various datasets, it is really necessary to show more baseline methods to illustrate that the proposed method has universally good performance on different datasets.\\n2. It might strengthen the paper if the authors can show the comparison results on more other datasets, e.g., other image classification tasks. It would be better if the authors can show the proposed method can generalize to other tasks.\\n3. The authors point out that one key drawback of OWM is that, if the tasks are not quite related, OWM may perform very badly. Is this the case in the experiment?\\n4. It is not clear why the proposed method can solve the issue that OWM faces with (bad accuracy when tasks are not quite related).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces Principal Components Projection, a method that computes the principal components of input vectors, using them to train on a transformed input space and to project gradient updates. Experiments show improved results over OWM (the method that this paper builds on) and EWC.\\n\\nIf I understand correctly (which I think may not be the case), the principal component vectors are computed after the first forward/backward pass of each task, for the inputs to each layer (C_l^k). These principal components are then fixed, the orthogonal projection matrix P_l^k is then found, and then normal training is iterated until convergence using this C_l^k and P_l^k.\", \"questions\": [\"Seeing as (especially for the first task), weights are initialised randomly, why does this method provide reasonable principal components for layers after the first layer?\", \"I also do not understand why the dxd projection matrix P, which is orthogonal to all previous basis matrices C, has the property span(P^i) \\\\subset span(P^j) for i < j. Surely as more basis matrices are found, then the orthogonal space restricts in size.\", \"I also do not understand Equation 1. What is \\\\grad{W}? If it is, as defined 2 pages later, 'the backpropagation with respect to X_{k+1}' [or X_k here], then is Equation 1 saying that only one gradient step is used per task?\", \"The experiments seem reasonable, except that there are no standard deviations on the results. However, as far as I'm aware, these experimental protocols (dataset and model size) are not used in other papers: it would be nice to see experiments which match previous papers' protocols, for example with MNIST and CIFAR-10 at least (other papers use smaller model sizes).\", \"As it is currently, I am unable to understand the paper despite spending some time trying to understand it. I am therefore giving the paper a weak reject. Hopefully the authors can answer my questions.\", \"Finally, some minor specific suggestions for improving the writing:\", \"Immediately after Equation 12, there is \\\\grad{P^j} instead of \\\\grad{W^j}{P^{k-2}}\", \"The paragraph before Equation 13 uses 't' instead of 'k' sometimes for task index\", \"Use ` not ' for open quotation marks\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The method proposes a method for continual learning. The method is an extension of recent work, called orthogonal weights modification (OWM) [Zheng,2019]. This method aims to find gradient updates which are perpendicular to the input vectors of previous tasks (resulting in less forgetting). However, the authors argue, that the learning of new tasks is happening in the solution space of the previous tasks, which might severely limit the ability to adapt to new tasks. The authors propose a \\u2018principal component\\u2019-based solution to this problem. The method is considering the \\u2018task continual learning\\u2019 scenario (also known as task-aware) which means that the task label is given at inference time.\", \"conclusion\": \"1. The paper is not well-positioned in related works. I think the work is more related to works with \\u2018parameter isolation methods\\u2019 such as Piggyback, Packnet, HAT. These methods reserve part of the capacity of the network for tasks. I think the authors should relate their work with these methods, and provide an argument of the problem with these previous methods, which is addressed by their approach. I can see that rather than freezing weights (PackNet) or features (HAT) , the method freezes linear combinations of features. But it is for me not directly clear that that is desirable. In HAT the backpropagated vector is projected on the mask vector which coincides with the neurons (activations). \\n\\n2. The experimental verification of the paper is too weak, and only comparison to EWC and OWM (not well known) are provided. At least a comparison with the more related works PackNet and HAT should be included. For more recent method for task-aware CL see also \\u2018Continual learning: A comparative study on how to defy forgetting in classification tasks\\u2019. Also results seem bad. For example on CIFAR10, 5 tasks in TCL setting is two-class problem per task; I would expect better results. \\n\\n3. The authors claim that OWM is effective if tasks are similar, but not when dissimilar. And the proposed PCP solves this problem. However, all experiments are on similar tasks, and no cross domain tasks are considered, e.g. going from MNIST (task1) to EMNIST-26 (task2) etc. This would empirically support the claim. Also, the authors expect the difference between PCP and OWM to be even larger then. \\n\\n4. Some more analysis of the success of PCA in representing the distribution would be appreciated, e.g. the percentage of total energy which is captured (sum of selected eigenvalues divided by sum of all eigenvalues). Such an analysis of P_l^k as a function of the tasks (and for several layers) would be interesting to see, for example for EMNIST-47(10 tasks). \\n\\n5. Novelty with respect to OWM is rather small.\\n\\n6. The authors should mention that the method is pretrained on ImageNet in section 4.3. Given these datasets, I think it makes more sense to train from scratch and I would like to see those results.\", \"minor_remarks\": [\"I wonder if you use OWM or PCP you discard the possibility of positive backward transfer. 
Maybe the authors could comment on that.\", \"The authors write that \\u2018TCL setting the classification results are usually better than those of the CCL\\u2019 is that not per definition true ? Anything correctly classified under CCL is correctly classified under TCL but not the other way around.\"]}"
]
} |
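For readers following the PCP record above, a minimal NumPy sketch of the core projection idea described in the abstract: collect the inputs a layer received on previous tasks, compute their principal subspace, and project the new task's gradient onto the orthogonal complement so that the weight update leaves old outputs (approximately) unchanged. The 90% energy threshold, shapes, and names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def principal_basis(X, energy=0.90):
    """Orthonormal basis C (d x k) for the principal subspace of the
    row vectors in X -- the inputs a layer saw on previous tasks."""
    U, s, _ = np.linalg.svd(X.T @ X)
    k = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1
    return U[:, :k]

def project_gradient(grad_W, C):
    """Keep only the gradient components orthogonal to span(C), so the
    weight update (approximately) preserves outputs for old inputs."""
    P = np.eye(C.shape[0]) - C @ C.T   # projector onto the complement
    return grad_W @ P                  # grad_W: (d_out, d_in)

rng = np.random.default_rng(0)
X_old = rng.normal(size=(500, 32))     # layer inputs from previous tasks
C = principal_basis(X_old)
grad = rng.normal(size=(16, 32))       # raw gradient from the new task
raw = np.abs((0.1 * grad) @ X_old.T).max()
proj = np.abs((0.1 * project_gradient(grad, C)) @ X_old.T).max()
print(f"output drift on old inputs: {raw:.3f} -> {proj:.3f}")
```

Incidentally, the captured-energy ratio `np.cumsum(s) / s.sum()` computed here is exactly the "percentage of total energy" diagnostic Reviewer #1 asks for in point 4.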
Skey4eBYPS | Convolutional Conditional Neural Processes | [
"Jonathan Gordon",
"Wessel P. Bruinsma",
"Andrew Y. K. Foong",
"James Requeima",
"Yann Dubois",
"Richard E. Turner"
] | We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data. Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images. The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces. To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set. We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs. We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks. | [
"Neural Processes",
"Deep Sets",
"Translation Equivariance"
] | Accept (Talk) | https://openreview.net/pdf?id=Skey4eBYPS | https://openreview.net/forum?id=Skey4eBYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DPbGSqtfG2",
"BJeBl3sioB",
"BkxWI4ejoS",
"r1eRXZGDjS",
"HJx52ezvoS",
"BJgS5eMDiB",
"BkxdHJfvoB",
"SygYz1Gwir",
"S1eZ24ug9r",
"SkeY6z_6tH",
"r1erPYdXFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743853,
1573792748849,
1573745737483,
1573490981770,
1573490866385,
1573490829360,
1573490495677,
1573490449224,
1572009128854,
1571812032790,
1571158364750
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2232/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2232/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2232/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2232/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2232/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2232/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2232/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2232/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2232/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2232/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper presents Convolutional Conditional Neural Process (ConvCNP), a new member of the neural process family that models translation equivariance. Current models must learn translation equivariance from the data, and the authors show that ConvCNP can learn this as part of the model, which is much more generalisable and efficient. They evaluate the ConvCNP on several benchmarks, including an astronomical time-series modelling experiment, a sim2real experiment, and several image completion experiments and show excellent results. The authors wrote extensive responses the the reviewers, uploading a revised version of the paper, and there was some further discussion. This is a strong paper worthy of inclusion in ICLR and could have a large impact on many fields in ML/AI.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"--\", \"comment\": \"I have read the rebuttal from the authors and am satisfied with their answers! I will maintain my initial assessment.\"}",
"{\"title\": \"Response to reviews and revised manuscript\", \"comment\": \"We thank the reviewers for their detailed reviews, and many helpful comments. We have now uploaded a revised version of the manuscript, reflecting the suggestions. The main revisions are summarized below:\\n\\t1. We have put significant effort into improving the clarity and readability of the paper, which we realize covers a large amount of material. To improve exposition we have\\n\\t\\ta. rewritten large parts of section 4, to include more details on the models; \\n\\t\\tb. included pseudo-code for on-the-grid ConvCNP as well;\\n\\t\\tc. redesigned Fig 1b to improve readability (now Fig 1a); and\\n\\t\\td. put significant effort into rewriting Appendix A, which is the most technical part of the paper. We believe the proofs are now far clearer and easier to follow.\\n\\t2. We have put effort into further explaining the results. In particular, we have focused on the image setting and performance on the ZSMM task, which leads to a discussion on the relationship between the size of the receptive field and generalization performance. This discussion further provides some insights into designing architectures for the model. This modification includes an expanded discussion in Section 5.4 and new results from an empirical investigation in Appendix D.6.\\n\\t3. We have improved the discussion regarding consistency in Section 6, making clear the distinction between consistency and _conditional_ consistency, and including some missing citations.\\nWe believe these changes have improved the quality of the paper, and will lead to greater impact. We thank the reviewers for their useful feedback.\"}",
"{\"title\": \"Response to review\", \"comment\": \"We would like to thank the reviewer for a kind and helpful review and useful comments which we believe will improve the paper. We are pleased that you have recognized the role of Theorem 1 in motivating the work, and the variety of the experiments. We address specific comments raised in the review below. Towards the end of the discussion period, we will upload a revised version of the manuscript that will reflect your (and the other reviewers\\u2019) comments. As we work on the revised manuscript, please see below our comments on your main concerns.\\n\\nR3.1 Major Comment\\n\\n> My main criticism of the work is that it's very dense, requiring a few passes to really grasp the theoretical contribution and the concrete architecture used in the ConvCNP\\n\\nWe agree that there is a large amount of material to cover in the paper. We are working on rewriting Section 4 on the ConvCNP architecture to make it easier to understand our method. We will also enlarge Fig. 1 b) to make it more readable. The section on multiplicity is important to include in the main body since, without it, we cannot accurately state Theorem 1. However, we have replaced the discussion on multiplicity in the main body with a shorter intuitive description, leaving the mathematical details to an appendix. Additionally, we have put considerable effort into improving the clarity and readability of Appendix A, the most technical part of the paper.\\n\\nR3.2 Miscellaneous Comments\\n\\n> It would be good to have a brief discussion of why the ConvCNPPXL performs very badly on the ZSMM task, while being the best performing method in all of the other tasks. I couldn't find such a discussion.\\n\\nThank you for your question. Good performance on the ZSMM task requires translation equivariance. In practice, we find that when the receptive field is very large, the model exhibits undesirable behaviours at the boundaries of the image. In particular, we believe that this is an artifact of the 0-padding at the boundaries of the images in the ZSMM experiments. We will add a plot in the appendices showing the test log likelihood for ZSMM against the size of the receptive field for a $\\\\rho$ which uses \\u201czeros\\u201d and \\u201ccircular\\u201d padding. With \\u201dzeros\\u201d padding, the test log likelihood decreases relatively smoothly with an increasing receptive field. For \\u201ccircular\\u201d padding, there seems to be no significant correlation between these two. We will also add a discussion to this end to the experimental section. \\n\\n> Did the authors try emitting a 36-dimensional joint covariance matrix over the six-dimensional output in the plasticc experiment?\\n\\nThis is an interesting suggestion, and is a very natural extension of our work for the multi-output regression setting. 
However, in this work we only emitted independent Gaussian predictive distributions because this was the simplest setting, and our main concern was to judge if the representational power of deep learning combined with translation equivariance could outperform standard GP regression in this setting.\\n\\n> In the synthetic experiments, for the EQ and weak periodic kernels it would be nice to see the `ground truth' log-likelihood given by the actual GP, just to have some idea of what the upper bound of LL could be.\\n\\nWe agree that this is an interesting baseline to provide, and will include the ground truth GP log-likelihoods in the revised version of the paper.\\n\\n> In appendix C.2 Figure 6, what is the difference between the `true function' and the `Ground Truth GP'? I thought the true function was a gp.\\n\\nThe true function is a single sample from the GP prior. This sample is then evaluated at several points to obtain a training set. The ground truth GP refers to the posterior obtained by training a GP that has the same kernel as that used to generate the true function. We will change the wording in the paper to make this more clear.\"}",
"{\"title\": \"Response to review (part 2 of 2)\", \"comment\": \"R2.3: Minor Comments\\n\\n> Are there any guidelines on choice of filter size of CNN in the image case? E.g. have you chosen the filter size of ConvCNP such that the receptive field is smaller than the image, whereas it\\u2019s bigger for ConvCNPXL? It\\u2019s not clear why having a bigger receptive field allows to capture non-stationarity, and it would be helpful to expand on that, perhaps in the appendix.\\n\\nThe filter size is an important design choice that indeed warrants discussion in the paper. We will add an appendix with new experiments and discussion about the effect of the receptive field on translation equivariance. In the image experiments, the ConvCNP and ConvCNPXL were chosen such that the former has a smaller receptive field than the input, while the latter has a larger one. \\n\\nEmpirically, we found that increasing the receptive field decreases the performance of the model on tasks that are reliant on translation equivariance. We believe this has to do with the behaviour of the model at the boundaries of the images, and in particular, we believe this is an artifact of the 0-padding at the boundaries of the images in the ZSMM experiments. We showcase this issue by adding a plot in the appendix showing the test log likelihood against the size of the receptive field for a $\\\\rho$ which uses \\u201czeros\\u201d and \\u201ccircular\\u201d padding. With \\u201czeros\\u201d padding, the log likelihood decreases relatively smoothly with an increasing receptive field. For \\u201ccircular\\u201d padding, there seems to be no significant correlation between these two. \\n\\n> Also it\\u2019d help for the sake of clarity to explain why AttnCNP uses significantly more memory than ConvCNP, i.e. because memory for self-attention is O(N^2) where N=HW is the number of inputs, whereas for convolutions it\\u2019s O(HW).\\n\\nThank you for pointing this out. We will add the theoretical memory complexity of self attention and convolutions in a revised version of the writing.\\n\\n> I think it\\u2019d also help to state explicitly in the body that AttnCNP is ANP without the latent path when it is introduced.\\n> typos: first paragraph of Section 2: Z_M <- Z_m (twice), finitely <- infinitely, Appendix D.1: separabe <- separable\\n\\nThank you. These points will be addressed in a revised version of the writing.\"}",
"{\"title\": \"Response to review (part 1 of 2)\", \"comment\": \"We greatly thank the reviewer for taking the time to read the paper thoroughly and providing a kind and highly detailed assessment. Your major and minor comments are very helpful and will be used towards improving the quality of our work. Towards the end of the discussion period, we will upload a revised version of the manuscript that will reflect your (and the other reviewers\\u2019) comments. As we work on the revised manuscript, please see below our comments on your main concerns.\\n\\nR2.1: Review\\n\\n> A more competitive baseline for AttnCNP would have been to parameterise the logits of the attention weights as a periodic function with learnable length scale (e.g. stationary periodic kernel), since this is another way of building in periodicity into the model.\\n\\nWe agree that this is an interesting baseline that likely would have performed better than the standard AttnCNP on the periodic kernel and perhaps the sawtooth function. However, the goal of this comparison is to evaluate the same model on multiple kernels, rather than tailoring an individual model to each kernel. From this perspective, we may similarly have replaced the EQ kernel in the ConvCNP encoder with a periodic kernel. We opted to use the same model for every experiment, demonstrating the flexibility and capacity of these models to capture different data modalities.\\n\\nR2.2: Comments and Questions\\n\\n> One link that might be worth pointing out regarding functional representation of context is that ANP (or AttenCNP) can also be seen as giving a functional representations of the context; the ANP computes a target-specific representation of the context, which can be seen as a function of the target inputs.\\n\\nWe agree that, viewed thus, the ANP computes a target-specific representation of the context, which is indeed a function of the target inputs. However, key is that traditional DeepSets \\u2013 used to define the representation in their models \\u2013 introduce a finite-dimensional bottleneck, whereas ConvDeepSet produces a representation that is infinite dimensional, removing this bottleneck from the model. \\n\\n> I think it\\u2019s incorrect to say that latent-variable extensions enforce consistency. Even with the latent variable, if the encoder is seen as part of the model, then the NP isn\\u2019t consistent (pointed out in the last paragraph of section 2.1 in the ANP paper). So there still are issues regarding AR sampling. There does however seem to exist variants of NPs that satisfy consistency e.g. https://arxiv.org/abs/1906.08324\\n\\nThank you for pointing this out. The discussion on consistency in the initial submission is indeed inaccurate and will be corrected in the revision. What was meant is that the construction is guaranteed to be statistically consistent over the non-context points. In the revision, we will make clear that we are referring to this notion of consistency (conditional consistency). This requires a view of the model where the context points are treated separately, which we agree is uncomfortable. If, instead, the context points are also considered part of the model and handled by AR sampling, then again the resulting distribution does not obey statistical consistency. We agree that developing consistent variants would be an interesting direction for future work. 
We referenced the conditional BRUNO work in the conclusion, and thank you for pointing us towards the Functional Neural Process (FNP), which indeed is also relevant to the discussion. We will mention FNPs in the discussion and add a reference.\\n\\n> What is preventing the incorporation of a latent variable in the ConvCNP? Is this just something that can be easily done but you haven\\u2019t tried, or do you see any non-trivial issues that arise when doing so e.g. maintaining translation equivariance?\\n\\nWe see no major issues with incorporating a latent variable in the ConvCNP. In fact, we think that this constitutes a highly interesting extension, as there are several ways this could be achieved, and these pose several interesting challenges that need to be addressed. We aim to explore this direction in future work.\"}",
"{\"title\": \"Response to review (part 2 of 2)\", \"comment\": \"R1.3: Scope of the Experiments? Benefit of Translation Equivariance?\\n\\nNext, you mention concerns regarding our experiments, in particular their scope and the lack of specific examples highlighting the usefulness of our model. On this matter, we respectfully disagree, and would like to highlight the following as evidence. First, as noted by both Reviewers 2 and 3, our experimental section is \\n\\n\\t- R3: \\u201ccomprehensive and diverse, showing good performance on both toy examples and more real-world problems\\u201d, and \\n\\t- R2: \\u201cthe evaluation is extensive, and the results are significant\\u201d. \\n\\nFurther, the empirical evaluation clearly demonstrates the benefits arising directly from translation equivariance. In all of our experiments, the introduction of translation equivariance as an inductive bias results in significant gains, which manifests itself in several ways.\\n\\n\\t1. Performance: As pointed out by both Reviewers 2 and 3, on standard performance metrics (e.g., log-likelihood and RMSE), our models achieve significant improvements over powerful but non-translation-equivariant competitors. \\n\\t2. Model size: As pointed out by Reviewer 3, our models are (in most cases) far more parameter efficient than their non-translation-equivariant competitors.\\n\\t3. Generalization to out-of-distribution data: As pointed out by Reviewer 2, arguably the most convincing empirical demonstration of the usefulness of our model is its ability to generalize to out-of-distribution data. Examples:\\n\\n\\t\\ta. Consider Figures 2, 6, 7, and 8. Our model is able to produce high-quality predictive distributions even when encountering data that is out of the training distribution range. We emphasize that this is a direct consequence of translation equivariance, and is therefore something that the non-translation-equivariant baselines are incapable of, as is demonstrated in those same figures.\\n\\t\\tb. Consider Figure 4. Our model is able to generalize to images that are significantly different from the training distributions, e.g. containing multiple digits as opposed to a single, centered digit, or images of different shapes containing multiple faces as opposed to a single face. Again, we stress that this is a direct consequence of translation equivariance. Observe that in Figures 4.a and 12 it is apparent that non-translation-equivariant models are incapable of this kind of generalization.\\n\\nAs pointed out by both Reviewers 2 and 3, the inductive bias introduced by translation equivariance provides strong motivation for our developments, and the comprehensive \\tempirical results corroborate the motivation. We hope our comments address your concerns. We look forward to reading your response, and a continued discussion on these points.\\n\\n[1] M. Garnelo, D. Rosenbaum, C. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. W. Teh, D. Rezende, and S. M. A. Eslami. Conditional neural processes. 2018.\\n[2] M. Garnelo, J. Schwarz, D. Rosenbaum, F. Viola, D. J Rezende, S. M. A. Eslami, and Y. W. Teh. Neural processes. 2018.\\n[3] H. Kim, A. Mnih, J. Schwarz, M. Garnelo, S. M. A. Eslami, D. Rosenbaum, O. Vinyals, and Y. W. Teh. Attentive neural processes. 2019.\\n[4] A. Carr, and D. Wingate. Graph neural processes: towards Bayesian graph neural networks. 2019.\\n[5] M. Y. Seker, M. Imre, J. Piater, E. Ugur. Conditional neural movement primitives. 2019.\\n[6] J. Requeima, J. Gordon, J. Bronskill, S. Nowozin, and R. 
E. Turner. Fast and flexible multi-task classification using conditional neural adaptive processes. 2019.\\n[7] V. Fortuin, M. Huser, F. Locatello, H. Strathmann, and G. Ratsch. SOM-VAE: Interpretable discrete representation learning on time-series. 2018.\"}",
"{\"title\": \"Response to review (part 1 of 2)\", \"comment\": \"Summary of the reviewer\\u2019s main concerns: \\n\\t1. How widely applicable is the method developed in the paper?\\n\\t2. How important is equivariance as an inductive bias?\\n\\t3. Do the experiments demonstrate that the method is generally applicable? Is the inductive bias (translation equivariance) empirically beneficial?\\n\\nSummary of the authors\\u2019 response:\\n\\t1. The methods are widely applicable in real-world applications including time-series, spatial data, and images.\\n\\t2. Equivariance is hugely important providing large performance gains.\\n\\t3. The experiments show that the method is useful for time-series modelling, sim2real transfer, and image modelling. They also clearly demonstrate the benefits of translation equivariance.\\n\\n----\", \"detailed_rebuttal\": \"We thank you for your time and effort in reading and reviewing our paper. Towards the end of the discussion period, we will upload a revised version of the manuscript that will reflect your (and the other reviewers\\u2019) comments. As we work on the revised manuscript, please see below our comments on your main concerns. We look forward to your response, and to an ongoing discussion on these points.\\n\\nR1.1: How Widely Applicable is the Model?\\n\\nNeural process based models are particularly applicable to settings where a large collection of small-but-related datasets are available, and one wishes to construct powerful models that can efficiently provide inferences for unseen datasets. Examples of such settings are abundant: Image reconstruction, as in Section 5.4 of our paper (also featured in the experimental sections of [1\\u20133]) is one such example. Further examples are edge imputation on graphs [4], learning of robotic movement primitives [5], and few-shot classification [1,6]. Importantly, neural processes can also model data which is non-uniformly sampled, e.g. medical time-series data [7]. Such data is difficult to model with CNNs and RNNs, which means that applications with data like this have not fully benefited from the power of deep learning. In our work, we consider additional real-word applications (with non-uniformly sampled data) of neural processes such as modelling of astronomical objects (Section 5.2) and predator-prey models in a Sim-2-Real environment (Section 5.3). All of the above are examples of real-world applications of neural processes, highlighting the flexibility and broad applicability of this model class. \\n\\nR1.2: How Important is Equivariance as an Inductive Bias?\\n\\nIt is difficult to overstate the practical applicability of translation equivariance as an inductive bias. The general success of CNNs may (arguably) be attributed to this inductive bias in large part. As we discuss in the paper, many of the applications of interest for NP-based models may also greatly benefit from this inductive bias. For example, consider time-series-based applications, such as the synthetic data in Section 5.1, astronomical objects (Section 5.2), and predator\\u2013prey models (Section 5.3). These sections demonstrate that our work brings the benefits of convolutions to applications with non-uniformly sampled data, which is an open challenge in the ML literature. Similarly, as is well known from the standard CNN example, image modelling significantly benefits from this inductive bias (Section 5.4). We agree with you that this motivation can be better developed in the paper. 
We will work on adding this high-level motivation to the introduction in the revised version of the paper, and thank you for raising the issue.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes a method for model neural for neural processes considering translation-equivariant embeddings.\\n\\nThe paper seems to be quite specific topic. Maybe, the author could add more empirical results to it to show the impact on translation-equivariant examples. The theoretical claims seem to be valid. So the question is a bit open what are the applications. The empirical results are also narrow as there is not much other competitive work. The results seem to be increment extension to previous work. \\n\\nThe work looks solid to me, currently I am probably not able to appreciate and judge relevance to its full extend. I would judge, it is more of interest to view specific people working on this - maybe, the authors could for the final version make this more clear. \\n\\nThe questions that should be more addressed maybe is also the applications - why is this relevant and how does it improve your specific cases. Why do we want to develop this. State of the art is quite relative if authors come from a quit narrow area which not much papers on the topic and data sets. \\n\\nOne of the main points of the paper did not get clear how does translation-equivariant helps to solve or improve the empirical results. Could you add some examples where this improves results. \\n\\nI remain ambivalent. It seems to be solid work with not much convincing applications and somewhat incremental. Maybe the authors might address this in their introduction more. The motivation remains unclear to me and hence difficult to judge its potential and impact.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"-- Summary\\n\\nThis paper considers the problem of developing neural processes which\\nare translation-equivariant. The authors derive a necessary and sufficient\\nfunctional form that the neural process \\\\Phi function must exhibit\\nin order to be permutation invariant, continuous and translation\\nequivariant.\\n\\nUsing the derived functional form, the authors construct a\\ntranslation-equivariant neural process, the convolutional conditional\\nneural process.\", \"results_in_several_experimental_settings_are_given\": \"1d synthetic\\nexperiments, an astronomical time-series modelling experiment, a\\nsim2real experiment, and several image completion experiments. All\\nthe experiments show performance improvements over the AttnCNP, the\\nmain baseline tested against. In the astronomy setting the authors\\ntest against the winning kaggle entry, against which they get better\\nlog likelihood. The authors give several qualitative experiments,\\nincluding image completion tasks from a small number of pixels.\\n\\nProofs of all the theorems and full details of all the experiments\\nare given in the appendix, along with ablations of the model.\\n\\n\\n-- Review\\n\\nOverall I found this paper very impressive. It is clear how the theoretical\\nresults motivate the choice of architecture. The fact that Theorem 1\\ncompletely characterises the design of all translation-equivariant\\nneural processes is a remarkable result which precisely specifies the\\ndegrees of freedom available when constructing a convolutional NP.\\n\\nThe implementation gives state of the art results against\\nthe AttnCNP while using fewer parameters on a variety of tasks. The image\\ncompletion tasks are impressive.\\n\\nIt seems that the authors close an open question posed in (Zaheer 2017)\\nregarding how to form embeddings of sets of varying size by embedding\\nthe sets into an RKHS instead of a finite-dimensional space. This in itself\\nis an interesting idea, and I am interested to see how this embedding method\\nbe applied outside of the CNP framework.\\n\\nThe experimental results are comprehensive and diverse, showing good\\nperformance on both toy examples and more real-world problems. The ablations\\nand qualitative comparisons in the appendix are helpful in showing where\\nthe ConvCNP outperforms the AttnCNP.\\n\\nMy main criticism of the work is that it's very dense, requiring a few\\npasses to really grasp the theoretical contribution and the concrete\\narchitecture used in the ConvCNP. I would recommend enlarging figure 1\\n(b), which is illuminating but quite cluttered due to the small\\nsize. Perhaps the section on multiplicity could be moved to the\\nappendix to make space as it seems for all real-world datasets the\\nmultiplicity would be equal to 1. \\n\\n\\nMisc Comments\\n\\n- It would be good to have a brief discussion of why the ConvCNPPXL performs\\nvery badly on the ZSMM task, while being the best performing method in all\\nof the other tasks. 
I couldn't find such a discussion.\\n- Did the authors try emitting a 36-dimensional joint covariance matrix over the\\nsix-dimensional output in the plasticc experiment?\\n- In the synthetic experiments, for the EQ and weak periodic kernels it would\\nbe nice to see the `ground truth' log-likelihood given by the actual GP,\\njust to have some idea of what the upper bound of LL could be.\\n- In appendix C.2 Figure 6, what is the difference between the `true function' and the\\n`Ground Truth GP'? I thought the true function was a gp...\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The paper introduces ConvCNP, a new member of the neural process(NP) family that models translational equivariance in the data, which uses convolutions and stationary kernels to aggregate the context data into a functional representation.\", \"This problem is well-motivated as there are various domains where such an inductive bias is desirable, such as spatio-temporal data and images, and will help especially with predictions for out-of-distribution tasks. This inductive bias was never built into NPs, and it remained unanswered whether the NP can learn such a behaviour. This paper shows that the answer is negative and that one needs to make modifications to create such inductive bias.\", \"The architecture of the ConvCNP is motivated by theory that completely characterises the set of translation equivariant functions Phi that maps sets of (x,y) pairs to bounded continuous functions that map x to y (disclaimer: I haven\\u2019t read through the proof in the appendix, so will not make any claims on its correctness). Theorem 1 defines the set of such functions using rho, phi and psi, and the choices for each on on-the-grid data and off-the-grid data are listed in Section 4. There are ablation studies in Appendix D.4 that justify the choices.\", \"Overall the paper is very well-written and clear for the most part, with helpful pseudo-code and well-laid out quantitative + qualitative results, and a very detailed appendix that allows replicating the setup. The evaluation is extensive, and the results are significant.\", \"The results on 1D synthetic data show a noticeable improvement of the ConvCNP compared to the AttnCNP, with improved interpolation as well as accurate extrapolation for the weakly periodic function. I do think however that a more competitive baseline for AttnCNP would have been to parameterise the logits of the attention weights as a periodic function with learnable length scale (e.g. stationary periodic kernel), since this is another way of building in periodicity into the model. Arguably this is more explicit and restrictive than the translational equivariance built into ConvCNP, but would have made for a more interesting comparison.\", \"Having said that, I like how the evaluation was performed on a variety of stochastic processes - previous literature only used GP + EQ kernel, but here more challenging non-smooth functions such as GP + Matern kernels and sawtooth functions are explored - and it\\u2019s very convincing to see the outstanding performance of ConvCNPs here.\", \"It\\u2019s also nice to see results on regression tasks on real data (sections 5.2, 5.3), which was never explored in the NP literature as far as I know. 5.2 shows that ConvCNPs can be competitive against other methods that model stochastic processes, and 5.3 shows an instance of where ConvCNPs do a reasonable job whereas (Attn)CNP fails.\", \"The results on images is also extensive, covering 6 different datasets (including the 2 zero shot tasks), and show convincing qualitative and quantitative results. 
The zero shot tasks are nice examples that explicitly show the consequences of not being able to model translation equivariance in more realistic images composed of multiple objects/faces.\", \"I have several comments/questions regarding the disccusion & related work section:\", \"One link that might be worth pointing out regarding functional representation of context is that ANP (or AttenCNP) can also be seen as giving a functional representations of the context; the ANP computes a target-specific representation of the context, which can be seen as a function of the target inputs.\", \"I think it\\u2019s incorrect to say that latent-variable extensions enforce consistency. Even with the latent variable, if the encoder is seen as part of the model, then the NP isn\\u2019t consistent (pointed out in the last paragraph of section 2.1 in the ANP paper). So there still are issues regarding AR sampling. There does however seem to exist variants of NPs that satisfy consistency e.g. https://arxiv.org/abs/1906.08324\", \"What is preventing the incorporation of a latent variable in the ConvCNP? Is this just something that can be easily done but you haven\\u2019t tried, or do you see any non-trivial issues that arise when doing so e.g. maintaining translation equivariance?\"], \"other_minor_comments\": [\"Are there any guidelines on choice of filter size of CNN in the image case? E.g. have you chosen the filter size of ConvCNP such that the receptive field is smaller than the image, whereas it\\u2019s bigger for ConvCNPXL? It\\u2019s not clear why having a bigger receptive field allows to capture non-stationarity, and it would be helpful to expand on that, perhaps in the appendix.\", \"Also it\\u2019d help for the sake of clarity to explain why AttnCNP uses significantly more memory than ConvCNP, i.e. because memory for self-attention is O(N^2) where N=HW is the number of inputs, whereas for convolutions it\\u2019s O(HW).\", \"I think it\\u2019d also help to state explicitly in the body that AttnCNP is ANP without the latent path when it is introduced.\", \"typos: first paragraph of Section 2: Z_M <- Z_m (twice), finitely <- infinitely, Appendix D.1: separabe <- separable\", \"Overall, I think this is a very strong submission and I vote for its acceptance.\"]}"
]
} |
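A minimal NumPy sketch of the ConvDeepSet functional embedding at the heart of the ConvCNP record above, for 1D off-the-grid data: context points are smeared onto an internal uniform grid with an EQ kernel, producing a density channel and a density-normalized data channel that a CNN (the rho of Theorem 1) would then process. The grid size, length scale, and toy context set are illustrative assumptions, not the authors' code.

```python
import numpy as np

def eq_kernel(diffs, lengthscale=0.1):
    """EQ (RBF) kernel psi that smears context points onto the grid."""
    return np.exp(-0.5 * (diffs / lengthscale) ** 2)

def conv_deepset_encoding(x_ctx, y_ctx, grid, lengthscale=0.1):
    """E(Z)(t) = sum_i phi(y_i) psi(t - x_i) with phi(y) = (1, y):
    channel 0 is a density of context inputs, channel 1 is the data
    channel, normalized by the density where it is non-zero."""
    K = eq_kernel(grid[:, None] - x_ctx[None, :], lengthscale)  # (T, N)
    density = K.sum(axis=1)
    signal = (K @ y_ctx) / np.maximum(density, 1e-8)
    return np.stack([density, signal], axis=-1)                 # (T, 2)

x_ctx = np.array([-0.9, -0.1, 0.4, 0.8])   # off-the-grid context inputs
y_ctx = np.sin(3 * x_ctx)
grid = np.linspace(-1.0, 1.0, 128)         # internal uniform grid for the CNN
h = conv_deepset_encoding(x_ctx, y_ctx, grid)
print(h.shape)  # (128, 2)
```

Because the representation is a function of x sampled on a grid, translating the context set simply shifts the encoding along the grid: this is the translation equivariance that the reviews credit for the zero-shot generalization results, and the property the receptive-field/padding discussion above probes at image boundaries.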
rJxRmlStDB | Self-Induced Curriculum Learning in Neural Machine Translation | [
"Dana Ruiter",
"Cristina España-Bonet",
"Josef van Genabith"
] | Self-supervised neural machine translation (SS-NMT) learns how to extract/select suitable training data from comparable (rather than parallel) corpora and how to translate, in a way that the two tasks support each other in a virtuous circle. SS-NMT has been shown to be competitive with state-of-the-art unsupervised NMT. In this study we provide an in-depth analysis of the sampling choices the SS-NMT model takes during training. We show that, without it having been told to do so, the model selects samples of increasing (i) complexity and (ii) task-relevance in combination with (iii) a denoising curriculum. We observe that the dynamics of the mutual supervision of both system-internal representation types is vital for the extraction and hence translation performance. We show that in terms of the human Gunning-Fog Readability index (GF), SS-NMT starts by extracting and learning from Wikipedia data suitable for high school (GF=10--11) and quickly moves towards content suitable for first-year undergraduate students (GF=13). | [
"curriculum learning",
"neural machine translation",
"self-supervised learning"
] | Reject | https://openreview.net/pdf?id=rJxRmlStDB | https://openreview.net/forum?id=rJxRmlStDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vCvdqC0Uq3",
"H1gMbCH3sS",
"BJg9vTBYiS",
"HylrgUNXiS",
"rJeew15WjH",
"Syez1CFWjS",
"Bygx092pFB",
"SkxqBn8OYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798743823,
1573834233596,
1573637474259,
1573238252785,
1573130071843,
1573129690017,
1571830472148,
1571478594497
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2231/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2231/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2231/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2231/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2231/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2231/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2231/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a method for curriculum learning based on extracting parallel sentences from comparable corpora (wikipedia), and continuously retraining the model based on these examples. Two reviewers pointed out that the initial version of the paper lacked references and baselines from methods of mining parallel sentences from comparable corpora such as Wikipedia. The authors have responded at length and included some of the requested baseline results. This changed one reviewer's score but has not tipped the balance strongly enough for considering this for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Different data sources\", \"comment\": \"WikiMatrix uses a different dump of Wikipedia and different extraction scripts than the one you used, for a completely fair comparison you should run the WikiMatrix extraction scripts with the LASER embeddings on your corpus.\\n\\nHowever, in my opinion the experiments that you provided, even with somewhat different corpora, show that you can obtain translation systems of comparable quality, and since you don't depend on any parallel resource your approach is potentially more widely applicable.\"}",
"{\"title\": \"Comparison\", \"comment\": \"Dear Reviewer,\\n\\nthank you very much for your feedback and valuable comments. Let us address your concern regarding the comparison with other pseudo-parallel data extraction methods from Wikipedia:\\n\\nThe SS-NMT method is a general data selection approach that may also select non-parallel sentences (i.e. similar pairs) if they are beneficial for its learning. For example, in our analysis of the selected sentences we saw that non-parallel sentences are selected in higher quantities at the beginning of training and allow the model to initialize itself on the task. However, we see how SS-NMT can also be seen as a (pseudo-) parallel data extraction method itself. To reflect this, we would like to add two parts to the revised version of our paper:\\n\\ni) a section describing previous work in the field of parallel data mining on Wikipedia will be added to \\\"Related Work\\\"\\n\\nii) a comparison of SS-NMT with a supervised model trained on the en-{fr, de, es} Wikimatrix [1] corpora with optimal threshold. We have recently performed these experiments and would like to report the results here:\\n\\nL1-L2 Wikimatrix SS-NMT\\nen-fr 33.50 29.48\\nfr-en 30.12 27.69\\nen-de 13.22 14.40\\nde-en 12.17 18.06\\nen-es 29.60 28.57\\nes-en 26.63 26.42\\n\\nThe method used in [1] is quite similar to the base-idea of SS-NMT, with the main differences being that [1] only performs data mining and does this in a highly multilingual setting. This makes the comparison for us especially interesting.\\n\\nNevertheless, we want to stress that the main focus of this study is the curriculum that arises in SS-NMT and how it is beneficial for learning. To combine this with the data mining perspective, it would be interesting to see how a supervised system would perform when training on the Wikimatrix data sampled according to a curriculum similar to that of SS-NMT, i.e. increasing complexity, decreasing noise etc. Unfortunately, because of the time constraints, we cannot report the results of this experiment during the rebuttal period, but they could be added to a potential camera ready version.\\n\\n[1] Schwenk et al. 2019 \\\"WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia\\\" https://arxiv.org/pdf/1907.05791.pdf\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper describes a method for training self-supervised neural machine translation systems from a document-aligned comparable corpus (Wikipedia in en, fr, de and es).\", \"the_proposed_training_method_consists_of_two_concurrent_processes\": \"a pseudo-parallel sentence pair extraction process, where average word embeddings and encoder states are used to construct sentence embeddings which are compared to extract candidate sentence pairs, and a conventional model optimization process that uses online batches of the extracted sentence pairs as training data.\\n\\nExperimental results on automatically evaluated translation quality on standard test sets are reported, in addition to parallel sentence extraction quality evaluated on the Europarl corpus and additional analyses on the self-induced curriculum resulting from the training process.\\n\\nThe proposed methodology is solid. The main issue with the paper is the lack of proper baseline comparison. The authors compare only with supervised and unsupervised systems trained on different corpora, and not with other approaches based on pseudo-parallel data extraction from Wikipedia.\", \"edit\": \"I have increased my score based on the author's response.\"}",
"{\"title\": \"Findings, non-unique training, training time and related work\", \"comment\": \"Dear Reviewer,\\n\\nthank you very much for your valuable comments. Let us address your concerns:\\n\\n1.) To the best of our knowledge this study is the first deep analysis of self-supervised MT. It shows that a form of active selection naturally emerges from the setup, without having been explicitly programmed. This is a contribution to knowledge and increases our understanding of self-supervised learning, in the special setting presented here. We expect the detailed analyses and results to point to avenues for tackling data settings and situations where our current self-supervised approaches struggle, and therefore to help improving and extending the current approach. For example, the fact that homographs are important during the initialization phase of the model, SS-NMT systems may struggle with distant languages and differences in the writing systems. Future work could then build upon this.\\n\\n2.) If we use all the data discovered throughout the SS-NMT training without duplicate removal to train a supervised NMT system, the supervised model would see exactly the same data in the same order as the SS-NMT system, leading to the same performance with slight statistically irrelevant variations. The point of extracting the unique data from the beginning, middle, end and all of the SS-NMTs training and then training a supervised system on this data, was to show that (i) the data quality increases from beginning to end and (ii) the order of the data matters (i.e. using SS-NMT as a simple data extractor for an external supervised system by training on all the data found by SS-NMT but not in exactly the same order and frequency is not optimal. Thus, the curriculum found by SS-NMT is beneficial for training).\\n\\n3.) Data extraction and training happens simultaneously in the model. Training (with included data selection) of a large SS-NMT model on Wikipedia takes about 2~4 weeks on a single GPU GTX TITAN. Let us also report the number of epochs/steps/hours both SS-NMT and supervised baseline NMT (NMT_all) models were trained for:\\n\\nlang SS-NMT NMT_all\", \"en_fr\": \"7/420k/746h 7/750k/207h\", \"en_de\": \"10/210k/464h 10/440k/123h\", \"en_es\": \"6/450k/736h 6/650k/188h\\n\\nIn both cases, the model is a Transformer base, i.e. a 6-layer encoder-decoder with \\n8-head self-attention, 512-dim word embeddings and a 2048-dim hidden \\nfeed-forward.\\n\\n\\n4.) Thank you for letting us know about this paper, which we would like to add to our \\\"Related Work\\\" section as it is indeed relevant to the topic.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"title\": \"Evaluation, BLEU and Related Work\", \"comment\": \"Dear Reviewer,\\n\\nthank you very much for your valuable comments. Let us address first your main concern regarding the evaluation:\\n\\nWe are aware of the wide range of prior research that has been done in the field of parallel data mining (e.g. on Wikipedia). However, the SS-NMT method we analyze in this paper does not intend to be a parallel data mining approach. Instead, it is a data selection method that depends on the models state, and thus may also select non-parallel sentences (i.e. similar pairs) if they are useful for the system. In this study, we analyze which kind of sentences are selected at different stages during training and we see, for example, that at the beginning of training non-parallel sentences can still be useful for learning.\\nNevertheless, we see the point that SS-NMT can be viewed as a data mining approach in itself. In order to capture this, we would add a section to the \\\"Related Work\\\" section to address this.\\n\\nAs for a direct comparison of a data selection method on Wikipedia, we have recently performed experiments on the Wikimatrix corpus in en-{fr, de, es} [1], where a similar extraction method was used. We would be happy to add this experiment to this paper to compare the Wikimatrix approach to the SS-NMT approach. Let us report the BLEU scores we get from this experiment, where we trained a supervised NMT system on the corresponding Wikimatrix corpora:\\n\\nL1-L2 Wikimatrix SS-NMT\\nen-fr 33.50 29.48\\nfr-en 30.12 27.69\\nen-de 13.22 14.40\\nde-en 12.17 18.06\\nen-es 29.60 28.57\\nes-en 26.63 26.42\\n\\nHere, Wikimatrx outperformed SS-NMT for en-fr, while SS-NMT is stronger in en-de, while the difference between the two methods is rather small for en-es.\\n\\n[1] Schwenk et al. 2019 \\\"WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia\\\" https://arxiv.org/pdf/1907.05791.pdf\", \"now_to_address_your_minor_concerns\": \"1.) It is true that we use a rather large BPE size. However, it is the BPE value that was reported in the original SS-NMT paper, which is why we kept it for comparison. Nevertheless, you are right that BPE size is an interesting value for SS-NMT. High-resourced supervised NMT tends to performs better with larger BPE sizes, but SS-NMT also depends heavily on homographs during the beginning of training. Having a smaller BPE size can lead to more tokens being shared between two languages (taken that they are not distant languages, as is the case here for en-{fr, de, es}). If this decreased BPE size would then lead to a better initialization of SS-NMT and thus improved translation performance, would be something to investigate in future research.\\n\\n2.) Thank you for bringing this paper to our attention, which we would like to add to the \\\"Related Work\\\" section. The main difference between the idea in \\\"Learning to Teach\\\" (LTT) and the SS-NMT approach is that LTT uses two separate models, a \\\"teacher\\\" and a \\\"learner\\\", which in a reinforcement setting mutually boost each other. However, in SS-NMT the \\\"teacher\\\" and the \\\"learner\\\" are the same model, and the data selection depends on the model state itself.\\n\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies how to extract/select suitable training data from comparable \\u2014rather than parallel\\u2014 corpora. The idea sounds reasonable.\", \"my_major_concern_is_about_the_evaluation\": \"it didn't compare with any existing work. Actually there quite a few papers on mining parallel sentences from comparable corpora such as Wikipedia, as shown below. Seems the authors are not aware of those works and didn't review and compare with them. Without such comparisons, it is difficult to judge the effectiveness of the proposed method and the quality of this work.\\n[1] Finding similar sentences across multiple languages in Wikipedia, Proceedings of the Workshop on NEW TEXT Wikis and blogs and other dynamic text sources. 2006.\\n[2] Method for building sentence-aligned corpus from wikipedia, 2008 AAAI Workshop on Wikipedia and Artificial Intelligence (WikiAI08). 2008.\\n[3] Extracting parallel sentences from comparable corpora using document-level alignment, Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2010.\\n[4] \\\"Improving machine translation performance by exploiting non-parallel corpora.\\\" Computational Linguistics2006.\\n[5] https://www.aclweb.org/anthology/W04-3208.pdf\\n[6] https://openreview.net/pdf?id=ryza73R9tQ\", \"minor_issues\": \"1. \\\"For each language pair, a shared byte-pair encoding (BPE) (Sennrich et al., 2016) of 100k merge operations is applied.\\\" Most papers on neural machine translation don't use such a large BPE size, which is likely to lead to better performance. It would be better to use the same setting as previous work for fair comparisons.\\n\\n\\t2. \\\"In the case of SS-NMT, both tasks \\u2014data extraction and learning NMT\\u2014 enable and enhance each other, such that this mutual supervision leads to a self-induced curriculum, which is the subject to our analysis.\\\" Similar idea, mutual boosting between data selection and model training, has been explored in the following paper, although not for machine translation. What's the difference between these two papers?\\nLearning to Teach, ICLR 2018.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"** Paper summary **\\nSelf-supervised machine translation (SS-NMT) is a problem where extracting data and training an NMT model are simultaneously conducted. (Ruiter et al., 2019) proposed several rules to select data and train models. This paper analyzes the following aspects of self-supervised machine translation (SS-NMT):\\n1.\\tData extraction quality: precision and recall increase w.r.t. training iterations.\\n2.\\tCloseness to translation tasks: For the extracted sentences, as training goes on, the complexity decreases to the average level of potential bilingual corpus; the similarly of extracted sentences becomes closer. The authors also find that a joint process of extracting data and training models outperforms training a model with the extracted data.\\n3.\\tComplexity and similarity: The extracted sentences become harder w.r.t training epochs (measured by Gunning Fog Index). The presence of homographs becomes weaker and weaker.\\n\\n** Details **\\n1.\\tThe analysis is solid, but the findings are in general not quite surprising to readers. Besides, how should we leverage the findings in the paper?\\n2.\\tFor the results in Table 3, what if we use all the data discovered by the initial, middle and end epochs (which might be duplicated) instead of ``unique\\u2019\\u2019 data?\\n3. Can you show more statistics about data extraction time, training time?\\n4. The relation with [ref1] should be discussed.\\n\\n[ref1] Machine Translation with Weakly Paired Documents, https://openreview.net/pdf?id=ryza73R9tQ\"}"
]
} |
rJlTXxSFPr | A Quality-Diversity Controllable GAN for Text Generation | [
"Xingyu Lou",
"Kaihe Xu",
"Zhongliang Li",
"Tian Xia",
"Shaojun Wang",
"Jing Xiao"
] | Text generation is a critical and difficult natural language processing task. Maximum likelihood estimate (MLE) based models have arguably suffered from exposure bias in the inference stage, and thus a variety of language generative adversarial networks (GANs) bypassing this problem have emerged. However, a recent study has demonstrated that MLE models can consistently outperform GAN models over the quality-diversity space under several metrics. In this paper, we propose a quality-diversity controllable language GAN. | [
"text generation",
"GAN",
"quality-diversity",
"generalized Jensen-Shannon divergence"
] | Reject | https://openreview.net/pdf?id=rJlTXxSFPr | https://openreview.net/forum?id=rJlTXxSFPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5qUMV1MbX5",
"SygLjcZ3oS",
"B1gG2j12sr",
"ryeTZj1niH",
"BygMPlX3Kr",
"H1e4ESkYKB",
"B1xIAMv8Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743785,
1573816990000,
1573809065916,
1573808901118,
1571725401941,
1571513644048,
1571349197792
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2229/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2229/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2229/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2229/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2229/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2229/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper provides a method (loss function) for training GAN model for generation of discrete text token generation. The aim of this loss method to control the trade off between quality vs diversity while generating the text data.\\n\\nThe paper is generally well written, but the experimental section is not overly good: Interpretation of the results is missing; error bars are missing.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your comments. We'll add TextGAN and Symmetric VAE in our references. As for the MaliGAN, as far as we know, its proof for Theorem 3.1 is not correct, that's one reason that it couldn't be officially published.\\n\\nRegarding to the work of symmetric VAE, as shown in Section 4.2 of the symmetric VAE paper, it is equivalent to GAN, where there are two sets of neural network parameters, one for generator, one for discriminator. But in N3DGAN proposed by Li et al., there is only one set of parameters for the generator and there is no neural network used for the discriminator, that's the major distinction with many GANs.\\n\\nThe quality-diversity controllable GAN in this work is a generalization of N3DGAN proposed Li et al., 2019 and has connections to the forward KL divergence or the reverse KL divergence in terms of empirical distribution and model's distribution when pi is approaching 0 or 1, where the reverse KL divergence is not well defined since it has a term of log (p_g/0). KL and reversed KL have different meanings in symmetric VAE.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks for your detailed review.\\n=== Theoretical analysis ===\\n1\\u3001D_G^* has a term of empirical distribution, whether that term becomes real distribution when N goes to infinity is an open question, it will be answered in our future work.\\n2\\u3001Although generalized JSD is 0 when \\\\pi = 0 and \\\\pi = 1, generalized JSD/\\\\pi and generalized JSD/(1-\\\\pi) tend to be forward KL divergence and reverse KL divergence respectively when pi tends to 0 and 1.\\nGeneralized JSD is not simpy the interpolation between forward and reverse KL divergences.\\nReverse KL divergence is not well defined and we can't optimize it directly, but we can approximately optimize the reverse KL divergence by letting pi tend to 1. Models trained via forward KL divergence have a tendency to overgeneralise and generate unplausible samples which means diversity and models trained via reverse KL divergence will try to avoid any behaviour that is unlikely under data distribution which means quality.\\n3\\u3001H is the positive definite Hessian. We will give more detailed proof in the final version.\\n=== Empirical results ===\\n1\\u3001For NLL_oracle, we take as input the sentences generated by the language model we trained to the oracle LSTM model. For NLL_test, we take as input the sentences generated by the oracle LSTM model to the language model we trained.\\n2\\u3001we give more detailed qualitative analysis of Table 1 and Table 2 in the final version.\\n3\\u3001As can be seen from figure 1 and figure 3, given the same list of temperature, the language models trained with different \\\\pi have notably distinct performances on quality and diversity metrics.We also add example outputs for both COCO and EMNLP 2017 News tasks in the final version and we can see that our proposed model performs better than SeqGAN and LeakGAN.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for your detailed review.\\n1\\u3001We introduce \\\\pi and demonstrate that quality and diversity can be controlled by using different \\\\pi. \\nIn addition, Li et al. (2019) don't use temperature sweep in the experiments, but only report performances under quality metric at temperature=1.\\n2\\u3001Thanks for pointing out a more concise way to prove proposition 2. Although the process is concise, we think that the conclusion is interesting, which shows that we can control the dependence on forward KL divergence and reverse KL divergence by controlling a single hyperparameter \\\\pi, thereby achieving control over quality-diversity trade-off.\\n3\\u3001Please refer to the above responses to the first and second questions.\\n4\\u3001We chose COCO dataset and EMNLP2017 WMT News dataset since they have become common benchmarks for text generation.The related papers such as \\\"Language GANs falling short\\\", \\\"Jointly measuring diversity and quality in text generation models\\\", \\\"Training language gans from scratch\\\" and \\\"Neural Text Generation: Past, Present and Beyond\\\" all utilize both or one of these two datasets. \\n5\\u3001We add the generated sentences for both COCO and EMNLP 2017 News tasks and corresponding analysis in the final version, and we can see that our proposed model performs better than SeqGAN and LeakGAN. \\nWe also give more detailed qualitative analysis of Table 1 and Table 2 in the final version.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides a method (loss function) for training GAN model for generation of discrete text token generation. The aim of this loss method to control the trade off between quality vs diversity while generating the text data.\\n\\nFor example,\\nif original sentence is \\\"The company \\u2019 s shares rose 1 . 5 percent to 1 . 81 percent , the highest since the end of the year .\\\" and the output is \\\"The company \\u2019 s shares rose 1 . 5 percent to 1 . 81 percent , the highest since the end of the year .\\\" then the quality of generation is high and diversity is low.\", \"pros\": \"1. The paper is very well written, with importance to the smaller details. It is a very good read for even people who are new to this problem. Especially, I appreciate the part where authors took efforts in writing why a few metrics are not used!\\n2. The motivation is good and also contributions are explicitly written. The details of the approach are provided clearly.\\n3. The experiments are provided in two different datasets and also the experiments support the two major claims in the paper.\", \"cons\": \"1. The primary concern with this submission is the novelty. The Proposition 1 of using forward-backward JSD based divergence has already been proposed in Li et al. (2019). Also, Li et al. (2019) proposes the entire contribution of this paper. The only difference is the introduction of \\\\pi, which controls the percentage of labelled data to be considered between the generated data and original data. Basically, Li et al. (2019) is a specific case of this paper where \\\\pi = 0.5. Thus, I would consider this paper as one additional experiment in Li et al. (2019) and not a whole paper as such.\\n2. Also, in the formulation in Eqn 2, the proposition 2 becomes a direct observation when \\\\pi becomes 0 or \\\\pi becomes 1. I would not call this as a proposition but a mere observation of Eqn 2.\\n3. Thus, taking away proposition 1 (already proposed in Li et al. (2019)) and proposition 2 (which is a mere observation) I do not find any novelty in this paper.\\n4. From an experiments perspective, Li et al. (2019) performed experiments in 4 datasets: Chinese Poems, MS COCO captions, Obama Speech, and EMNLP2017 WMT news. However, in this paper, results are shown in only two datasets - MS COCO captions and EMNLP2017 WMT news. Was it because that this paper was submitted in haste and/or the results in the other two datasets are not compelling enough to share?\\n5. The result analysis are poor - the authors have shown only the numbers in the tables, while the interpretation on these numbers and the discussion is left to reviewers discretion. Also, there are no generated examples that the authors are showing in either of the datasets. The authors should further discuss and analyze the results, show generated examples, and explain success and failure cases and the reasons behind them.\\n\\nOverall, I find the novelty and the experimental analysis of the paper, very weak.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a GAN-based text generation approach, where the authors propose to directly optimize a weighted version of JSD replacing p_data with its empirical distribution. I find the theoretical analysis of the approach confusing, thus would like to get clarification from the authors. The experiments largely rely on automatic evaluation, which is known to be unreliable for text generation. I'd like to see human evaluation of the generated sentences, and at least some example outputs should be shown (even if it's cherry-picked). Given that both the theory and the empirical results are not solid in the current version, I intend to reject the submission.\\n\\n=== Theoretical analysis ===\\n1. Main question: based on Equation 2, the optimal solution p_G^* is the empirical data distribution. It's unclear if p_G^* goes to the real data distribution when N goes to infinity.\\n2. Given the definition of the generalized JSD in Equation y, when \\\\pi = 0 and \\\\pi = 1, JSD_\\\\pi is both 0. How does it control the balance between forward and reverse KL? I'm also wondering what's the connection between Proposition 2 and the interpolation between forward and reverse KL (which is implied in the text).\\n3. In the proof of Proposition 2, what is H? Also in Equation 8, second line, how does the second term disappear? Would be good to have complete proof in the appendix.\\n\\n=== Empirical results ===\\n1. The description of NLL_test and NLL_oracle is very brief. Could you specify what are the language model and data used in each case?\\n2. In Table 2, all numbers are pretty close, are they significantly different? It would be really helpful to show some qualitative results as well.\\n3. Is there evidence that \\\\pi is controlling the tradeoff between quality and diversity? From the experiments it's mainly controlled by the temperature.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed to use KL and reversed KL as its new objective function for text generation GAN training.\\nHowever, this paper missed a lot important references. Basically, the authors only compare results with seqGAN and leakGAN. MaliGAN (https://arxiv.org/pdf/1702.07983.pdf), TextGAN (https://arxiv.org/pdf/1706.03850.pdf), etc. \\nAlso, KL + reversed KL training method for GAN framework is first proposed in Symmetric VAE (https://arxiv.org/abs/1709.01846), and the Proposition 2 basically are the same as the Symmetric VAE paper.\\n\\nTherefore, I think this work is lack of novelty, and still need more time to work on.\"}"
]
} |
ByeaXeBFvH | Hydra: Preserving Ensemble Diversity for Model Distillation | [
"Linh Tran",
"Bastiaan S. Veeling",
"Kevin Roth",
"Jakub Świątkowski",
"Joshua V. Dillon",
"Jasper Snoek",
"Stephan Mandt",
"Tim Salimans",
"Sebastian Nowozin",
"Rodolphe Jenatton"
] | Ensembles of models have been empirically shown to improve predictive performance and to yield robust measures of uncertainty. However, they are expensive in computation and memory. Therefore, recent research has focused on distilling ensembles into a single compact model, reducing the computational and memory burden of the ensemble while trying to preserve its predictive behavior. Most existing distillation formulations summarize the ensemble by capturing its average predictions. As a result, the diversity of the ensemble predictions, stemming from each individual member, is lost. Thus the distilled model cannot provide a measure of uncertainty comparable to that of the original ensemble. To retain more faithfully the diversity of the ensemble, we propose a distillation method based on a single multi-headed neural network, which we refer to as Hydra. The shared body network learns a joint feature representation that enables each head to capture the predictive behavior of each ensemble member. We demonstrate that with a slight increase in parameter count, Hydra improves distillation performance on classification and regression settings while capturing the uncertainty behaviour of the original ensemble over both in-domain and out-of-distribution tasks. | [
"model distillation",
"ensemble models"
] | Reject | https://openreview.net/pdf?id=ByeaXeBFvH | https://openreview.net/forum?id=ByeaXeBFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"OBol8Q_JAR",
"Bkxv-Md3jr",
"Hyen1zO2ir",
"BklDgbd3oS",
"r1egFx_3ir",
"rJxxmgd3jB",
"ByxD8uBpFr",
"ryexo70hKr",
"HJgwwEGEFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743754,
1573843454624,
1573843427811,
1573843182562,
1573843064230,
1573842967883,
1571801167453,
1571771288136,
1571198047498
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2227/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2227/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2227/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2227/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2227/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2227/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2227/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2227/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This work introduces a simple and effective method for ensemble distillation. The method is a simple extension of earlier \\u201cprior networks\\u201d: it differs in which, instead of fitting a single network to mimic a distribution produced by the ensemble, this work suggests to use multi-head (one head per individual ensemble member) in order to better capture the ensemble diversity. This paper experimentally shows that multi-head architecture performs well on MNIST and CIFAR-10 (they added CIFAR-100 in the revised version) in terms of accuracy and uncertainty.\\n\\nWhile the method is effective and the experiments on CIFAR-100 (a harder task) improved the paper, the reviewers (myself included) pointed out in the discussion phase that the limited novelty remains a major weakness. The proposed method seems like a trivial extension of the prior work, and does not provide much additional insight. To remedy this shortcoming, I suggest the authors provide extensive experimental supports including various datasets and ablation studies. \\n\\nAnother concern mentioned in the discussion is the fact that these small improvements are in spite of the fact that the proposed method ends up using many more parameters than the baselines. Including and comparing different model sizes in a full fledged experimental evaluation would better convey the trade-offs of the proposed approach.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to AnonReviewer3 (part II)\", \"comment\": \"[R3.7] ONE [1] might be a stronger baseline [..]. Moreover, since ONE has multiple heads, uncertainty estimation is also available. [...]\\n[Response] [1] only uses 3 heads, i.e. they distill an ensemble of three members with the proposed ONE. We used an ensemble of 50 members, therefore we cannot compare directly to the results reported in [1]. Due to time constraints of the rebuttal, we will add the comparison for the camera-ready version.\\n\\n[R3.8] In OOD detection tasks, Hydra underperforms Prior Networks on 5 of 8 datasets (note that PN (2.60) is better than Hydra (3.11) in the case of MNIST (test)). To overcome this gap, the proposed method requires more parameters.\\n[Response] Thank you for this suggestion. We will explore different kinds of architecture and ways of weight sharing in future work.\"}",
"{\"title\": \"Response to AnonReviewer3 (part I)\", \"comment\": \"[R3.1] This paper proposes a simple yet effective distillation scheme from an ensemble of independent models to a multi-head architecture for preserving the diversity of the ensemble. [...]\\n[Response] We would like to thank the reviewers for the detailed feedback and hope that the responses helps clarify major concerns.\\n\\n[R3.2] The multi-head architectures have been widely used in various settings, especially multi-task learning. As the authors mentioned, it also used for online distillation [1]. Although its goal is different from this paper, just applying such multi-head architectures seems to be incremental.\\n[Response] We argue that although multi-headed architectures have been explored, for the case of offline distillation they have not been. Especially in our work, we focus on multi-headed architecture for uncertainty estimation. This is a direction which has not been proposed to the best of our knowledge. Further, our proposed objective is simple, the average KL divergence between each head and corresponding teacher model, and versatile as it can be applied to both classification and regression. \\n\\n[R3.3] To evaluate OOD detection quality, ID/OOD datasets should be stated and various metrics (e.g., AUROC) [...]\\nIn our work, we focus on the quality of uncertainty preservation and thus report strictly proper scoring rules (Gneiting & Raftery, 2007) following metrics proposed by Ovadia et al., 2019 (Brier score) and the follow-up work of Prior Networks of Malinin et al., 2019 (model uncertainty). We believe that both metrics are appropriate ID/OOD evaluation as this has been used in the works aforementioned. However, we appreciate this suggestion and will add this to the paper for the camera-ready version.\\n\\n[R3.4] ]This paper provides experiments on only small-sized 10-class datasets, MNIST and CIFAR10. To verify the effectiveness of the proposed distillation method, other large-sized datasets should be tested, e.g., CIFAR-100, ImageNet.\\n[Response] We have run extended experiments on CIFAR-100. We used an ensemble of 10 Resnet20 members which were trained separately for distillation. For both Prior Networks and Knowledge distillation, we used Resnet20 with varying number of linear layers ([100, 100, 100], [300, 300, 100], [800, 800, 100]) after the residual blocks of Resnet20. For Hydra we used 10 heads to distill 10 ensemble members, each head had varying number of linear layers ([100, 100, 100], [300, 300, 100], [800, 800, 100]) and shared a Resnet20 as body. We optimized over a wide range of hyperparameters for both Hydra and baseline models (knowledge distillation, Prior Networks) and can show improvement for both accuracy and NLL (see table below). We also show that the test model uncertainty is closer to the ones of the ensemble. \\n\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Methods | NLL | ACC | MU |\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Individual NN | 1.4282 | 0.6713 | - |\\n| Ensemble (N=10) | 0.8764 | 0.7585 | 0.3709 |\\n| Knowledge distillation (Hinton et al. (2015)) | 0.9637 | 0.7205 | N/A |\\n| Prior Networks (Malinin et al. 
(2019)) | 3.0705 | 0.6638 | 0.0088 |\\n| Hydra | 0.9421 | 0.7314 | 0.1259 |\\n+-------------------------------------------------------------+-----------+----------+----------+\\n\\n[R3.5] There is no ablation study on the effect of the number of size of heads in Hydra. To achieve similar performance to the ensemble, how many heads are required?\\nWe assume as many heads as we have ensemble members. Since we have for both MNIST and CIFAR10 ensembles of 50 members, the number of heads for Hydra is by default set to 50. Therefore, an ablation study would not give much insight in this case.\\n\\n[R3.6] As reported in Table 5, in the case of CIFAR10, Hydra has 14x more parameters and 6x more FLOPs. [...] A comparison with an ensemble with M=14 models should be tested because this ensemble has the same number of parameters compared to Hydra with M=50 heads. [...]\\n[Response] We ran out of time during the rebuttal phase to conduct this additional study for CIFAR-10. However, as part of the new experiments we conducted for CIFAR-100, we obtained good performance while using just dense layers for the heads. Comparatively, an ensemble with M=10 obtains the following performance of 73.14 accuracy and a negative log-likelihood of 0.9421.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"[R2.1] This work introduces a new method for ensemble distillation. The problem of making better ensemble distillation methods seems relevant as ensembles are still one of the best ways to estimate uncertainty in practice (although see concerns below). [...] The paper is well-written, illustrations are good.\\n[Response] We would like to thank the reviewer for the valuable feedback. We hope that the additional experiments and the responses helps clarifying concerns.\\n\\n[R2.2] The method itself is very easy to implement, and does seem to outperform the baseline (prior networks). However, I am a bit concerned that the method itself seems like a trivial extension of the prior work, and does not really provide much addition insight.\\n[Response] The focus of our proposed work is knowledge distillation which preserves model uncertainty. We do this by using a multi-headed architecture, an objective which has not been suggested for model uncertainty preservation before. We do believe that this simple objective is versatile while able to preserve uncertainty. \\n\\n[R2.3] In addition, the results are reported on a set of small-scale benchmarks and seem incremental: it can be OK, but it would be really great to see a somewhat more realistic application.\\n[Response] We followed your advice and performed evaluation on CIFAR100. We used an ensemble of 10 Resnet20 members which were trained separately for distillation. For both Prior Networks and Knowledge distillation, we used Resnet20 with varying number of linear layers after the residual blocks of Resnet20. For Hydra we used 10 heads to distill 10 ensemble members, each head had varying number of linear layers and shared a Resnet20 as body. We optimized over a wide range of hyperparameters for both Hydra and baseline models (knowledge distillation, Prior Networks). All details can be found in the general comment above. In the table below, we report results with Hydra and all baseline/state-of-the-art models. We show improvement for both accuracy and NLL. We also show that the test model uncertainty is closer to the ones of the ensemble. \\n\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Methods | NLL | ACC | MU |\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Individual NN | 1.4282 | 0.6713 | - |\\n| Ensemble (N=10) | 0.8764 | 0.7585 | 0.3709 |\\n| Knowledge distillation (Hinton et al. (2015)) | 0.9637 | 0.7205 | N/A |\\n| Prior Networks (Malinin et al. (2019)) | 3.0705 | 0.6638 | 0.0088 |\\n| Hydra | 0.9421 | 0.7314 | 0.1259 |\\n+-------------------------------------------------------------+-----------+----------+----------+\\n\\n[R2.4] I honestly do not see the point on having an additional column in tables if all the values are N/A. \\n[Response] This is there to emphasize the limitations of Knowledge distillation and Prior networks. Knowledge distillation is not able to output any uncertainty estimates, and Prior Networks are usually inferior to Knowledge distillation and can only be applied to classification tasks. Our method improves on both classification and regression tasks and offers uncertainty estimation. We have added a brief explanation in the table caption. \\n\\n[R2.5] The names in Table 4 are mixed up.\\n[Response] Thank you for spotting this. 
We have fixed this in the current revision.\\n\\n[R2.6] Arguably, a lot of applications that would actually rely on uncertainty estimation might require online training of some sort. [...] I understand that this might not be the main focus of this work, but it seems like a major limitation of \\u201cdistillation\\u201d approaches in general, which should / could be addressed in some way?\\n[Response] We would argue the other way around. It is actually more difficult to adapt different training regimes to support co-training of teacher and student models. For example, Lan et al. propose a multi-branch online distillation. For their model, all teacher models need to be trained concurrently to the student model. This requires a high number of GPUs for all teacher models and student model to be trained efficiently. We are not confident that this can scale to an ensemble of 50 members - the number of ensemble members we used. Lan et al. only show their model on an ensemble of three members. Further, in cases where training is not straightforward, e.g. Bayesian Neural Networks, recurrent models, Progressive Growing GAN, designing a general purpose online training might be difficult. Nonetheless, we believe for the models proposed an iterative manner of training both teacher and student may be possible. However, we leave exploring this direction to future work.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"[R1.1] The paper proposes to distill the predictions of an ensemble with a multi-headed network, [...] The paper presents a straightforward idea and fairly unsurprising results [...]\\n[Response] We would like to thank the reviewer for the feedback. \\n[R1.2] It is unclear that the marginal improvements demonstrated justify the increased cost and how this approach would scale to larger ensembles. The paper would have been more interesting if the authors had managed to demonstrate significant improvements over competitors on not toy (MNIST / CIFAR) problems. \\n[Response] In order to demonstrate improvements on \\u201cnot toy\\u201d problems, we applied Hydra and baseline models to CIFAR-100. Although the complexity of the model remains the same (Resnet-20), the complexity of the task increases. In particular, with the higher number of classes, the improvement of the ensemble compared with a single model is more pronounced (around xx % CIFAR-10 and around 10% improvement for CIFAR-100), which makes the task of properly distilling the ensemble even more relevant and critical. For CIFAR-100, we also use additional linear layers as Hydra heads. We optimized over a wide range of hyperparameters for both Hydra and baseline models (knowledge distillation, Prior Networks) and can show improvement for both accuracy and NLL (see table below). We also show that the test model uncertainty is closer to the ones of the ensemble, Prior Network\\u2019s model uncertainty is much lower than the one of the original ensemble and knowledge distillation cannot be used for model uncertainty estimation. \\n\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Methods | NLL | ACC | MU |\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Individual NN | 1.4282 | 0.6713 | - |\\n| Ensemble (N=10) | 0.8764 | 0.7585 | 0.3709 |\\n| Knowledge distillation (Hinton et al. (2015)) | 0.9637 | 0.7205 | N/A |\\n| Prior Networks (Malinin et al. (2019)) | 3.0705 | 0.6638 | 0.0088 |\\n| Hydra | 0.9421 | 0.7314 | 0.1259 |\\n+-------------------------------------------------------------+-----------+----------+----------+\\n\\n[R1.3] Unfortunately, this is not the case and the fact that similar ideas (Lan et al.) have been proposed in the past (which the authors, to their credit, cite) leads me to recommend a rejection.\\n[Response] We would like to refer to our related work for a detailed comparison to the work of Lan et al. Although we share conceptual similarities with their work, our work differs from theirs in several ways. We focus on offline distillation which can be used for any kind of ensemble and any number of ensemble members, even ones whose training may be difficult to replicate. For instance, Bayesian ensembles might be difficult to be trained within the co-distillation process of Lan et al. A further benefit of Hydra is its simple design which is reflected in our single-component objective function. Hydra does not need to learn an additional gating mechanism to linearly combine the logits of the student models. Lan et al. only show their proposed model for a three-branch model and it is unclear how effective and efficient the co-distillation is with 50 branches (which is the setting that we used).\"}",
"{\"title\": \"General comments to the AC and the reviewers\", \"comment\": \"First of all, we would like to thank the reviewers for their feedback. In the following, we summarize and address major concerns raised by all reviewers:\\n1) Lack of novelty: All reviewers criticize the lack of novelty as multi-headed architectures have been explored in various areas of machine learning. Further, reviewer 1 and 3 note that a related work (Lan et al, 2018) has already proposed a multi-branch architecture for distillation of ensembles.\\n[Response] We would like to emphasize that our focus was to introduce a general and simple method for distillation which preserves model uncertainty. Using a multi-headed architecture has not been proposed so far. Our method can be applied to both classification and regression tasks (which baselines like Prior Networks cannot) while being able to preserve model uncertainty (which popular method like knowledge distillation cannot). In comparison to (Lan et al., 2018), our method is thought to be used for offline distillation whereas Lan et al. proposed a method for online distillation, also known as co-distillation. We argue that co-distillation training is difficult to adapt to different kinds of model (Bayesian neural networks, progressive growing models, recurrent models) and difficult to scale as all ensemble models are trained simultaneously with the student model during distillation. In fact, Lan et al. only show a distillation of an ensemble of three members. Our method is able to scale, and we have shown this with ensemble of 50 members. Further, our methods can be easily applied to any kind of model as we do not rely on training the teacher models at the same time as the student model.\\n2) Lack of large-scale experiments: All reviewers were concerned that current evaluation does not include large-scale datasets and architecture, thus, are concerned how scalable the method is and whether improvements are also made in such large-scale settings. \\n[Response] As advised by all reviewers, we conducted further experiments with CIFAR-100. Although the model complexity stays the same (we used a Resnet20), the task complexity increases. Due to the time constraint, we only experiment with MLP heads in order to be able to do extensive hyperparameter for fair comparisons between Hydra and models used for comparison. As shown in the table below, Hydra improves Knowledge distillation and Prior Networks for both negative log-likelihood and accuracy. Further, we are able to have higher model uncertainty estimates than Prior Networks, and also are closer to what the ensemble outputs. We have not yet included the results to the paper, as we would like to run more experiments to include larger ensembles and the last residual blocks as Hydra heads. These results will then be added to the final camera-ready version.\\n\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Methods | NLL | ACC | MU |\\n+-------------------------------------------------------------+----------+----------+-----------+\\n| Individual NN | 1.4282 | 0.6713 | - |\\n| Ensemble (N=10) | 0.8764 | 0.7585 | 0.3709 |\\n| Knowledge distillation (Hinton et al. (2015)) | 0.9637 | 0.7205 | N/A |\\n| Prior Networks (Malinin et al. 
(2019)) | 3.0705 | 0.6638 | 0.0088 |\\n| Hydra | 0.9421 | 0.7314 | 0.1259 |\\n+-------------------------------------------------------------+-----------+----------+----------+\", \"the_settings_for_the_experiments_are_given_below\": \"\", \"number_of_ensemble\": \"10\", \"number_of_layers_for_heads\": \"[300, 300, 100], [800, 800, 100], [100, 100, 100] (also used for Knowledge distillation and Prior Networks as last layers)\", \"weight_decay\": \"[1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8]\", \"temperature\": \"[1., 2.5, 5., 7.5, 10.]\", \"dropout_rate\": \"[0.0, 0.1, 0.25, 0.5, 0.75, 0.9]\\n\\nDetailed comments to all reviews are below as replies to each review.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to distill the predictions of an ensemble with a multi-headed network, with as many heads as members in the original ensemble. Distillation proceeds by minimizing the KL divergence between the predictions of each ensemble member with the corresponding head in the student network. Experiments illustrate that the multi-headed architecture approximates the ensemble marginally better than approaches that use a network with a single head.\\n\\nThe paper presents a straightforward idea and fairly unsurprising results \\u2014 a multi-headed architecture with each head matching an ensemble member more faithfully represents the original ensemble. This improved fidelity, however, comes at the cost of increased computation and storage requirements (which scale linearly with the size of the ensemble). It is unclear that the marginal improvements demonstrated justify the increased cost and how this approach would scale to larger ensembles. The paper would have been more interesting if the authors had managed to demonstrated significant improvements over competitors on not toy (MNIST / CIFAR) problems. Unfortunately, this is not the case and the fact that similar ideas (Lan et al.) have been proposed in the past (which the authors, to their credit, cite) leads me to recommend a rejection.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Overview:\\nThis work introduces a new method for ensemble distillation. The problem of making better ensemble distillation methods seems relevant as ensembles are still one of the best ways to estimate uncertainty in practice (although see concerns below). The method itself is a simple extension of earlier \\u201cprior networks\\u201d: the original method suggested to fit a single network to mimick a distribution produce by given ensemble, and here authors suggest to use multi-head (one head per individual ensemble member) in order to better capture the ensemble diversity. \\nAuthors report results on multiple relatively standard benchmarks (MNIST, CIFAR, etc), and seem to outperform the baseline by a small margin. The choice of baselines is reasonable.\", \"writing\": \"The paper is well-written, illustrations are good.\", \"decision\": \"The method itself is very easy to implement, and does seem to outperform the baseline (prior networks). However, I am a bit concerned that the method itself seems like a trivial extension of the prior work, and does not really provide much addition insight. In addition, the results are reported on a set of small-scale benchmarks and seem incremental: it can be OK, but it would be really great to see a somewhat more realistic application.\\nThus, I am on the fence with this one, but generally positive about this work, thus \\u201cweak accept\\u201d rating.\\n\\nQuestions / concerns:\\n* I honestly do not see the point on having an additional column in tables if all the values are N/A. \\n* The names in Table 4 are mixed up.\\n* Arguably, a lot of applications that would actually rely on uncertainty estimation might require online training of some sort. This means that in those scenarios one does not actually have access to a pre-trained ensemble. I understand that this might not be the main focus of this work, but it seems like a major limitation of \\u201cdistillation\\u201d approaches in general, which should / could be addressed in some way?\\n\\n<update>\\nThanks for a detailed answer. I am not very convinced by the argument about online-vs-offline training, but I do not see a reason to decrease my rating. I can see this work useful in practice.\\n</update>\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"Summary & Pros\", \"This paper proposes a simple yet effective distillation scheme from an ensemble of independent models to a multi-head architecture for preserving the diversity of the ensemble.\", \"The proposed scheme provides the same advantages of the ensemble in terms of uncertainty estimation and predictive performance, but it is computationally efficient compared to the ensemble.\", \"This paper experimentally shows that multi-head architecture performs well on MNIST and CIFAR-10 in terms of accuracy and uncertainty.\", \"Concerns #1: Novelty of the proposed method\", \"The multi-head architectures have been widely used in various settings, especially multi-task learning. As the authors mentioned, it also used for online distillation [1]. Although its goal is different from this paper, just applying such multi-head architectures seems to be incremental.\", \"Concerns #2: Insufficient experiments\", \"To evaluate OOD detection quality, ID/OOD datasets should be stated and various metrics (e.g., AUROC) should be measured like other literature, e.g., Table 2 in [2]. Such OOD detection quality is important to evaluate the quality of uncertainty estimation.\", \"This paper provides experiments on only small-sized 10-class datasets, MNIST and CIFAR10. To verify the effectiveness of the proposed distillation method, other large-sized datasets should be tested, e.g., CIFAR-100, ImageNet.\", \"There is no ablation study on the effect of the number of size of heads in Hydra. To achieve similar performance to the ensemble, how many heads are required?\", \"Concerns #3: Week efficiency\", \"As reported in Table 5, in the case of CIFAR10, Hydra has 14x more parameters and 6x more FLOPs. Despite such a large number of parameters, the performance gain seems to be incremental.\", \"A comparison with an ensemble with M=14 models should be tested because this ensemble has the same number of parameters compared to Hydra with M=50 heads. I think it might achieve good performance on the evaluation metrics.\", \"Concerns #4: Incremental improvements\", \"Accuracy gain is too marginal even Hydra uses 14x more parameters.\", \"ONE [1] might be a stronger baseline because ONE achieves 94% accuracy on CIFAR-10 using ResNet32 with only 2~3 heads while Hydra achieves only 90% even it uses 50 heads. Moreover, since ONE has multiple heads, uncertainty estimation is also available. So it should be compared with the proposed method.\", \"In OOD detection tasks, Hydra underperforms Prior Networks on 5 of 8 datasets (note that PN (2.60) is better than Hydra (3.11) in the case of MNIST (test)). To overcome this gap, the proposed method requires more parameters.\", \"[1] Zhu, Xiatian, and Shaogang Gong. \\\"Knowledge Distillation by On-the-Fly Native Ensemble.\\\" Advances in Neural Information Processing Systems. 2018.\", \"[2] Andrey Malinin and Mark Gales. \\\"Predictive Uncertainty Estimation via Prior Networks.\\\" Advances in Neural Information Processing Systems. 2018.\"]}"
]
} |
H1l2mxHKvr | Few-Shot Few-Shot Learning and the role of Spatial Attention | [
"Yann Lifchitz",
"Yannis Avrithis",
"Sylvaine Picard"
] | Few-shot learning is often motivated by the ability of humans to learn new tasks from few examples. However, standard few-shot classification benchmarks assume that the representation is learned on a limited amount of base class data, ignoring the amount of prior knowledge that a human may have accumulated before learning new tasks. At the same time, even if a powerful representation is available, it may happen in some domain that base class data are limited or non-existent. This motivates us to study a problem where the representation is obtained from a classifier pre-trained on a large-scale dataset of a different domain, assuming no access to its training process, while the base class data are limited to few examples per class and their role is to adapt the representation to the domain at hand rather than learn from scratch. We adapt the representation in two stages, namely on the few base class data if available and on the even fewer data of new tasks. In doing so, we obtain from the pre-trained classifier a spatial attention map that allows focusing on objects and suppressing background clutter. This is important in the new problem, because when base class data are few, the network cannot learn where to focus implicitly. We also show that a pre-trained network may be easily adapted to novel classes, without meta-learning. | [
"few-shot learning",
"spatial attention"
] | Reject | https://openreview.net/pdf?id=H1l2mxHKvr | https://openreview.net/forum?id=H1l2mxHKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"dpWrpAY3HV",
"Byec1WQ3oH",
"BklTklXnjr",
"HyeIMy7noS",
"rkgOJZaaKH",
"S1xtjozaYr",
"H1epAP5jFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743726,
1573822689571,
1573822436924,
1573822221626,
1571832031588,
1571789729428,
1571690453485
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2226/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2226/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2226/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2226/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2226/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2226/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper tackles the interesting problem of meta-learning in problem spaces where training \\\"tasks\\\" are scarce. Two criticisms that seems to shared across reviewers are that (i) it is debatable how \\\"novel\\\" the space of meta learning with \\\"few\\\" tasks is, especially since there aren't established standard for how many training tasks should be available, and (ii) the paper could use more comparisons with baseline methods and ablations to understand the contributions. As an AC, I down-weight criticism (i) because I don't feel the paper has to be creating a new problem definition; it's acceptable to make advances within an existing space. However, criticism (ii) seems to remain. After conferring with reviewers it seems that the rebuttal was not strong enough to significantly alter the reviewer's opinions on this issue, and so the paper does not have enough support to justify acceptance. The paper certainly addresses interesting issues, and I look forward to seeing a revised/improved version at another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Most concerns appear to be due to a very poor understanding of our work by the Reviewer, which is hard to explain for someone having published in this area and read the paper thoroughly. We do our best to explain below.\", \"c1\": \"\\\"They try to propose a new problem, but their description shows that the problem is exactly the same as what most \\u201cfew-shot learning\\u201d works aim to solve: use a pre-trained model, train a meta-learner on few-shot training tasks, and apply it to novel test tasks.\\\"\", \"a1\": \"In summary: as explained in the 4th paragraph of the introduction, our problem differs from standard few-shot learning in that a large scale dataset from another domain is available, while in-domain base class data is lacking. This problem is more realistic than the common few-shot setting, as the other two reviewers agree.\", \"c2\": \"\\\"The algorithm does not have any important contributions comparing to existing ones: they define a prototype per class based on the pre-trained model and apply the nearest neighbor classification. The so-called \\u201cprototypical classifier\\u201d is actually the nearest neighbor classifier since no prototypical network structure is learned in the proposed method.\\\"\", \"a2\": \"As we explain in section 2/\\\"Contribution\\\", our contribution is to introduce this new variant of few-shot learning and study the effects of spatial attention and adaptation in this new setting. We do not claim anywhere that the prototype classifier is our contribution. On the contrary, we discuss it in section 2/\\\"Prototypes\\\" as background. In section 2/\\\"Related work\\\", we also explain how prototypical networks (Snell et al., 2017) use this classifier in a meta-learning setup.\\n\\nWe use this prototype classifier in the adaptation stage where at each iteration, the prototypes of the novel classes are computed with the (updated) features of the support examples. We use standard cross-entropy loss on the output of a cosine classifier (2) having the prototypes as class weights, as described in section 4.2. This may not be a standard choice, but it is not a claimed contribution either. At inference, the prediction is indeed the nearest prototype as in (Snell et al., 2017).\", \"c3\": \"\\\"I would not call the weighted average as \\\"attention\\\" because it is not: the weight in attention is computed by a module with learnable parameters, while the weight in this paper is computed by the entropy of a pre-defined model\\u2019s output prediction.\\\"\", \"a3\": \"As a matter of terminology, visual attention has existed in computer vision long before being computed by a learnable module. A very well-known example is Itti, Koch and Niebur, A model of saliency-based visual attention for rapid scene analysis, PAMI 1998.\", \"c4\": \"\\\"The \\\"spatial attention\\\" only makes sense when the pre-trained domain\\u2019s classes can describe the main concepts appearing in the images of novel classes. This assumption is too strong since it requires class-level (rather than lower-level) relationships.\\\"\", \"a4\": \"As shown in Figure 1 and as demonstrated in the results, our spatial attention mechanism can indeed generalize to novel classes, which is remarkable for its simplicity and absence of learnable parameters. 
As we discuss, an interpretation is that \\\"uncertainty over a large number of such classes may express anything unknown like background.\\\" Reviewer #2 appears to agree on the validity of this argument.\", \"c5\": \"\\\"The base training is not necessary in the algorithm: it is only used for fine-tuning theta and W. As the authors said at the beginning of Section 4.1, they can directly solve novel tasks based on the pre-trained model.\\\"\", \"a5\": \"Certainly we can solve new tasks, but is that good enough? On CUB for instance (Table 1), k=5 can be up to 18% better than k=0; k=ALL can be more than 40% better.\", \"c6\": \"\\\"The experiments show that the pre-trained model is helpful in few-shot learning, which is a known fact.\\\"\", \"a6\": \"Pre-training on a large-scale dataset from another domain has not been studied in the context of few-shot learning. This comment is apparently due to the poor understanding of the problem, as addressed in A1.\", \"c7\": \"\\\"The writing of this paper is very poor: a lot of typos and grammar errors, inconsistency between narratives, abuse of notation, wrong equation references, even missing punctuation. They make the paper hard to understand.\\\"\", \"a7\": \"We are always open to constructive criticism. A comment as severe as this would deserve at least some concrete examples for each kind of writing error.\"}",
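The rebuttal above describes the prototype classifier in words (A2). A minimal sketch of that classifier in PyTorch, for illustration only; the temperature `tau` and function names are assumptions, not the paper's exact parameterisation:

```python
import torch
import torch.nn.functional as F

def class_prototypes(support_feats, support_labels, n_classes):
    # Prototype = mean feature of the support examples of each class.
    return torch.stack([support_feats[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def cosine_logits(feats, prototypes, tau=10.0):
    # Cosine classifier with prototypes as class weights; trained with
    # standard cross-entropy during adaptation, and at inference the
    # prediction is simply the nearest prototype (argmax of these logits).
    f = F.normalize(feats, dim=-1)
    w = F.normalize(prototypes, dim=-1)
    return tau * f @ w.t()
```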
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your review. Please find our response below.\", \"c1\": \"\\\"However, I doubt the novelty and effectiveness of the attention way used in the paper. The attention module helps the model focuses on the objects not the background, which is absolutely correct. But there are already some relevant studies in the missing reference Large-Scale Long-Tailed Recognition in an Open World, CVPR2019.\\\"\", \"a1\": \"Thank you for pointing out this related work, which we shall discuss. Liu et al. also use an attention mechanism in their work. However, the attention is computed from the feature maps of the embedding network trained on in-domain classes. In our work, because in-domain base classes images might not be available we propose an attention mechanism that is as generic as possible. Their ablation study shows that using a spatial attention mechanism can improve few-shot accuracy by a small margin of not more than 1%, which is consistent with our findings.\", \"c2\": \"\\\"Also, from the results, the significant improvements come from the weights of the pre-trained model but not the attention used.\\\"\", \"a2\": \"Of course, a pre-trained model performs a lot better than a model trained from scratch on the base class data of standard few-shot learning benchmarks, even when the pre-training and few-shot domains are very different (Places and CUB). This is exactly our motivation to study a practical setting that is completely overseen in all work on few-shot learning so far. That said, the choice of pre-trained model is not a method to be compared to others but rather what defines the problem (in all tables for instance, we explicitly say \\\"Baselines to be compared only to randomly initialized with k=ALL\\\"). In this problem, the gains coming from attention/adaptation are in general consistent across all experiments and can be up to 8% as discussed in A3 to Reviewer #3. These findings are interesting for a problem that is studied for the first time.\", \"c3\": \"\\\"Also, I am curious about the dense classification used in the adaptation phase. Will it achieve similar performance with fine-tuning using just standard loss?\\\"\", \"a3\": \"As described in section 4.2, during the adaptation phase, we use the standard cosine classifier and the loss function is standard cross-entropy. Dense classification is only used during base class adaptation, similarly to (Lifchitz et al., 2019). We also experimented with dense classification at stage 2, which was inferior. We can discuss that.\", \"c4\": \"\\\"Btw, according to the formatting instructions, the abstract should be limited in one paragraph.\\\"\", \"a4\": \"Fixed.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your review. Please find our response below.\", \"c1\": \"\\\"I would modify it so instead of discarding miniImageNet classes that are overlapping with Places I would discard the problematic Places classes. This way it will be easier to compare to standard FSL.\\\"\", \"a1\": \"In Appendix B, we show results on the full miniImageNet dataset. Compared to the modified version, all results are increased nearly uniformly regardless of the initialization being random or pretrained. Therefore, conclusions made with the modified miniImageNet hold when there is some overlap between the pre-training dataset and the few-shot dataset. Removing the overlapping classes from Places before pre-training is something that we explicitly exclude in the definition of the problem: \\\"we do not have access to its training process or data\\\", as stated at several places. This choice stems from a very practical consideration: it allows for large networks pre-trained on large-scale data, which can be used off-the-shelf without repeating the process in every paper. This makes it harder to control overlap, which is however compensated by limiting the amount of base class data.\", \"c2\": \"\\\"Also, I don't understand why for CUB the benchmarks includes k={0,1,5} while for miniImageNet it is k={0,20,50}, obviously k={0,1,5} are more interesting.\\\"\", \"a2\": \"Contrary to CUB where k=1 is already bringing noteworthy improvements over k=0, base training with k<20 images on miniImageNet per class does not bring significant improvement. One interpretation is that CUB, being a fine-grained dataset, has low variety of visual content such that few examples are enough to adapt the pre-trained model to the new domain. Another interpretation is that due to the larger domain gap between Places and CUB, few examples can bring significant improvement. We shall discuss. We can of course add a few more measurements as well.\", \"c3\": \"\\\"As for the suggested method, I find it hard to judge since there are no strong baselines to compare against.\\\"\", \"a3\": \"We are comparing to recent methods that are using the same embedding network and data for fair comparison. The baseline we provide (base class training with dense classification) is simple but has state of the art performance in the classic few-shot setup. On CUB, it is only outperformed slightly by the recent implementation of prototypical network of Chen et. al. On miniImageNet (Appendix B) our baseline outperforms the same implementation by a large margin. The two models that outperform our baseline on miniImageNet (Ensemble, CTM) are more complex and applying them in our setup is not straightforward (for instance, initializing an ensemble out of an off-the shelf network). Since this problem is studied for the first time, easy reproduction is important. Besides, on CUB, Ensemble is outperformed by our baseline and CTM is not available. Overall, considering performance and simplicity, we have found our baseline the best choice. We are already discussing this at the end of Appendix B.\", \"c4\": \"\\\"Also, the ablation study of removing the attention and/or adaptation doesn\\u2019t result in a definitive conclusion.\\\"\", \"a4\": \"Spatial attention improves few-shot classification when few examples of the base classes are available (k is small), which is consistent across datasets. When k is larger, the gain from spatial attention is lower but is still consistent. 
When k=ALL, spatial attention does not bring any accuracy improvement but does not degrades it significantly either (greatest loss is by 0.3%), making it a safe choice especially for small k. Improvements of adaptation are consistent everywhere. The gain is impressive (up to 8%) on CUB with few base classes (k=0 or k=1). As we discuss, this is very important because labeled base class data may not be available.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"A new task is suggested, similarly to FSL the test is done in an episodic manner of k-shot 5-way, but the number of samples for base classes is also limited. The model is potentially pre-trained on a large scale dataset from another domain. The suggested method is applying spatial attention according to entropy criteria (or certainty) of the original classifier (from a different domain).\\n\\n\\nI think the suggested task is important and more realistic than the usual FSL benchmarks. I would modify it so instead of discarding mini-imagenet classes that are overlapping with Places I would discard the problematic Places classes. This way it will be easier to compare to standard FSL. Also, I don\\u2019t understand why for CUB the benchmarks includes k={0,1,5} while for mini-imagenet it is k={0,20,50}, obviously k={0,1,5} are more interesting.\\n\\nAs for the suggested method, I find it hard to judge since there are no strong baselines to compare against. Also, the ablation study of removing the attention and/or adaptation doesn\\u2019t result in a definitive conclusion.\", \"update\": \"While your comments do weaken some of my concerns, I'm afraid it is not enough for changing my previous rating. I think being more careful about the benchmark definition with regards to train/test overlap and comparing to stronger baselines will help improve the paper for future submissions.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed a new realistic setting for few-shot learning that we can obtain representations from a pre-trained model trained on a large-scale dataset, but cannot access its training details. Also, there may be a large domain shift between the dataset of the pre-trained model and our dataset. For the pre-trained model, they will not only use its weights but also use it to generate a spatial attention map and help the model focuses on objects of images. Back to the standard few-shot classification problem, they will first adapt the model with base class samples and then adapt to novel classes.\\n\\nThe proposed new setting is very meaningful since we already have many powerful pre-trained models and why not exploit its usage for few-shot learning problems. However, I doubt the novelty and effectiveness of the attention way used in the paper. The attention module helps the model focuses on the objects not the background, which is absolutely correct. But there are already some relevant studies in the missing reference Large-Scale Long-Tailed Recognition in an Open World, CVPR2019. Also, from the results, the significant improvements come from the weights of the pre-trained model but not the attention used. Is the attention way used in the paper a good way to exploit the pre-trained model for few-shot classification problems?\\n\\nAlso, I am curious about the dense classification used in the adaptation phase. Will it achieve similar performance with finetuning using just standard loss?\\n\\nBtw, according to the formatting instructions, the abstract should be limited in one paragraph.\\n\\n=========================================================\", \"after_rebuttal\": \"I thank the author for the response.\\n\\nI do see there are differences in the way of generating attention masks between the proposed work and (Liu et al.). But the improvements from the attention module is not significant, especially when using all base data.\\n\\nI keep my original scores.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper introduces a problem \\u201cfew-shot few-shot learning\\u201d that aims to firstly transfer prior knowledge from one domain to the domain where the base training tasks reside, and then train a few-shot learning model on training tasks and apply it to novel test tasks. The two \\u201cfew-shot\\u201d in the name refers to base training tasks and novel test tasks. In their algorithm, they use a model pre-trained on another dataset as the prior knowledge and fine-tune it on training tasks. During the test, they use the weighted average of samples\\u2019 representations per class as the prototype of each class, where the weight is large for samples with more discriminative prediction over pre-trained domain\\u2019s classes. Afterward, classification is reduced to finding the nearest neighbor among the class prototypes. Some experiments show that the pre-trained model can improve few-shot classification accuracy.\", \"my_major_concerns\": \"1) They try to propose a new problem, but their description shows that the problem is exactly the same as what most \\u201cfew-shot learning\\u201d works aim to solve: use a pre-trained model, train a meta-learner on few-shot training tasks, and apply it to novel test tasks. \\n\\n2) The algorithm does not have any important contributions comparing to existing ones: they define a prototype per class based on the pre-trained model and apply the nearest neighbor classification. The so-called \\u201cprototypical classifier\\u201d is actually the nearest neighbor classifier since no prototypical network structure is learned in the proposed method.\\n\\n3) I would not call the weighted average as \\u201cattention\\u201d because it is not: the weight in attention is computed by a module with learnable parameters, while the weight in this paper is computed by the entropy of a pre-defined model\\u2019s output prediction. \\n\\n4) The \\u201cspatial attention\\u201d only makes sense when the pre-trained domain\\u2019s classes can describe the main concepts appearing in the images of novel classes. This assumption is too strong since it requires class-level (rather than lower-level) relationships.\\n\\n5) The base training is not necessary in the algorithm: it is used to only fine-tuning theta and W. As the author said in the beginning of Section 4.1, they can directly solve novel tasks based on the pre-trained model.\\n\\n6) The experiments show that the pre-trained model is helpful in few-shot learning, which is a known fact.\\n\\n7) The writing of this paper is very poor: a lot of typos and grammar errors, inconsistency between narratives, abuse of notations, wrong equation reference, even missing punctuations. They make the paper hard to understand.\\n\\n-------------\", \"update\": \"Thanks for the authors' rebuttal! After reading their rebuttal, I still have main concerns about the novelty of the problem and the writing quality. The proposed method tends to be incremental.\"}"
]
} |
BJlnmgrFvS | BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning | [
"Xinyue Chen",
"Zijian Zhou",
"Zheng Wang",
"Che Wang",
"Yanqiu Wu",
"Qing Deng",
"Keith Ross"
] | The field of Deep Reinforcement Learning (DRL) has recently seen a surge in research in batch reinforcement learning, which aims for sample-efficient learning from a given data set without additional interactions with the environment. In the batch DRL setting, commonly employed off-policy DRL algorithms can perform poorly and sometimes even fail to learn altogether. In this paper we propose a new algorithm, Best-Action Imitation Learning (BAIL), which unlike many off-policy DRL algorithms does not involve maximizing Q functions over the action space. Striving for simplicity as well as performance, BAIL first selects from the batch the actions it believes to be high-performing actions for their corresponding states; it then uses those state-action pairs to train a policy network using imitation learning. Although BAIL is simple, we demonstrate that BAIL achieves state-of-the-art performance on the Mujoco benchmark, typically outperforming Batch-Constrained deep Q-Learning (BCQ) by a wide margin. | [
"Deep Reinforcement Learning",
"Batch Reinforcement Learning",
"Sample Efficiency"
] | Reject | https://openreview.net/pdf?id=BJlnmgrFvS | https://openreview.net/forum?id=BJlnmgrFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"4mOAMP_p_",
"S1gQk7josH",
"rkxaR_9sor",
"Ske2Jgqssr",
"SJelB0Pj5H",
"SJlbdX86tr",
"S1gZB2I2YB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743697,
1573790426917,
1573787861434,
1573785572442,
1572728376044,
1571804008794,
1571740728734
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2225/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2225/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2225/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2225/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2225/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2225/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose a novel algorithm for batch RL with offline data. The method is simple and outperforms a recently proposed algorithm, BCQ, on Mujoco benchmark tasks.\", \"the_main_points_that_have_not_been_addressed_after_the_author_rebuttal_are\": [\"Lack of rigor and incorrectness of theoretical statements. Furthermore, there is little analysis of the method beyond the performance results.\", \"Non-standard assumptions/choices in the algorithm without justification (e.g., concatenating episodes).\", \"Numerous sloppy statements / assumptions that are not justified.\", \"No comparison to BEAR, making it challenging to evaluate their state-of-the-art claims.\", \"The reviewers also point out several limitations of the proposed method. Adding a brief discussion of these limitations would strengthen the paper.\", \"The method is interesting and simple, so I believe that the paper has the potential to be a strong submission if the authors incorporate the reviewers suggestions in a future submission. However, at this time, the paper falls below the acceptance bar.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your thorough review. We appreciate that you \\\"like the simplicity of the approach and the fact that it is much easier to understand than existing works like BCQ\\\".\", \"for_your_second_point_you_write\": \"\\\"Experimental results are a little unsettling. The primary reason is that in all of the plots, BCQ, BAIL, BC aren't starting from the same test return at 0 parameter updates!\\\" The reason why the BAIL learning curve is flat is because the imitation learning approach learns very fast. In our experiments, we evaluate the performance of the policy every 5,000 gradient updates, during which 500,000 data points are seen. Therefore, the BAIL policy at the first evaluation point in the plot already is already very good. The experimental results are indeed correct: the simple BAIL algorithm provides better performance than the more complex BCQ algorithm. We have also made our code publicly available so that you try for yourself. The results are not \\\"unsettling\\\".\\n\\nWe agree with your first point, there may be some environments for which BAIL will not do well for reasons you describe (regress to terrible action). But we feel this comment is unfair. Many environments, including all the Mujoco environments, due not have this issue. Note that the BCQ paper also only considers the Mujoco environments, and the BCQ and BEAR could possibly have this problem as well. We feel that our BAIL algorithm should be judged by its performance on existing popular benchmarks and not on imaginary to-be-created benchmarks. \\n\\nThe deterministic assumption is needed for the upper envelope. If the environment is stochastic, then the upper envelope would be enveloping the maximal of the possible returns for a given policy, when the actual value function would be an average of the returns. \\n\\nWe feel that REM and BCQ/BAIL are largely orthogonal. Therefore we feel REM is outside the scope of the paper. In a subsequent paper, we may consider combining BAIL and REM (and doing an ablation study). \\n\\nIn the next version, we will remove the two statements you feel are subjective. \\n\\nConcerning your last point, nothing fishy whatsoever is going on. This is something we observed that other authors have not brought to light. (Unlike our paper, the BCQ and BEAR papers only looked at one seed to generate a batch. Their results may be different for multiple seeds.) Different batches with different seeds and the same algorithm can indeed lead to widely different results for batch RL. We agree that further work is needed to understand this phenomenon. But we also feel that is question should be addressed in separate paper. \\n\\nGiven our response above and the veracity of our experimental results, would you consider raising your score? Don't you feel that the novelty, simplicity, and performance of the algorithm on the existing competing benchmarks should be the main criteria for scoring?\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you thorough review. Below we respond to many of your comments.\\n\\nWe feel the paper should largely be judged on the novelty, simplicity, and efficacy of the BAIL algorithm. The BAIL algorithm is significantly simpler than BCQ but nevertheless provides better performance. These contributions alone merit acceptance at ICLR. \\n\\nYou write \\\"the paper is very poor in detail that makes readers hard to be convinced with the results.\\\" Nevertheless, Reviewer 2 writes \\\"Paper is well-written. It was clear, lucid and descriptive.\\\" We can assure you that there are some senior authors on the paper. The experimental results are correct, as we discuss below. \\n\\n1. Whether we use dual-gradient descent or a penalty with K =10,000 to solve a constrained optimization problem is a matter of taste. The authors have significant experience in constrained optimization, and prefer the penalty approach for this problem. The K value here is a hyper-parameter. What is most important, however, is that approach as is works: using a simple loss function, it beats BCQ by a wide margin. We feel you should give more credit to the simplicity and novelty of the algorithm, and to the fact that it provides excellent performance. \\n\\n2. Your point (2) is well-taken. However, like most of deep learning, our BAIL algorithm is a heuristic that is motivated by mathematics but does not have a mathematical guarantee. What is most important is that it is simple and it works, as shown in the experimental results.\\n\\n3. It is a one-standard deviation confidence interval. We will clarify this in the next version. We will also clarify how we compute the improvement. \\n\\n4. For early stopping, we took the standard approach of monitoring the loss function over a validation set (which is chosen from the batch data). In the next version, we will clarify this. \\n\\n5. Yes, all the final polices are evaluated with a deterministic policy, not with a stochastic policy. \\n\\nAlso, we feel your last comment is unfair. At the time of submission, the BEAR code was not available. We feel that BEAR is complex (as compared to BAIL) and would have been difficult to implement. Furthermore, even if we had implemented it, reviewers may have then questioned our implementation. It is the responsibility of the authors of the BEAR paper to provide their code, and not ours to figure out how it should be implemented. It is our understanding that the BEAR code has been recently released. However, this submission should be judged in the context of what code was publicly available at the time of release. \\n\\nYou seem to be surprised that the experimental results can be so good with such a simple algorithm. But the fact is that it's true. One doesn't need VAE's and other machinery to achieve good results. We made our code available at the time of submission, and the reviewers are invited to check it out for themselves.\"}",
"{\"title\": \"Response to your review\", \"comment\": \"Thank you for your thoughtful review. As you and the other reviewers have noted, the approach of imitating good actions is novel as so is the upper envelope. The principal contributions of the paper are(1) introduce these novel approaches, and (2) through careful and thorough evaluation show that BAIL soundly beats BCQ, thereby achieving state of the art performance. We feel the paper should be largely judged by these two contributions.\\nYour concerns about the rigor of the theorem statements seems to be mostly a semantic one and can easily be corrected. Given the high novelty (and simplicity) of the approach, and the excellent experimental results, perhaps you can consider raising your score for this paper? If you like, we can remove the \\\"smoothing\\\" terminology and/or remove the theorems altogether from the paper. \\n\\nThe BCQ paper considers only the environments Hopper, HalfCheetah, and Walker, and uses batch data sets generated by DDPG. In order to provide a fair comparison with the BCQ paper, we choose the same environments and batch generating techniques. We also wanted to enlarge the scope of the experimentation by including Ant. However, DDPG for Ant fails to learn and generate useful batch data. For that reason, we used SAC to generate the batch data for Ant.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\nThis paper studies the problem of learning a policy from a fixed dataset. The authors propose to estimate a smooth upper envelope of the episodic returns from the dataset as a state-value function. The policy is then learned by imitating the state action pairs from the dataset whose actual episodic return is close to the estimated envelope.\", \"recommended_decision\": \"The direction of imitating \\\"good\\\" actions from the dataset is interesting. The intuition of estimating an upper envelope of the value function seems reasonable. However, I feel like this paper is not ready to be published in terms of its overall quality, mainly due to the lack of correctness, rigorousness and justification in statements and approaches.\", \"major_comments\": [\"On the top of page 4: \\\"Because the Mujoco environments are continuing tasks, it is desirable to approximate the return over the infinite horizon, particularly for i values that are close to the (artificial) end of an episode. To do this, we note that the data-generation policy from one episode to the next typically changes slowly. We therefore apply a simple augmentation heuristic of concatenating the subsequent episode to the current episode, and running the sum in (1) to infinity.\\\" I cannot see how this approach is validated. The reset of initial state makes cross-episode cumulative reward from a state s not an approximation to the real return from state s. Estimating the infinite horizon return from finite horizon data is indeed a challenge here and simply cut the return at the end of an episode is be problematic. But the solution proposed by the authors is wrong in principle and cannot be simply justified by \\\"good empirical performance\\\". I feel hard to regard this choice a valid part of an algorithm unless further justification can be provided.\", \"Statements of theorems (4.1 and 4.2) are non-rigorous and contain irrelevant information: \\\"lambda-smooth\\\" is not an appropriate terminology when lambda is the weight of the regularizer. The actual \\\"smoothness\\\" also depends on the other term in the loss (same lambda does not indicate same smoothness in different objectives). For the same reason, Theorem 4.2 is wrong as changing K also changes the smoothness of the learned function. Proof of Theorem 4.2 in appendix is wrong as the authors ignore the coefficients in the last equation. Theorem 4.1-(1) cannot be true unless how V_\\\\phi is parameterized is given: e.g. if there is no bias term or the regularization is applies to the bias term V will always output 0 as lambda 0-> \\\\infty. The \\\"2m+d\\\" in Theorem 4.1-(2) is irrelevant to this work and cannot be justified without more detailed statements about how the network is parameterized. I appreciate the motivation that the authors try to validate the use of their objective to learn a \\\"smooth upper envelope\\\" but most of these statements are somewhat trivial and/or wrong section 4.1 does not actually deliver a valid justification.\", \"The use of \\\"smooth upper envelope\\\" itself can bring both over-estimation and under-estimation. 
For example, if one can concatenate different parts from different episodes to get a trajectory with higher return, the episodic return for the states along this trajectory is an under-estimate. Although it is fine to use a conservative estimate it would be better to be explicit about this and explain why this may not be a concern. On the other hand, it can bring over estimation to the state-values due to the smoothness enhanced to the fitted V. It would be better to see e.g. when these concerns do not matter (theoretically) or they are not real concerns in practice (by further inspecting the experiments).\", \"Regarding Experiments: Why Hopper, Walker, HalfCheetah are trained with DDPG while Ant is trained by SAC? The performance of Final-DDPG/SAC after training for 1m steps looks way below what SAC and TD3 can get. Is it because they are just partially trained or noise is added to them? The baseline online-trained policy should not contain noise for a fair comparison. That said, in batch RL setting it is not necessary to compare to online-trained policy because it is a different setting. But if the authors want to compare to those, choice of baseline should be careful. An important baseline which is missing is to run vanilla DDPG/TD3/SAC as a batch-mode algorithm.\"], \"minor_comments\": [\"Section 3, first paragraph: It is not very meaningful to say \\\"simulators are deterministic so deterministic environments are important\\\". Simulators are made by humans so they can be either deterministic or stochastic. \\\"many robotic tasks are expected to be deterministic environments\\\" is probably not true. I do not view \\\"assuming deterministic envs\\\" as a major limitation but I do not find these statements convincing as well. Similarly, the argument for studying non-stationary policy seems unsupportive: if the dataset comes from training a policy online then why do we care about learning another offline policy rather than just use or continue training the online policy. One argument I can see is that the online policy is worse. But the fact that these policies are worst than running e.g. SAC for a million steps makes the motivation questionable. Again, I do not view \\\"choice of setting\\\" as a limitation but I just find these statements a bit unsupportive.\"], \"potential_directions_for_improvement\": \"To me the main part of the paper that looks problematic is Section 4.1 (both the approximation of infinite horizon returns and the theorems). It would be better to see a more rigorous and coherent justification of this approach (or some improved version), e.g. by either presenting analysis that is rigorous, correct and actually relevant or leave the space for more detailed empirical justification (e.g. whether potential over/under-estimating happens or not, comparing the estimated V to real episodic return of the learned policy).\"}",
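The augmentation heuristic questioned in the review's first major comment amounts to computing discounted returns as if the whole batch were one trajectory. A sketch, assuming a discount factor (which the review does not specify):

```python
import numpy as np

def concatenated_returns(rewards, gamma=0.99):
    # Discounted returns computed over all episodes concatenated into one
    # long trajectory, so the sum near an (artificial) episode end simply
    # continues into the next episode -- the heuristic the review questions.
    out = np.empty(len(rewards), dtype=np.float64)
    g = 0.0
    for t in range(len(rewards) - 1, -1, -1):
        g = rewards[t] + gamma * g
        out[t] = g
    return out
```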
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper tries to solve a batch reinforcement learning problem with a very simple but efficient algorithm. It first learns a smooth upper bound of Monte Carlo returns in the batch data (called the \\\"upper envelope\\\"). Then, the algorithm chooses state action pairs of the batch data that have returns larger than constant times the upper envelope. It lowers the constant until the algorithm gets 25% of the data. Then the algorithm trains the policy on chosen state-action pairs. The algorithm is shown to outperform BCQ in experiments.\\n\\nAlthough I like the idea of the paper, I vote for rejection. While there is no theoretical guarantee on the performance of the algorithm, the design of the algorithm does not follow the usual design the other researchers follow. The way of reporting the experiment results does not seem very professional. I recommend the authors to consult with some other researchers who have publication experience. In the current form, the paper is very poor in detail that makes readers hard to be convinced with the results.\", \"these_are_some_points_that_i_could_not_understand\": \"1. Why do you fix K=10000 on modified loss instead of dual gradient descent for constrained optimization?\\n2. How do you guarantee that choosing (s,a) such that G>xV gives you good samples? Since mean returns are not zero, it won't pick the top 25% actions for all states. States with the high mean return will have all of its samples included, while states with the low mean return will have all of its samples excluded. Although the authors concatenated all the experiences to compute returns (which is ad-hoc as well), the initial states will have a lower return than other states. This means that most of the actions of the initial states will be excluded in the training set while more actions of the other states will be included, which does not seem desirable. (e.g. in Figure 1 Ant. If we set x=0 (extreme case), states of timestep >600000 will be all included where t<600000 will be partially excluded. )\\n3. In the explanation of Figure 2, it is written as \\\"standard deviation confidence interval\\\". Is it standard deviation, or confidence interval? Also, why are the standard deviation in the Figure 2 and the Table 1 so different? How do you compute Improvement in the Table 1? What happens if the environment gives negative returns only (i.e. Pendulum), such that BCQ gives you -1000 and BAIL gives you -500?\\n4. As claimed in theorems, V=max(G) if lambda->infinity. This means that the \\\"Highest Returns\\\" in figure 3 is also one specific hyperparameter choice of the suggested algorithm. There might be a better choice of regularization that outperforms both BAIL and Highest Returns as early-stopping done in the paper is just one random amount of regularization. What was the early-stopping criterion and how is it chosen? How do we know it is the best regularization option?\\n5. Is the final DDPG or final SAC evaluated with a deterministic policy? According to the paper, I assume that it was not. Those algorithms usually add large noise while training for exploration, and such noise is removed while in evaluation. 
In Bear Q learning, better action selection technic is used, which chooses the action sample that maximizes critic Q. Is the evaluations really fair for all algorithms? As far as I know, Mujoco environments are deterministic except the initial state sampling, and there should only be very small variance.\\n\\nAlso, I believe the paper should be compared to Bear Q learning as well, as it is very easy to implement and outperforms BCQ by a large margin.\"}",
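The selection rule the review describes (keep pairs with G > x·V, lowering x until 25% of the data passes) can be sketched as follows; note the ratio-based threshold implicitly assumes positive returns, which is exactly the Pendulum concern raised in point 3:

```python
import numpy as np

def select_pairs(returns, v_envelope, frac=0.25):
    # Keep (s, a) pairs whose return G is at least x times the envelope
    # value V(s), with x lowered until `frac` of the batch is selected.
    ratios = returns / np.maximum(v_envelope, 1e-8)
    x = np.quantile(ratios, 1.0 - frac)  # threshold admitting the top `frac`
    return ratios >= x                   # boolean mask over the batch
```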
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary of Claims:\\n\\nThe paper proposes a batch RL method that they claim is simpler than most existing methods that try to avoid the extrapolation error that is prevalent among batch RL methods. They do this by completely avoiding the minimization/maximization (cost/reward) of the approximate value function that is fit to the batch transitions. Instead, they train an approximation for the state value function's tight upper bound (which they refer to as the upper-envelope) by using their monte-carlo returns. By fitting such an approximator, they sample the state-action pairs that are close to the envelope (thus have high values/returns), and use behavioral cloning to fit a parameterized policy to those state-action pairs.\", \"decision\": \"Weak Reject.\", \"my_decision_is_influenced_by_two_main_reasons\": \"(1) Although the simplicity of the method is apparent and a very desirable feature, the authors don't highlight situations where this can lead to bad policies. For example, consider that there are two pairs (s, a_1, s') and (s, a_2, s') in the batch that are close to the upper-envelope, and hence will both be used for training the policy. Using Behavioral cloning, the policy would regress to the mean of a_1 and a_2, which could be a terrible action altogether. The issue here is that only one of these two pairs has higher return and our policy needs to only predict that action (or in the case of tie, either one.) This can be really bad in situations where two very different actions can lead to same returns (e.g. in a reacher-like task the arm can reach a goal in two different rotations.) Even though I pointed out a very specific case, one could think of many other cases where the proposed approach might result in a bad policy. \\n\\nHaving said all of this, it might be true that such cases do not appear in practice (which I highly doubt) but its the authors job to raise and clarify that. The current set of experimental setups (mujoco locomotion problems) are not good enough evidence for that and they need experiments where optimal-policies can be multi-modal or have diverse experimental setups (manipulation etc.)\\n\\n(2) Experimental results are a little unsettling. The primary reason is that in all of the plots, BCQ, BAIL, BC aren't starting from the same test return at 0 parameter updates! In most plots BAIL starts off way higher in return than BCQ, BC with no parameter updates yet, which suggests that the experiments were not setup well. Maybe, they didn't initialize the policy in the same way for all the approaches, maybe the random seeds were not the same for all approaches, or maybe BAIL had some sort of pretraining for the policy that was not accounted for in the parameter updates. In any way, this needs to be addressed. This is also highlighted by the fact that the learning curves for BAIL are almost always flat across a million parameter updates! If you are starting off with a random initialization, there should be an upwards slope for the learning curve. 
Also, as raised in the previous point I think using these Mujoco locomotion environments is not convincing enough to claim that BAIL is a viable competitive batch RL approach.\", \"comments_and_questions\": \"(1) I like the simplicity of the approach and the fact that it is much more easier to understand than existing works like BCQ\\n\\n(2) Paper is well-written. It was clear, lucid and descriptive.\\n\\n(3) Why is the deterministic dynamics assumption needed? I am curious\\n\\n(4) The paper makes some subjective statements such as \\\"BEAR is also complex\\\", which is not substantiated well enough. Refrain from making such statements\\n\\n(5) Not comparing to BEAR because their code is not publicly available is a contentious reason. I personally feel that the authors could have reimplemented it and compared but I am not sure what the community feels about that\\n\\n(6) Is there any reason why REM cannot be applied to mujoco environments? If it can be, then why did the authors not compare to REM as well?\\n\\n(7) Another subjective statement (that is clearly wrong) \\\"many robotic tasks are expected to be deterministic environments\\\" - although this is slightly true, the reason we model environments to be stochastic is not because there is inherent randomness in them but because our state descriptions are never complete. The state descriptors are always partial and we account for them by assuming stochasticity in the dynamics. For example, consider a robotic manipulation task where if you know all the environmental factors as part of your state space(such as the friction coefficients) you can assume deterministic dynamics, else you are better off assuming stochastic dynamics because the same actuation might not result in the same motion every time (because of varying friction)\\n\\n(8) Concatenating subsequent episodes in a batch only makes sense (as the authors point out) if the policy doesn't change much across episodes. But this is not true of current off-policy RL methods like DDPG, SAC. You either need very small learning rate or a trust-region constraint to ensure that the policy doesn't change much across episodes. \\n\\n(9) Why do different batches with different seeds and the same algorithm lead to widely different results for batch RL? There is clearly something fishy here. Is it because of the off-policy RL methods used to collect the data, is it due to the batch RL method used? More investigation needed\"}"
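Decision point (1) of this review concerns the mean-regression failure mode of deterministic behavioral cloning. The sketch below, with illustrative names, shows the loss in question: with two selected pairs (s, a_1) and (s, a_2), minimizing it pulls the policy toward the average of a_1 and a_2:

```python
import torch.nn.functional as F

def bc_loss(policy, states, actions):
    # Deterministic behavioral cloning on the selected state-action pairs.
    # An MSE regression to multi-modal action targets converges to their
    # mean, which may itself be a poor action.
    return F.mse_loss(policy(states), actions)
```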
]
} |
Hygi7xStvS | Lossless Data Compression with Transformer | [
"Gautier Izacard",
"Armand Joulin",
"Edouard Grave"
] | Transformers have replaced long short-term memory and other recurrent neural network variants in sequence modeling. They achieve state-of-the-art performance on a wide range of tasks related to natural language processing, including language modeling, machine translation, and sentence representation. Lossless compression is another problem that can benefit from better sequence models. It is closely related to the problem of online learning of language models. But, despite this resemblance, it is an area where purely neural network based methods have not yet reached the compression ratio of state-of-the-art algorithms. In this paper, we propose a Transformer based lossless compression method that matches the best compression ratio for text. Our approach is purely based on neural networks and does not rely on hand-crafted features, unlike other lossless compression algorithms. We also provide a thorough study of the impact of the different components of the Transformer and its training on the compression ratio. | [
"data compression",
"transformer"
] | Reject | https://openreview.net/pdf?id=Hygi7xStvS | https://openreview.net/forum?id=Hygi7xStvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"H7U8NobZmb",
"rkxDdksnjB",
"S1lcZ1ohsH",
"BJxPqa9noS",
"HklOX_jl5r",
"r1eFrDP19S",
"BkeZpB56FH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743669,
1573855087252,
1573854977645,
1573854606877,
1572022304276,
1571940160657,
1571820985377
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2224/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2224/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2224/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2224/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2224/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2224/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes to use transformers to do lossless data compression. The idea is simple and straightforward (with adding n-gram inputs). The initial submission considered one dataset, a new dataset was added in the rebuttal. Still, there is no runtime in the experiments (and Transformers can take a lot of time to train). Since this is more an experimental paper, this is crucial (and the improvements reports are very small and it is difficult to judge if there are significant).\\nOverall, there was a positive discussion between the authors and the reviewers. The reviewers commented that concerns have been addressed, but did not change the evaluation which is unanimous reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for their feedback.\\n\\nIn this paper, we show that a method purely based on neural networks, without hand designed features, can obtain SoTA results compression results on benchmarks such as enwik8. Existing work showed a significant gap between methods purely based on neural networks, compared to methods such as PAQ8 or cmix (see https://bellard.org/nncp/nncp.pdf). Also note that for each application, there is a tradeoff between compression ratio and compression/decompression speed. Our focus in this paper is to obtain the best possible compression rate, at the expense of compression speed.\\n\\nRegarding SoTA results for compression, cmix and PAQ8 variants are the methods obtaining the best compression rates, according to https://cs.fit.edu/~mmahoney/compression/text.html, http://mattmahoney.net/dc/silesia.html or http://qlic.altervista.org/LPCB.html. Could the reviewer indicates methods outperforming these approaches that we might have missed? Note that results reported on enwik8 in the traditional language modeling setting (i.e. training on 90% of data, validating and testing on the rest) are not comparable to the compression setting we study.\\n\\nWe will add a discussion of \\\"Practical Full Resolution Learned Lossless Image Compression\\\" [1] as well as the paper mentioned by reviewer 3 in the related work. The method proposed for image compression in [1] combines arithmetic coding with a neural network. As opposed to our work, the approach is designed to enable practical compression and decompression speed with compression ratio comparable with standard methods (while we focus on compression rate only). Another difference with our work is the evaluation setting: in [1] it is assumed that both the encoder and the decoder have access to the pre-trained network for free. As PAQ8 and cmix, we do not use a pre-trained network, it has to be included in the archive in order to be accessed by the decompressor. While it is true that sending a network to the decoder can be amortized over the decompression of large amount of data, the size of the L3C network archive is 35MB and not negligible compared to the size of the compressed archive of enwik8, around 15MB. It should also be noted that PAQ8 and CMIX achieve better compression ratio than other compression algorithms on a dataset composed of large images, at the expense of compression and decompression speed (e.g. see http://qlic.altervista.org/LPCB.html).\\n\\nBERT fundamentally differs from our setting in several aspects. First, BERT is not trained as a language model to predict the next character given the preceding characters, but as a denoising auto-encoder. As such, BERT is not a generative model of sequence, and it is not straightforward to apply it to data compression. Moreover, standard BERT models are several 100s of MB in size, which would need to be included in the archive to allow the decompression. As such, it does not make using BERT practical for lossless data compression.\\n\\nWe have not yet implemented an end-to-end framework for compression and decompression. The decoder and encoder can get the same numbers provided that they have access to the same random number generator and the same seed. Finally, we will release our code with the paper to allow reproducibility.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for their feedback.\\n\\n1. We will add experiments on the silesia benchmark (http://mattmahoney.net/dc/silesia.html). Overall, our method compress the whole corpus in 33.4MB, compared to 33.3MB for cmix (we report bpc for individual files below).\\n\\n \\t| paq8 | cmix |\\tours\\ndickens | 1.58 | 1.49 | 1.59\\nmozilla\\t| 1.61 | 1.49 | 1.41\\nmr \\t| 1.68 | 1.62 | 1.48\\nnci \\t| 0.23 | 0.20 | 0.20\\nooffice\\t| 1.87 | 1.74 | 2.25\\nosdb \\t| 1.65 | 1.60 | 1.58\\nreymont\\t| 0.98 | 0.93 | 0.94\\nsamba \\t| 1.02 | 0.96 | 1.14\\nsao \\t| 4.16 | 4.16 | 4.18\\nwebster\\t| 1.00 | 0.90 | 0.84\\nx-ray \\t| 3.41 | 3.37 | 3.33\\nxml \\t| 0.41 | 0.37 | 0.54\\n\\nSimilarly to cmix and PAQ8, our method don\\u2019t pretrain the language model on a dataset different from the data to compress. The neural network is randomly initialized at the beginning of the compression and decompression phase with the same seed and is trained during both phases. The only constraint is that a given character has first to be compressed before being used to update the model. We have considered initializing the neural network with a pre-trained model. However in order to decompress the archive, the decoder needs to have access to the pretrained model used by the encoder. Thus, the size of the pre-trained model has to be added to the size of the archive. As a result, we didn\\u2019t manage to improve performances by using a pre-trained model.\\n\\n2. There is an inherent tradeoff between compression ratio and compression/decompression speed. In this paper we focus on the compression ratio, and at the expense of speed we match the performance of CMIX on enwik8. We will add runtime in the paper.\\n\\n3. Indeed, the model is trained using the softmax cross entropy. Without revisit, the weights of the network is updated every 256 characters. The gradient is obtained by backpropagating the error of the prediction associated with the 256 characters that have just been compressed. Thus during compression and decompression the network is trained for one epoch. With revisits, the number of times a given character is used to train the Transformer is increased. Enwik8 is composed of 100MB of wikipedia data in XML format.\\n\\n4. a) A revisit is a partial pass on data that have already been compressed. Revisits are performed during compression and decompression at fixed intervals to further train the network. All weight updates performed during compression have to be performed identically during decompression in order to losslessly decompressed data. Thus increasing the number of revisits makes compression and decompression slower. In particular since it is necessary to compute a forward pass to compress the data, the cost of backpropagating the error of the first prediction is amortized.\\nb) Since we made the assumption (also made in PAQ8 and CMIX) that pretrained models used for compression have to be included in the archive in order to be accessed by the decoder, revisits is just the way to use data several times to learn the parameters of the model. \\nc) During a revisit the learning rate is fixed, but it is linearly decreased over the compression phase. If the frequency of revisit F is too low, a higher revisit frequency could improve the compression ratio, especially at the beginning of the compression. We have observed that at some point increasing the frequency of revisit can be detrimental to the compression ratio. 
This can be explained by the fact that the network tends to forget the current context. Increasing the number of characters considered at each revisits improve the compression ratio but is detrimental to the compression/decompression speed. \\n\\nWe will address the minor comments in the paper, and will add the missing references to the related work.\"}",
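The revisit discussion above implies an online loop in which every 256 characters are first coded under the model's predictions and then used for one gradient step. A heavily hedged sketch: the model interface and `coder.encode` are hypothetical placeholders standing in for a Transformer forward pass and an arithmetic coder, not a real library API:

```python
import torch
import torch.nn.functional as F

def compress_chunk(model, optimizer, coder, context, chunk):
    # Predict each character of the 256-character chunk from its preceding
    # context, feed the predictive distributions to the arithmetic coder,
    # then take one gradient step. The decoder replays the identical update
    # (same initialization, seed and order), keeping both sides in sync.
    inputs = torch.cat([context, chunk[:-1]])
    logits = model(inputs)[-len(chunk):]     # next-character logits
    probs = F.softmax(logits, dim=-1)
    for p, c in zip(probs, chunk):
        coder.encode(p.detach(), int(c))     # hypothetical coder interface
    loss = F.cross_entropy(logits, chunk)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```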
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for their feedback.\\n\\nWe will update the paper to address the minor comments. Moreover, we evaluated our method on the additional Silesia benchmark (http://mattmahoney.net/dc/silesia.html), which includes files of different types (such as text, UNIX or Windows executables, databases, pdf etc). We report compression rates for our method, as well as PAQ8L and cmix v8 below, and will add it to the paper. Overall, our method compress the whole corpus in 33.4MB, compared to 33.3MB for cmix. We report bpc for individual files below.\\n\\n \\t| paq8 | cmix |\\tours\\ndickens | 1.58 | 1.49 | 1.59\\nmozilla\\t| 1.61 | 1.49 | 1.41\\nmr \\t| 1.68 | 1.62 | 1.48\\nnci \\t| 0.23 | 0.20 | 0.20\\nooffice\\t| 1.87 | 1.74 | 2.25\\nosdb \\t| 1.65 | 1.60 | 1.58\\nreymont\\t| 0.98 | 0.93 | 0.94\\nsamba \\t| 1.02 | 0.96 | 1.14\\nsao \\t| 4.16 | 4.16 | 4.18\\nwebster\\t| 1.00 | 0.90 | 0.84\\nx-ray \\t| 3.41 | 3.37 | 3.33\\nxml \\t| 0.41 | 0.37 | 0.54\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper explores the effectiveness of the Transformer architecture to the lossless data compression problem.\\nIt also proposes a method to periodically revisit tokens that were already compressed for adopting the task setting of data compression, which is essentially online learning of sequence models. \\n \\nThe authors conduct their experiments on the enwik8 benchmark.\\nThey show that the Transformer architecture obtains state-of-the-art results.\\n \\nThis paper is basically easy to follow, but several typos and statements that should be improved.\\nThe problem setting to tackle is interesting.\\nHowever, applying a deep neural network approach to data compression problem has already been discussed in several previous studies.\\nTherefore, the novelty of this paper is somewhat limited.\\n \\n \\nMy main concern of this paper is that the proposed method was only evaluated on a single benchmark data.\\nI believe that it is a bit weak to support the effectiveness of the proposed method.\\nThe authors should evaluate their method on several benchmark datasets that have different aspects, such as settings with easy and hard to compress.\", \"minor_comment\": \"In Section 4.2, there is a missing citation.\\n... we do not use Adaptive Inputs (Baevski & Auli, 2018; ?) ...\\nPlease check and fix it.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper provides a method for lossless compression of text. It's heavily inspired by the language modelling methods that have been developed for the purposes of predicting the next character/word in a sentence, and it uses this idea as its backbone. The only difference is that the results are presented in the compression setting.\", \"i_think_we_should_reject_this_paper_due_to_the_following_reasons\": [\"I don't see enough of a difference between this and previous work\", \"the results are nowhere near SoTA for compression, despite the method being sold to this community\", \"there are other papers that do lossless neural compression that could have been used to make a comparison rather than making no comparison at all. For example, \\\"Practical Full Resolution Learned Lossless Image Compression\\\" (CVPR 2019) provides a framework for image rather than text, but that could be adapted to this field without any major changes (predict convolutionally characters, rather than RGB values).\", \"there's no comparison even with BERT (how well it do to predict the next character vs. this)...\", \"no runtime numbers\", \"no reproducibility discussion (i.e., how can I guarantee that my decoder can get exactly the same numbers as my encoder so that I can decompress on a different machine)\", \"no discussion about whether files were created/decompressed (this is ABSOLUTELY CRUCIAL for compression papers to discuss)\", \"Overall, I am not excited about this paper, and unless the authors put a lot more into it, there's just not enough novelty to justify a publication at ICLR.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThe paper investigates using the transformer architecture for neural network-based lossless compression of text. The resulting model, obtained through a thorough investigation of the architecture hyper-parameters are on par with standard SOTA compression. The paper is well-written. In particular the authors have done a great job reviewing existing compression literature and positioning their method within the space of prior work.\", \"recommendation\": \"Weak Reject\\nWhile the paper considers an interesting application of the Transformer architecture, and is well-written, it is of limited novelty. Specifically, the bulk of the paper is concerned with describing experimental results of a thorough (but standard) hyper-parameter search - considering things like Transformer context size, learning rate (schedule), number of layers and key, value, query dimensionality; and does not offer any new architectural modifications / insights.\\n\\nFurthermore, only a single dataset - enwik8 - is considered in the experimental validation and little attention is given to the description of the dataset split and any distribution differences between splits. Taken together, the existing experimental setup potentially creates an unfair advantage for the neural network-based methods - while the standard methods can be expected to perform similarly across a wide range of datasets / texts, the neural-network based methods have been trained and tested on very similar data and could be expected to perform well on these data, but not in case of a distributional shift (e.g. compressing legal texts instead of Wikipedia). The paper does not answer the question of whether or not this is true.\\n\\nFurthermore, similar to autoregressive models, transformers are known to be slow at inference time. I expect this to lead to very slow decoding. Therefore, methods in table 1 should be compared in compression/decompression time to give a better overview of the practical impact of this work. \\n\\nTaken together, in its current form the paper may be better suited for a workshop publication rather than a full conference paper.\", \"major_comments\": \"1. For reasons mentioned above, the paper should include additional experimental evaluation. In particular, it should consider the effect of training the model on one dataset, but evaluating it on another dataset; and discuss how differences in performance (if any) compare to standard methods.\\n2. Compression/decompression times of the proposed method should be compared against the other compression methods in table 1. I expect the proposed transformer to be slow at decompressing.\\n3. The paper does not contain the loss that the transformer model was used to optimize. I assume that it is the softmax cross entropy, but this is worth mentioning explicitly. It would also be worthwhile to explain the training procedure - for how many epochs was the model trained (see also next question), what was the dataset size? \\n4. Description of the \\u201ctraining with revisits\\u201d is not very clear. My understanding is that it resembles a pass through the data, where some of it is considered again at specific intervals. 
My first assessment is that this should not be necessary - the data should already be considered multiple times during the training process.\\na) The authors should provide a more detailed description of the training-with-revisits procedure, contrasting it specifically with a procedure where revisits are not done (i.e. normal training).\\nb) If the goal of the revisits training is to observe some training examples more than once, then it would be very interesting if simply training for a longer time (several epochs == passes through the data) has a similar effect.\\nc) Is there any motivation for the choice of the revisits hyper-parameters F and M? Was a different batch size used during the revisits training? Is the learning rate evolved during the revisits training phase or is it still decayed?\", \"minor_comments\": \"1. There is some prior work on using Neural Networks for lossless image compression (e.g. [1], [2], [3]) that achieves SOTA compression ratios compared to standard methods. It may be interesting for the readers to mention these results. In particular the authors\\u2019 statement that \\u201c[...] purely neural network based models are still far from state of the art [...]\\u201d may give the wrong impression to the readers.\\n2. The authors mention that they \\u201c[...] propose several improvements to its (the Transformer) architecture and training to accelerate and stabilize [...] training\\u201d. In my view, the experiments described in the paper resemble a hyper-parameter search more than architectural improvements. The authors may want to clarify in the text which specific improvements they refer to.\\n3. Page 1, last paragraph: \\u201c[...] of all the important component [...]\\u201d -> \\u201c[...] of all the important components [...]\\u201d\\n4. Page 3: \\u201c[...] attention span size across all layers as it suggested [...]\\u201d -> \\u201c[...] attention span size across all layers as was suggested [...]\\u201d\\n5. Page 3: Missing references.\\n6. Page 3: Use of small n and capital N when talking about n-grams. Should be made consistent.\\n7. Page 8 (Conclusion): \\u201cwihtout\\u201d -> \\u201cwithout\\u201d\\n\\n\\n[1] F. H. Kingma, P. Abbeel, and J. Ho. Bit-Swap: recursive bits-back coding for lossless compression with hierarchical latent variables. In International Conference on Machine Learning (ICML), 2019.\\n[2] Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, and Max Welling. Integer Discrete Flows and Lossless Compression. arXiv e-prints, 2019.\\n[3] Jonathan Ho, Evan Lohn, and Pieter Abbeel. Compression with Flows via Local Bits-Back Coding. arXiv e-prints, 2019.\"}"
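Both reviews above rest on the equivalence between next-character prediction and lossless compression: driven by a model's predictive distribution, an entropy coder (e.g. an arithmetic coder) spends about -log2 p(character) bits per observed character, so compression performance is just the model's base-2 cross-entropy. A minimal illustration of that bound — not code from the reviewed paper; the probabilities are a toy stand-in for a trained model:

```python
import math

def bits_per_character(char_probs):
    """Ideal compressed size, in bits per character, for a model that assigned
    probability p to each character actually observed in the text. An arithmetic
    coder driven by the same (bit-for-bit deterministic!) model approaches this
    bound, which is why the reviews stress reproducibility and decoding speed."""
    return sum(-math.log2(p) for p in char_probs) / len(char_probs)

# Toy example: a model assigning probability 0.5 to every observed character
# compresses to exactly 1 bit per character (8x smaller than 8-bit ASCII).
print(bits_per_character([0.5] * 16))  # -> 1.0
```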
]
} |
rkeiQlBFPB | Meta-Learning with Warped Gradient Descent | [
"Sebastian Flennerhag",
"Andrei A. Rusu",
"Razvan Pascanu",
"Francesco Visin",
"Hujun Yin",
"Raia Hadsell"
] | Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both of these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation. In this work, we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning. | [
"meta-learning",
"transfer learning"
] | Accept (Talk) | https://openreview.net/pdf?id=rkeiQlBFPB | https://openreview.net/forum?id=rkeiQlBFPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"zwVZBPeDT",
"SylLDPVXiS",
"HkecPLN7iB",
"SJgNWQEXjB",
"B1eAh22icH",
"r1g_Ddk0FS",
"HkgHvrYptS",
"BkxBqe7GFB",
"B1lvHK3Adr",
"r1g3apzRdr",
"rJx5cqjaOH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798743640,
1573238621908,
1573238369522,
1573237500505,
1572748470420,
1571842144295,
1571816797313,
1571070093349,
1570847038685,
1570807235939,
1570777746220
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2223/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2223/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2223/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2223/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2223/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2223/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2223/Authors"
],
[
"~Zhijie_Deng1"
],
[
"ICLR.cc/2020/Conference/Paper2223/Authors"
],
[
"~Zhijie_Deng1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"A strong paper reporting improved approaches to meta-learning.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Dear R3,\\n\\nThank you for your detailed review and thoughtful comments! We will incorporate them when revising the manuscript. We agree that meta-learning to learn continually is an exciting new area of research and are thrilled to report a positive signal. Due to space constraints, we will not be able to delve deeper in this paper, but are certainly excited to push further in this direction!\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Dear R2,\\n\\nThank you for your thoughtful review and comprehensive feedback! We will revise the manuscript to address all of your concerns, as detailed below.\\n\\n[WarpGrad extend T-Nets, which only allowed for linear layers....]\\n\\nWarpGrad does indeed extend the T-Nets architecture in this way, we would like to emphasise that this extension is motivated by a subtle but important theoretical aspect of non-linearity in warp layers. Theoretically speaking, the meta-objective relies on non-linear gradient preconditioning. As we are taking a trajectory agnostic approach, the meta-learner should be able to modulate preconditioning on task data to ensure taking an expectation over parameter space generate useful preconditioning across tasks and adaptation steps. This requires non-linear warp layers, and thus the meta-objective and the architectural contribution are tied on a deeper theoretical level.\\n\\nIn practice, linear warp layers do work quite well for supervised learning, but on the other hand, as we show in the ablation study in Appendix G, if we make warp-layers non-linear, we get similar performance from a *random* initialisation. For more complex tasks, as in the RL case, we show that non-linear warp layers are crucial (detailed in Appendix I).\\n\\n[In general, even if one only considers the WarpGrad objective...]\\n\\nWe believe there may be some misunderstanding here, as WarpGrad does not limit the inner loop to one gradient step; in fact, it is independent of the number of inner steps. The canonical WarpGrad objective (Eq. 10) is the expected one-step gradient update over a joint distribution of objective functions (L) and model parameters (\\\\gamma): it is a global objective defined in terms of the vector field of the manifold W. Put simply, Eq. 10 solves for good preconditioning over all of parameter space, irrespective of how many steps of adaptation we are taking on some objective L. In practice, we approximate the distribution in Eq. 10 - as we detail in the paragraph between Eq. 10 and Eq. 11 - by constructing a Monte-Carlo estimator on the trajectories collected over K-steps of adaptation in the inner loop (see also Algorithm 1 and 2). We optimise individual steps sampled from the estimator, which effectively allows WarpGrad to be unaffected by the inner step size. Similarly, at meta-test time, WarpGrad is compatible with any number of adaptation steps. In our experiments, we use the same K for meta-training and testing: for instance, on the Omniglot experiment, the inner loop during meta-training and meta-testing use 100 steps of task adaptation. The WarpGrad objective (Eq. 11) is an expectation over these one-step parameter updates (see also Algorithm 1 and 2). We hope this clarifies, if we misunderstood this concern please do let us know.\\n\\n[statements regarding the advantages of WarpGrad...]\\n\\nWe appreciate this comment and sympathise with the reviewer\\u2019s concern. We will clarify in the revised manuscript that WarpGrad does *not* \\u201cneed to be combined with\\u201d some learned initialisation like MAML or Leap\\u2014we do so to identify the effect of the WarpGrad objective on miniImagenet and Omniglot, respectively. As R2 points out in the summary, we make two contributions: one architectural and one algorithmic. We combine WarpGrad with MAML and Leap to obtain all-else-equals comparisons of the meta-objective. 
We show in the Omniglot ablation study (Appendix F) that non-linear warp-layers can perform on par even from random initialisations. In the RL experiment, neither MAML nor Leap are well-defined, but applying WarpGrad is straightforward.\\n\\n[WarpGrad objective combined with MAML...]\\n\\nAs R2 correctly points out, when combined with MAML, the scalability advantage of WarpGrad is lost, but we retain its geometrical properties as well as other numerical properties (e.g. stability) with respect to warp layers. Hence, while Warp-MAML does not enjoy the scalability advantage, it does retain all other properties of WarpGrad. MAML is a powerful algorithm for few-shot learning problems where we can afford to backpropagate through the adaptation process and we find that the combination of WarpGrad and MAML compares favorably to pure MAML-based preconditioning. For other meta-learning problems that are not few-shot, we show that WarpGrad can be used effectively without backpropagation through the adaptation process on a variety of large-scale meta-learning benchmarks.\\n\\nWe hope that our replies resolve the above concerns and we will update our manuscript to emphasise that we use few/multi-shot learning to evaluate the meta-objective, holding the architecture and initialisation fixed. We measure the effect of our meta-architecture through ablations both with and without a meta-learned initialisation and demonstrate their combined effectiveness on complex meta-learning tasks in RL and continual learning.\"}",
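The trajectory-agnostic meta-update defended in this reply can be sketched in a few lines of PyTorch-style pseudocode. This is an illustration of the idea as described (collect a K-step adaptation trajectory without retaining a graph, then optimise the warp parameters phi on individually sampled one-step objectives), not the authors' implementation; `task.loss` and the parameter handling are hypothetical:

```python
import torch

def meta_step(task, theta, phi, inner_lr, K, meta_opt):
    # Phase 1: task adaptation. No graph is kept across steps, so memory and
    # compute stay constant in K (the trajectory-agnostic property).
    trajectory = []
    for _ in range(K):
        trajectory.append([p.detach().clone() for p in theta])
        loss = task.loss(theta, phi)              # phi enters via warp layers
        grads = torch.autograd.grad(loss, theta)
        theta = [(p - inner_lr * g).detach().requires_grad_()
                 for p, g in zip(theta, grads)]

    # Phase 2: meta-update. Each visited parameter state is one Monte-Carlo
    # sample; phi is trained so that a single preconditioned step from that
    # state lowers the task loss. Gradients never flow across steps.
    meta_opt.zero_grad()
    for theta_k in trajectory:
        theta_k = [p.requires_grad_() for p in theta_k]
        grads = torch.autograd.grad(task.loss(theta_k, phi), theta_k,
                                    create_graph=True)
        theta_next = [p - inner_lr * g for p, g in zip(theta_k, grads)]
        task.loss(theta_next, phi).backward()     # meta-gradient w.r.t. phi
    meta_opt.step()
    return theta
```

The `create_graph=True` call corresponds to the exact objective (Eq. 11 before any first-order approximation); only `phi` sits in `meta_opt`, so the backward pass trains the warp layers alone.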
"{\"title\": \"Response to Review #1\", \"comment\": \"Dear R1,\\n\\nThank you for your thorough review and constructive feedback, we will incorporate these when updating the manuscript to make it as accessible as possible! \\n\\n1 - This is a great point, it is indeed important and we will make sure to emphasise this in the revised manuscript. A meta-learner that is *not* trajectory agnostic has a meta-objective that is a function of the entire trajectory, and hence needs to backpropagate through the entire trajectory, such as MAML-based meta-learners. This limits their scalability and makes meta-optimization challenging (see Eq. 1 and following discussion). In contrast, WarpGrad is a trajectory-agnostic meta-learner. We use trajectories to form an empirical distribution from which we sample individual steps that we optimise independently. Because the objective is point-wise, we avoid backpropagation through the trajectory, which is what makes WarpGrad competitive even for very long trajectories: the meta-objective scales with at most linear complexity in trajectory length and does not suffer numerical instability as trajectories become long.\\n\\n2 - Thank you for the constructive feedback as this too is a central aspect of WarpGrad. To clarify the connection, MAML backpropages through the adaptation trajectory, which is essentially an RNN-like backpropagation through time operation. Hence it suffers from the same exploding/vanishing gradients and credit assignment problems that RNNs struggle with, which has been observed empirically [e.g. 1]. More specifically, because the MAML meta-gradient is a product of Hessian matrices, high or low curvature during task adaptation has a multiplicative effect on the meta-gradient (the product of many values much greater or much lower than 1 will cause gradients to explode or vanish, respectively). In contrast, WarpGrad avoids these specific issues by design as it does not backpropagate through adaptation trajectories and instead learns to optimize each gradient step individually.\\n\\n3 - While we appreciate the sentiment, tieredImageNet, miniImagenet and Omniglot are structurally similar benchmarks in that they are image classification tasks over homogenous image domains (natural images and hand-drawn characters, respectively). Hence the added benefit of running the same ablations on miniImagenet would be limited and given space as well as time constraints we have refrained from doing so. In terms of the first-order approximation (Eq. 12), our claim is that it is a useful approximation over longer adaptation processes, as in the Omniglot and RL experiment, where first-order effects tend to dominate. Hence we evaluate this approximation on these experiments. \\n\\n4 - A stop-gradient operator is an operation that prevents gradients from flowing through a variable during backpropagation. We use it in Eq. 12 to make the same approximation as in the first-order approximation of MAML [2], where the stop-gradient operator prevents the meta-gradient from backpropagating through the inner task adaptation step. That way, the meta-gradient avoids computing second-order derivatives (that is, it renders the meta-gradient Hessian-free).\\n\\n5 - As R1 correctly points out, we are making an implicit full-rank assumption. The public comment was concerned with potentially unfair comparisons in the case that warp layers increase model capacity. Our reply was directed towards this concern. While linear layers cannot add capacity, they can reduce it. 
This would not make comparisons to baselines unfair, though potentially unfavorable for WarpGrad (hence our results err on the side of caution). Note that all baselines are tuned for model capacity through conv-layer filter sizes. \\n\\n[1] Finn et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. 2017.\\n[2] Antoniou et al. How to train your MAML. 2019.\"}",
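For readers unfamiliar with the operator discussed in point 4 above, most frameworks expose a stop-gradient as a detach: the value flows forward unchanged, but backpropagation treats it as a constant. A minimal, hypothetical sketch of how this yields the first-order (Hessian-free) approximation described in the reply:

```python
import torch

def inner_step(loss_fn, theta, lr, first_order=True):
    """One inner adaptation step. With first_order=True the task gradient g is
    wrapped in a stop-gradient (detach), so a later backward pass through the
    returned parameters never differentiates through g, and all second-order
    (Hessian) terms are dropped from the meta-gradient."""
    g = torch.autograd.grad(loss_fn(theta), theta,
                            create_graph=not first_order)[0]
    return theta - lr * (g.detach() if first_order else g)
```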
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe current paper deals with meta-learning and essentially proposes a generalization of MAML (a popular gradient-based meta-learning algorithm) that mostly builds upon two main recent advances in meta-learning: 1) an architectural one (see e.g. T-Nets), which consists in optimizing the parameters of additional layers during the meta-learning outer loop (as opposed to only optimizing the initial conditions of the original parameters like in MAML), and 2) a theoretical one (see e.g. Meta-SGD, Meta-curvature), which is based on the geometrical observation that one set of parameters can precondition a second set of parameters that are consequently being optimized in a \\\"warped\\\" geometry, possibly speeding up learning.\\nThe authors provide a great and thorough overview of the literature, in particular for gradient-based meta-learning methods, which helps putting all this in perspective.\\nThe way they obtain the mentioned \\\"warped\\\" geometry in practice is by adding additional so-called warp-layers to an architecture that is being trained with meta-learning. Such warp-layer are generic deep learning modules (such as convolutions followed by BatchNorm, or LSTM layers), which are being trained in the outer-loop of the meta-learning optimization. In this sense, WarpGrad extend T-Nets, which only allowed for linear layers.\\nThe second main innovation of WarpGrad is the proposal of a new meta-learning objective, which incorporates a meta-learning internal loop of only one step of (preconditioned) SGD, meaning that, as the authors notes, \\\"in contrast to MAML-based approaches (Eq. 1), [...] avoids backpropagation through learning processes\\\".\\nThe authors test their algorithm on several meta-learning benchmarks, including few- and multi-shot learning tasks demonstrating very competitive performance when their algorithm is combined with MAML or Leap. They then deploy WarpGrad on a maze navigation reinforcemente learning task to demonstrate training of recurrent architectures, and on a continual learning toy dataset to show that their objective can be adapted to mitigate catastrophic forgetting.\", \"decision\": \"This is a good paper which proposes an interesting generalization of previous gradient-based meta-learning methods like MAML and T-Net, with an impressive number of experiments. However, some of the statements regarding the advantages of WarpGrad over previous algorithms seem a little bit misleading, in particular in situations where WarpGrad needs to be combined with these same algorithms. For instance (and I might have completely misunderstood things here), it seems that when the WarpGrad objective is being combined with MAML (which requires backpropagation through multiple-step gradient descent trajectories), then also the resulting combined objective will necessarily need to backprop through the same multi-step trajectory, defeating the stated advantage of the WarpGrad algorithm (i.e. that its objective avoids backpropagating through the learning processes).\\nIn general, even if one only considers the WarpGrad objective eq. (10), that comprises a meta-learning inner loop which consists of one step of (preconditioned) gradient descent. 
However, it seems like an arbitrary (and limiting) choice of the authors to only perform one step, as opposed to multiple ones. As a matter of fact, even very sophisticated second order gradient descent methods like natural gradient descent typically require more than one step to reach a local minimum. That is to say, the main advantage showcased by the authors (the fact that the WarpGrad objective avoids backprop through a whole learning trajectory) seems like a limitation, rather than the result of a principled derivation.\\nIt would be beneficial if the authors could clarify these points. In particular, whether combining WarpGrad with MAML does not indeed negate the stated advantages of WarpGrad over MAML, and whether there is a principled way of demonstrating that executing only one step in the inner loop of the WarpGrad objective is completely general (i.e., additional steps do not help the inner loop).\", \"minor\": [\"The authors use the wrong citation key when referring to the T-net paper: it should be Lee et al 2018, instead of Lee et al. 2017\", \"I believe that when the authors mention Fast and slow weights, they are being described in the opposite way: slow weights should be in charge of meta-learning information, while fast ones are in charge of task-specific information.\", \"Lines 3 and 4 of Algorithm 1 and 2: shouldn't it say \\\"mini-batch of tasks\\\" (plural), instead of \\\"mini-batch of task\\\", since several tasks are being sampled? Otherwise, it might be erroneously interpreted as \\\"mini-batch of (samples belonging to) task T\\\".\", \"The comment that \\\"learning to precondition gradients can be seen as a Markov Process of order 1\\\" is never clearly elucidated or developed. It would help to develop this.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose warped gradient descent (WarpGrad) an optimisation framework for facilitating gradient-based meta-learning. WarpGrad interleaves within the learner meta-learned warp-layers that implicitly precondition the gradients of the task-specific parameters during backpropagation. In contrast to the linear projection layers employed in T-Nets, warp-layers are unrestricted in form and induce a full Jacobian preconditioning matrix. The warp layers are meta-learned in a trajectory-agnostic fashion, thus obviating the need to backpropagate through the gradient steps to compute the updates of their parameters. The framework is readily applicable to standard gradient-based meta-learners, and is shown to yield a significant boost in performance on both few-shot and multi-shot learning tasks, as well as to have promising applications to continual learning.\", \"the_paper_is_well_structured_and_well_motivated\": \"the problem statement is clearly laid out from the outset, with appropriate context, and explanations supported well diagramatically. The idea, and perhaps more so the applications thereof, is seemingly novel and its explanation is given straightforwardly while avoiding getting bogged down in technical details. Clear comparisons and distinctions with previous work are drawn - for instance with the update rules for several gradient-based methods - MAML and its derivatives - being laid out in standard form (though it might also be nice to echo this with the WarpGrad update rule).\\n\\nThe experiments are logically ordered with the initial set covering the standard few-shot learning benchmarks with appropriate baselines (though the results for few-shot tieredImageNet are lacking in this respect), with most essential details given in the main text and full details, including those related to the datasets in question and hyperparameter selection, documented in Appendix H. Meta-learning does seem uniquely well-positioned for tackling the task of continual learning and it's heartening to see this being explored here with a degree of success - it would be interested to see how its performance compares with standard continual learning methods (such as EWC) on the same task. Particularly impressive is the depth into which the Appendices regarding the experiments, both elaborating on the details given in the main text as well as additional ablation studies.\", \"minor_errors\": [\"Page 7: \\\"a neural network that dynamically **adapt** the parameters...\\\" - should be \\\"adapts\\\"\", \"Page 22: \\\"where $I$ is the **identify** matrix\\\" - should be \\\"identity\\\"\", \"Page 27: \\\"The task target function $g_\\\\tau$ is **partition** into 5 sets of **sub-task**\\\" - should be \\\"partitioned\\\" and \\\"sub-tasks\\\", respectively\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a learning strategy to precondition gradients for meta-learning. I really enjoyed reading the paper though I admit that I couldn't fully grasp all the details yet (paper is dense). My comments below are mostly to improve the readability of the paper for readers like me (knowing a thing or two in optimization and meta-learning)\\n\\n\\n\\n1- The authors emphasize on the method being trajectory-agnostic. Can you explain why this is very important? What methods are not trajectory-agnostic?\\n\\n2 - Also in various places, the authors claim the method does not suffer from vanishing/exploding gradients and credit-assignment problem. This needs to be properly verified (and explained as I do not see the connections clearly)\\n\\n3- Some claims are based on the Omniglot experiments (eg., the effect of the stop-gradient). It would be good if this can be done on Mini-imagenet instead.\\n\\n4- I am not sure I understand the stop-gradient operator, can you be more explicit there?\\n\\n5- I read the conversation regarding linear units on openreview and I disagree with your statement. A cascade of linear layers does not necessarily match one linear layer unless some constraints on the rank of layers are envisaged, a bottleneck in the middle ruin everything.\"}",
"{\"comment\": \"Thank you for your question and valid concern. In general, interleaving nonlinear warp layers does increase the depth of the task learner. However, when warp layers are linear, the effective depth of the task learner does not increase since the composition of two linear transformations is a linear transformation itself. In other words, when warp layers are linear, they can be seen as part of existing layers in the task learner (akin to [4, 5] above).\\n\\nTo the extent possible, experiments have been carefully designed to ensure fair comparisons. All results reported in Table 1 use linear warp layers and baselines have been hyper-parameter tuned with equal computational budgets. \\n\\nWith that said, it is worth noting that non-linear warp-layers improve performance considerably. We report such results inline in the main text as well as in detailed ablation studies (see Appendix F, G, and J). We hope this addresses your concern.\", \"title\": \"Re: Question about preconditioning\"}",
"{\"comment\": \"Thanks for the instant reply. A follow-up question: when using the preconditioning layers in the forward pass, the task learner essentially adopts a deeper network for specific tasks, compared to the standard methods (e.g., MAML, MC). Although the additional layers are trained with meta objective instead of the task objective, they may enhance the expressive ability of the task learner, so are the performance comparisons in the experiment section (e.g., Table 1) unfair?\", \"title\": \"Re: Re: Question about preconditioning\"}",
"{\"comment\": \"Thank you, and great that you reached out!\\n\\nWhen preconditioning is defined by explicitly projecting a parameter gradient via some (smoothly varying) matrix [e.g. 1, 2, 3], that matrix would not be part of the forward pass of the model. \\n\\nHowever, we can also think of preconditioning as inserting layers w that \\u2018warp\\u2019 model parameters in the forward pass - backpropagating through such warp layers automatically preconditions the gradient [e.g. 4, 5, our work]. In this case, we do want w to be part of the forward pass.\\n\\nBoth perspectives describe preconditioning but in different ways. In this work, we argue that warp layers is a more effective approach because it interacts with the model both in the forward and backward pass while being simple to implement. \\n\\nHope that answers your question!\\n\\n=========\\n\\nReferences\\n\\n1. Amari. Natural gradient works efficiently in learning. Neural computation 10.2. 251-276. 1998.\\n2. Li et. al.. Meta-SGD: Learning to Learn Quickly for Few-Shot Learning. ArXiv 1707.09835. 2017.\\n3. Park et. al.. Meta-Curvature. NeurIPS. 2019.\\n4. Desjardins et. al.. Natural Neural Networks.Neurips. 2016.\\n5. Lee et. al.. Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace. ICML. 2017.\", \"title\": \"Re: Question about preconditioning\"}",
"{\"comment\": \"Nice work! I have a short question. If I understand correctly, the gradient modifier w should only stay in the back-propagation path, but it also stays in the forward-propagation path in your work. Can it be removed and why?\", \"title\": \"Question about preconditioning\"}"
]
} |
Sye57xStvB | Never Give Up: Learning Directed Exploration Strategies | [
"Adrià Puigdomènech Badia",
"Pablo Sprechmann",
"Alex Vitvitskyi",
"Daniel Guo",
"Bilal Piot",
"Steven Kapturowski",
"Olivier Tieleman",
"Martin Arjovsky",
"Alexander Pritzel",
"Andrew Bolt",
"Charles Blundell"
] | We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control. We employ the framework of Universal Value Function Approximators to simultaneously learn many directed exploration policies with the same neural network, with different trade-offs between exploration and exploitation. By using the same neural network for different degrees of exploration/exploitation, transfer is demonstrated from predominantly exploratory policies yielding effective exploitative policies. The proposed method can be incorporated to run with modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent in all hard exploration in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features. | [
"deep reinforcement learning",
"exploration",
"intrinsic motivation"
] | Accept (Poster) | https://openreview.net/pdf?id=Sye57xStvB | https://openreview.net/forum?id=Sye57xStvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mFPVG-rTb7",
"BkgvBeU5jr",
"H1luPLVOsH",
"r1xyl8VOir",
"B1euYoQ_iB",
"HJxKMiQOoS",
"rkgUCLxTKr",
"H1lEldIhtH",
"SJlUtgf2Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743610,
1573703742805,
1573566047591,
1573565926929,
1573563263546,
1573563153143,
1571780302113,
1571739627894,
1571721342078
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2222/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2222/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2222/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2222/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2222/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2222/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2222/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2222/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper tackles hard-exploration RL problems. The idea is to learn separate exploration and exploitation strategies using the same network (representation). The exploration is driven by intrinsic rewards, which are generated using an episodic memory and a lifelong novelty modules. Several experiments (simple and Atari domains) show that the proposed approach compares favourably with the baselines.\\n\\nThe work is novel both in terms of the episodic curiosity metric and its integration with the life-long curiosity metric, and the results are convincing. All reviewers being positive about this paper, I therefore recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviewer #3 response to author response\", \"comment\": \"Thank you for the clarifying points and discussion. I have not updated my review given I already rated the paper as accept, but I appreciate the time put in to this response.\"}",
"{\"title\": \"Response to official blind review #1 part 2\", \"comment\": [\"Please see part 1 first.\", \"Yes, it is a typo. We selected the hyperparameters that relate to exploration based on the 2 games of appendix B, and not the 8 games in which we do the ablations of the hyperparameters of general effect. We corrected this and updated the paper with the correction.\", \"(*) We agree this was a very dense paragraph. We have changed Table 1 to a Figure with aggregated results, and we have moved the original table to the appendix. Thanks for this very helpful suggestion, we agree it improves the presentation of the ablations. We have reworded this paragraph to make it easier to parse. Whilst the standard deviation error bars do indeed overlap, the means do consistently shift on hard exploration games and superhuman performance is most reliably obtained using the full combination.\"], \"specific_comments_about_ngu_agent_paragraph\": [\"Thank you for catching this. It is true we cannot claim a progression with diminishing returns (although there is a slight trend), and further we were not clarifying that this only refers to the average performance over the 3 hard exploration games listed on Table 1. We have modified our claim which we believe that is now more fair and better adjusted to the empirical results that we observe.\", \"(*) Beta is a hyperparameter that affects the exploration/exploitation trade-off. Tuning it will change the performance of the agent, as evidenced in the ablations Table of the Appendix. A lower beta yields less exploration and so it is not surprising that reducing it yields to better performance on games not requiring extensive exploration (Pong and Beam rider). With regards to sensitivity, it is important to note that we tuned beta to give the best performance on 5 dense and 3 hard exploration games then demonstrated that the selected value generalised well to perform well on all 57 Atari games. beta does not require per-game tuning to yield reasonably good performance.\", \"That said, further tuning beta can certainly lead to improvements, and this is illustrated in the ablations Table. But note that, for all positive beta values tested, achieve non-zero reward on Pitfall, while in private eye all variants still outperform all other baselines we compare to. Specifically, in the case of Private eye the distance in score might be misleading, as rewards are very sparse of large value. For instance, after reaching a score of 40k, (ignoring some smaller rewards that add up to less than 2000 points) there are only two rewards to be collected of around 30k points. This creates what seems to be large differences in the scores. We have added this important clarification on the performance on hard-exploration games in our analysis of the ablations Table in the appendix. Finally, we have also added a section on the performance of beta = 0.2 and beta = 0.5 to that appendix analysis. As one can see, the games in which the beta parameter is most sensitive are the ones that were already shown on Table 1. We agree with Reviewer 3 that a natural extension of this model is to find ways of dynamically adjusting this hyperparameter in an online manner (please see answer to reviewer 3 below).\", \"This refers to Breakout, Space Invaders, and QBert. Given that based on the above comment we have changed Table 1 to be more compact figure, we clarified this conclusion in the ablations Table of the appendix. 
It may also be difficult to compare since the human baselines are in other tables; therefore, we have added a human baseline row to the table of ablations for NGU(N=32) of the appendix.\", \"Comments on paragraph on \\u201chard exploration games\\u201d:\", \"Yes, that is correct: NGU(N=1)-RND means training a single policy without the use of the RND reward. This setting achieves the highest score for Pitfall. Our intuition is that in this case a single policy can achieve quite good results since exploration and exploitation policies are similar. As far as RND usage is concerned, we consistently observed that not using RND on Pitfall! leads to results that are qualitatively similar, but much more data efficient. We bring attention to the graphs shown in Figure 4, and in our analysis we highlight 3 hypotheses explaining why this may happen.\", \"We have modified our analysis to reflect these comments.\", \"It refers to the \\u2018best baseline\\u2019 described in the table description. To clarify this, we have changed the name of the row to use the full name instead of this abbreviation.\", \"Minor comments:\", \"This is to make the kernel more robust to the task being solved, as different games may have different typical distances between learnt embeddings. We have added a note to clarify this.\", \"Yes, we have corrected this.\", \"This is done in the original implementation of the RND reward. It shares motivation with the answer to the first minor comment. The RND reward is described as being normalized by a running average and standard deviation of the rewards. We have changed our wording of the definition of \\\\alpha_t to reflect this.\"]}",
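The running-statistics normalisations mentioned in the minor comments above (the kernel's distance normalisation and the life-long modulator alpha_t) share one pattern: track the mean and standard deviation of a signal online and rescale new values by them, so a single fixed hyperparameter transfers across games. A sketch of the alpha_t computation as it is described here, with hypothetical names and Welford's online update:

```python
class RunningStats:
    """Online mean / standard deviation (Welford's algorithm)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    @property
    def std(self):
        return (self.m2 / self.n) ** 0.5 if self.n > 1 else 1.0

rnd_err = RunningStats()

def life_long_modulator(prediction_error):
    # alpha_t = 1 + (err_t - running mean) / running std: the RND error is
    # normalised by its running statistics, as stated in the response above,
    # so "novel" means "larger than the typical error seen so far".
    rnd_err.update(prediction_error)
    return 1.0 + (prediction_error - rnd_err.mean) / rnd_err.std
```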
"{\"title\": \"Response to official blind review #1 part 1\", \"comment\": \"Thank you for your greatly thorough review. We believe that this thoroughness (including careful attention to the Appendix) and valuable questions have strongly contributed to improving the clarity of the paper, specially regarding the clarity of the experiments section of the paper. We have incorporated the very helpful feedback from the reviewer and updated the manuscript accordingly. We hope that this is enough for the reviewer to revisit their score of our manuscript. We proceed to answer in order:\", \"general_comments\": \"* Concerning the contribution of disentangling exploration and exploitation we should have been more careful in our claims and we are aware that other methods exist in the literature. Rather than a novel contribution, it is an interesting property of our approach that we wanted to highlight, and contrasts with most recent work in intrinsic motivation. We have amended the wording to reflect this.\\n\\nMULEX is indeed very related work, and we were not aware of it at the time of writing this paper as it was posted on arXiv very recently and is, as far as we can tell, unpublished. We thank the reviewer for pointing this out. In the updated version of the manuscript we include the suggested citation and describe its relation with NGU after describing our contributions. The main difference between the way this work and MULEX combine the exploitation and exploration is that our approach does it by sharing weights between the different policies, contrary to MULEX, which shares data in a common replay buffer. One could argue that imitation learning is also a way to disentangle exploration and exploitation, done by sharing human demonstrations in the replay buffer (DQfD method). In the case of MULEX, the data added to the replay does not come from human demonstrations but by trained exploration policies. \\n\\n* This statement in our work was not very clear. When computing the intrinsic reward for a new state, we indeed compare it to the content of the episodic memory. We meant to say that, the more dissimilar the past controllable states are from the new state, the higher the reward. Looking at equation (2) in the manuscript, this means that when the new state is very different from the content of the memory, the term \\\\sqrt{\\\\sum_{f_i\\\\in N_k} K(f(x_t), f_i)} in the denominator will be small, and thus the reward will be high. Hence, in order to maximise the episodic intrinsic reward, the agent needs to go to states that have low similarity with the previously visited ones, consequently encouraging exploration.\\n\\nIn practice (2) is computed using the k-nearest neighbors from memory. If the new state is dissimilar to the k-nearest memories, it will be more dissimilar from the remaining ones. Using only k-nearest neighbors allows for faster computations and is common practice in methods using content-based look-ups on episodic memories (e.g. Neural Episodic Control)\", \"experiments_section\": [\"Section 4.1:\", \"Yes, that is correct in that beta=0.3 does not precisely balance exploration and exploitation. Let us consider the case of a single policy (N=1) with a given fixed value of beta. This policy is trained using the augmented reward r=r_e+beta r_i. Thus, the larger the value of beta, the less important the extrinsic rewards will be relative to the intrinsic rewards. 
Thus, there is an implicit exploitation-exploration trade-off within the definition of the exploratory policy.\", \"In the case with multiple mixtures, the value of beta represents the weight of the intrinsic reward for the most exploratory policy. Here again, the higher the beta, the more exploratory the policy will be. As seen in Table 1, different values of beta (beta = 0.2, beta=0.3, beta = 0.5) lead to different results (see the extended discussion in our other comment below regarding the sensitivity w.r.t. this hyperparameter).\", \"This was not intended; thank you for catching this. We have updated the paper with the correction. Beta is only multiplied once, so we have removed it from the algorithm.\", \"Yes, that is correct. We have changed the wording to reflect this.\", \"Section 4.2:\", \"Yes, it is not frame-stacking. This refers to the data received by the learner in R2D2. In R2D2, a learner samples and learns from a batch of trajectories coming from a set of actors running in parallel on separate environment instances. In our case, the length of each sequence in the batch is 80 time steps (with minibatch size of 64 this means 80x64=5120 different states). We use all the data to train the RL loss. In contrast, we use a subset of the data to train the action prediction and the RND losses. Training these networks on 5120 states per batch is computationally inefficient. To select a subset of the data we simply take the last 5 time steps of each sequence in the batch, as we found that to be empirically sufficient in our original experimentation (as similarly done in the RND work). We have changed our description in the appendix to clarify this.\"]}",
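A compact sketch of the episodic bonus walked through in this response (Eq. 2 of the paper): embed the controllable state, find its k nearest neighbours in the episodic memory, and invert the summed kernel similarities, so states far from everything in memory receive large rewards while revisited states receive small ones. Constants, the kernel form, and the simplified (per-query rather than running) distance normalisation are illustrative, not the paper's exact values:

```python
import numpy as np

def episodic_reward(f_x, memory, k=10, eps=1e-3, c=1e-3):
    """f_x: embedding of the current controllable state; memory: array of
    embeddings visited earlier in the episode. Returns 1/sqrt(sum of kernel
    similarities to the k nearest neighbours): high when f_x is far from
    everything in memory, low when it is close to already-visited states."""
    d2 = np.sum((memory - f_x) ** 2, axis=1)
    nearest = np.sort(d2)[:k]
    nearest = nearest / max(nearest.mean(), 1e-8)  # distance normalisation
    kernel = eps / (nearest + eps)                 # similarity: ~1 when d2 ~ 0
    return 1.0 / np.sqrt(kernel.sum() + c)
```

This also answers the reviewer's "always high?" worry: when the memory already contains nearby states, the k nearest similarities are each close to 1, the denominator grows, and the bonus shrinks.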
"{\"title\": \"Response to blind review #3\", \"comment\": \"Thank you for your review. We believe the raised points greatly resonate with us, and they contribute to a greater overview and summarization of the method. More concretely:\\n\\n* Learning to adjust the hyper-parameters is something that we contemplate as a highly promising future direction for this work. More concretely, we think it should be possible to adapt the hyperparameter beta, which determines the maximum degree of exploration done, so that the method is able to dynamically adapt it, given that some games require much more exploration than others. Potential approaches are the use of techniques such as Population Based Training (PBT) or the use of Meta-gradients. Another advantage of such method would be to allow for the model to \\\"turn off\\\" exploration when the agent has reached a point in which further exploring does not lead to improvements on the exploitative policy. Having said that, including such a mechanism would require calibrating the adaptation to be aligned to the speed of learning of the exploitative policy. We have included a discussion on this in the conclusions section of the updated paper.\\n\\n* We agree that a summarization of the limitations of the method is a useful addition to the conclusions of this work. In the updated version of the manuscript we have added a description of these limitations.\\n\\n*We agree with the reviewer that the controllable state could be used to determine long-term novelty. A natural first choice would be to follow the work in Curiosity-driven exploration and learn a forward model in this embedding space, and then use the prediction errors as an intrinsic motivation signal. While this is a good choice (and would be interesting to test), we decided to go with the RND variant as it avoids the difficulties of using forward prediction as novelty signals (the \\u2018noisy TV\\u2019 problem, as highlighted in the RND work), it has been shown to perform better empirically, and is amenable to distributed training. Another alternative would be to use the same idea as in the episodic novelty, but without emptying the memory at the end of an episode. This would naturally provide a longer term novelty signal. The clear limitations of that approach come from an implementation standpoint: it would require to store and perform look-ups over much larger episodic memories.\"}",
"{\"title\": \"Response to official blind review #2\", \"comment\": \"Thank you for review. We appreciate the concerns raised and we wish to provide further clarity on those points. More concretely:\\n\\nWe agree that there is already extensive literature on count-based methods. In our view, while a lot of progress has been made in recent years (covered by the work we cite in the manuscript), the problem of extending count-based exploration methods to very large high dimensional state spaces (the usual setting in deep RL) is not yet solved. Given this body of literature, our method contributes 1) an exploration bonus we define, which combines life-long novelty and episodic novelty, 2) learning a family of policies that separate exploration and exploitation with shared weights, and 3) strong experimental results, with State of the Art on games such as Pitfall, where no algorithm (without demonstrations or privileged information) was performing better than random. \\n\\nCurrent state of the art methods in deep RL achieve their results by leveraging large amounts of compute by running on distributed training architectures that collect large amounts of experience from many actors running in parallel on separate environment instances. This is also the case of R2D2, the state-of-the-art agent on the Atari suite. Despite having an incredibly high average (or median) performance, it performs poorly on most hard exploration games. We agree with the reviewer that an important line for future research is to explore effective ways of significantly improving NGU's data efficiency while maintaining its performance.\\n\\nHaving said that, we want to stress that the goal of this work is to push the limits of the best performing agents available in the literature. This is, we want to push the limits of performance: what are the best achievable scores when data or compute are not a limitation? \\n\\nFinally, we want to point out that running for 35 billion frames is not used to NGU's advantage. All the baselines we implemented are also run for that number of steps. This includes R2D2 + RND, which obtains slightly stronger results than the original RND publication, but much weaker than the results obtained with NGU. In summary, unlike the baselines we compare to that we implemented, with the same amount of frames consumed, our method is able to reliably leverage that amount of compute to achieve better final performance. A comparison of the computation used by all the baselines is described in Appendix C.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The work is motivated by the goal of having a comprehensive exploration of an agent in deep RL. For achieving that, the authors propose a count-based NGU agent, combining intrinsic and extrinsic bonuses as new rewards. An extrinsic/ long-term novelty module is used to control the amount of exploration across episodes, a life-long curiosity factor as its output. In the intrinsic/episodic novelty module, an embedding net and a KNN on episodic memory are applied to compute the current episodic reward. In the experiment, a universal value function approximator (UVFA) framework is used to simultaneously approximate the optimal value function with a set of rewards. The proposed method is tested on several hard exploration games. Other recent count-based models are compared in the paper.\", \"cons\": [\"To my acknowledge, the task and the count-based methods are not too novel.\", \"They use 35 billion environment frames.\", \"Overall, this paper is well-written. Methods and results are clearly described.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"-after rebuttal:\\nI read the replies from the authors and re-read the modified version of the paper and I believe there has been a noticeable improvement in the presentation. I still think it could be improved more (in terms of wording and better exposition of the results) but due to the in-place improvements I increase my score to weak accept. \\n----------------------------------------------------------------------------------------------------------------------------------------------------\\nIn this paper, the authors present a methodology for generating intrinsic rewards for reinforcement learning agents targeting hard exploration environments. The intrinsic reward is generated using an episodic memory module and a lifelong novelty module. A state representation is learnt in such a way that the novelty signals are biased towards what the agent can control. A single neural network learns the q-values of exploratory policies with different degrees of exploration. Several experiments on a simple domain and on Atari domains are conducted to evaluate and compare the performance of the proposed method against the baselines. \\n\\n\\nIn my opinion, this is an interesting idea as it present a novel combination of methods (Random Network Distillation and Episodic Memory) that works and can inspire other researchers in the field. However, I would like to see clearer explanations in the experimental section before acceptance. For this reason I rate this paper as a weak reject but if clarity is improved I will increase my score. See below for more general comments and detailed explanation about the experimental section.\", \"general_comments\": [\"The authors state that a novel contribution is to disentangle exploration and exploitation. This is not true: see [1] for a recent paper on the topic. I believe the authors should cite this paper.\", \"In page 3: \\u201cTo determine the bonus, the current observation is compared to the content of the episodic memory. Larger differences produce larger episodic intrinsic rewards\\u201d When computing the intrinsic reward for a new state, it must be compared to the episodic memory, which is composed by recent states (and old). Therefore, in this situation the intrinsic reward is always high? Specially because it is compared with the k-nearest neighbours? Maybe the authors could elaborate on this?\"], \"experiment_section\": [\"This section should be tidied up in my view. In general, I feel that there are too many fine-grained results / experiments. I think some aggregation of results would be good for clarity. Without this aggregation there are statements made from the authors that are difficult to believe, e.g. a general result statement is valid for one or two games but not for the rest (but still the statement is formulated in a general way). This can lead to misunderstandings and overstatements. This together with lack of some experimental details makes the section very dense and difficult to parse. Below I make my points. They are in order of appearance in the main manuscript. I marked with (*) the ones I consider the most important.\", \"Section 4.1:\", \"Why the exploration policy in this section is set to have $\\\\beta = 0.3$ ? 
Is my understanding correct that any value of $\\\\beta$ would produce similar results since this is just a scaling of the intrinsic reward (i.e. it does not balance exploration vs exploitation)?\", \"The parameter $\\\\beta$ appears in the construction of the reward r = r^e + \\\\beta r^i, but also the output of Algorithm 1 scales the similarity by $\\\\beta$; does that mean that the final contribution of $\\\\beta$ is squared? Is this intended? If so, why?\", \"At the end of this section: \\u201cHowever, staying still is enough: staying still every state will produce \\u2026.\\u201d Since the agent can only take 4 actions {left, right, up, down}, what do the authors mean by \\u201cstaying still\\u201d? Is the agent really doing some sort of cyclic policy, e.g. left right left right \\u2026?\", \"Section 4.2:\", \"In Appendix A it is stated \\u201cuse last 5 frames of the sampled sequences to train the action prediction network\\u2026 \\u201c Does this refer to frame-stacking? I assume it is not since at the beginning of Section 4.2 it is stated that there is no frame-stacking. If it is not frame-stacking, the authors could explain in more detail what they refer to.\", \"In the paragraph \\u201cArchitecture\\u201d it is stated that 8 games were selected to choose the hyperparameters and that the results are in Appendix B. However, Appendix B only shows 2 games (Pitfall and Montezuma\\u2019s Revenge). Is this a typo?\", \"(*) In paragraph \\u201cNGU Agent\\u201d: This is the most dense paragraph and the most difficult to parse. First of all, Table 1 shows all the results, but as one can see, the different ablations have very similar performance on most games with only a few exceptions. Note that most of the mean performances have overlapping error bars for many combinations of games and methods. Therefore, further statements about this table are difficult to believe since they could have been just the result of random seeds. I think it is fine to show these fine-grained results in the appendix, but I would say it would be better to aggregate them in the main paper and show that the statements made by the authors still hold for this aggregation.\"]
If so, I believe the authors should report how sensitive the results are to this parameter, especially on the full set of hard-exploration games.\", \"\\u201csuperhuman performance on 3 games\\u201d: which ones?\", \"Comments on paragraph on \\u201chard exploration games\\u201d:\", \"\\u201cwith NGU(N=1)-RND \\u2026.\\u201d what do the authors mean by this? This seems to be the best setting for Pitfall but it is actually not using a mixture of explorations nor (I guess) RND?\", \"Table 2, first row, \\u201cbest base\\u201d: what does this mean?\"], \"minor_comments\": \"- In Equation 3 the squared distance is normalized by a running average. Why?\\n\\n- Right after Equation 3: \\u201c\\u2026 episodic reward can be found in Alg. 14\\u201d. Probably meant Alg. 1.\\n\\n- Similarly, why are the errors normalized by a running average when computing \\\\alpha_t?\\n\\n\\n[1] MULEX: Disentangling Exploitation from Exploration in Deep RL (Lucas Beyer et al.)\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a novel intrinsic reward/curiosity metric that combines both episodic and \\u201clife-long\\u201d novelty. Essentially two competing pressures that push agents to explore as many novel states in a single rollout as possible and to explore as many states as possible as evenly as possible. The primary contribution here is the episodic novelty measure, which relies on a state embedding that takes into account stochasticity in the environment. The paper covers this episodic curiosity measure and how it\\u2019s integrated with the life-long curiosity metric. It then demonstrates the impact of these metrics and variations compared to baselines on particular games and all 57 Arcade Learning Environment games.\\n\\nThis is a clear accept. This paper demonstrates a novel episodic curiosity metric and a means of integrating that with a more standard life-long curiosity metric. The writing is clear and the results are good and well-explained. \\n\\nI would appreciate in the final paper some discussion of whether it would be possible to adjust the hyper-parameters of the approach during training, given that different variations of the approach seemed to do consistently better or worse as the authors described. Further, I would have appreciated a summarization of the limitations towards the end of the paper. \\n\\nI recognize that the life-long curiosity approach is fairly arbitrary, but given that the controllable state is already available, I\\u2019m not sure why it isn\\u2019t used for this measure. It seems naively it would be helpful. If not, some clarity on this would be appreciated.\"}"
]
} |
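For readers following the $\beta$ discussion in the review above, a minimal sketch of the reward combination it refers to may help. All names here (`augmented_reward`, `r_ext`, `r_episodic`, `alpha`) are illustrative assumptions, not from the paper: the intrinsic reward is the episodic novelty bonus gated by a life-long novelty multiplier alpha clipped to [1, L], and beta scales only the intrinsic term in r = r^e + beta * r^i.

```python
# Toy sketch of the NGU-style reward combination discussed in the review.
# All names and default values (beta=0.3, L=5.0) are illustrative.
def augmented_reward(r_ext, r_episodic, alpha, beta=0.3, L=5.0):
    """r = r^e + beta * r^i, where r^i is the episodic novelty bonus
    gated by a life-long novelty multiplier alpha clipped to [1, L]."""
    r_int = r_episodic * min(max(alpha, 1.0), L)  # life-long gating
    return r_ext + beta * r_int                   # beta scales r^i once
```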
H1eqQeHFDS | AdvectiveNet: An Eulerian-Lagrangian Fluidic Reservoir for Point Cloud Processing | [
"Xingzhe He",
"Helen Lu Cao",
"Bo Zhu"
] | This paper presents a novel physics-inspired deep learning approach for point cloud processing motivated by the natural flow phenomena in fluid mechanics. Our learning architecture jointly defines data in an Eulerian world space, using a static background grid, and a Lagrangian material space, using moving particles. By introducing this Eulerian-Lagrangian representation, we are able to naturally evolve and accumulate particle features using flow velocities generated from a generalized, high-dimensional force field. We demonstrate the efficacy of this system by solving various point cloud classification and segmentation problems with state-of-the-art performance. The entire geometric reservoir and data flow mimic the pipeline of the classic PIC/FLIP scheme in modeling natural flow, bridging the disciplines of geometric machine learning and physical simulation. | [
"Point Cloud Processing",
"Physical Reservoir Learning",
"Eulerian-Lagrangian Method",
"PIC/FLIP"
] | Accept (Poster) | https://openreview.net/pdf?id=H1eqQeHFDS | https://openreview.net/forum?id=H1eqQeHFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"46aPr0MSAF",
"Dr9mocEUJz",
"SMZKHVhdmq",
"H1g3s5IisH",
"HJgxfRhEqS",
"SJghlklntH",
"ryxSuiIjFB"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1582507441827,
1581672983464,
1576798743582,
1573771940223,
1572290056235,
1571712756308,
1571674989266
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2221/Authors"
],
[
"~Duc_Anh_Nguyen1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2221/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2221/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2221/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2221/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Supplementary\", \"comment\": \"Hi Duc,\\n\\nThanks for your comments. We put the S3DIS results along with our brief analysis in a supplementary. Please see the latest updates. We did not test other realistic datasets such as Scannet. We found that some future investigation is needed to enable the network to handle the 'unstructured' point clouds (which potentially not follow the nature advection process). But we could conduct these tests if necessary. Thank you again for all your time and suggestions!\\n\\nBest,\\nThe authors\"}",
"{\"title\": \"Where is the result on S3DIS?\", \"comment\": \"Hi. Congratulations on the acceptance. The paper seems interesting. However, I couldn't find the result of segmentation on the S3DIS dataset. Could you please show me?\\nAlso, have you tried your method on some other realistic dataset (for e.g., Scannet)?\\nThanks.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper treats the task of point cloud learning as a dynamic advection problem in conjunction with a learned background velocity field. The resulting system, which bridges geometric machine learning and physical simulation, achieves promising performance on various classification and segmentation problems. Although the initial scores were mixed, all reviewers converged to acceptance after the rebuttal period. For example, a better network architecture, along with an improved interpolation stencil and initialization, lead to better performance (now rivaling the state-of-the-art) as compared to the original submission. This helps to mitigate an initial reviewer concern in terms of competitiveness with existing methods like PointCNN or SE-Net. Likewise, interesting new experiments such as PIC vs. FLIP were included.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Rebuttal and Revised Submission\", \"comment\": [\"Dear Reviewers,\", \"Thank you for all the constructive comments and the valuable feedback. We are very pleased to address your concerns and incorporate your suggestions by making several main updates (see the bullets below) in the revised manuscript. All the changes are colored as blue in the new version.\", \"# MAIN CHANGES\", \"A local grid-particle interpolation scheme;\", \"Scalable grid resolution (with new 16^3 and 32^3 results);\", \"State-of-the-art accuracy (compared with PointCNN and DGCNN);\", \"Simplified network architecture;\", \"More convergence and ablation tests;\", \"Suggested reference;\", \"Physical intuitions behind design decisions;\", \"Condensed pages (to 8).\"], \"scalable_grid\": \"The problem of scalability was addressed by our new implementation of a local trilinear interpolation stencil. Thanks to this software engineering effort, we are able to test the performance of grids with different resolutions and identify the best fit for a point-set. We discovered that the performance of the particle-grid structure is correlated to the average number of particles in cell (ppc), which aligns with the empirical experience of using PIC/FLIP engineering. Our experiments indicate that the best learning performance lives with ppc=1.5, implying the best grid size to be 16^3 for a 1024 point-cloud and 32^3 for a 2048 one (see Section 5.1 for details).\\n\\nROLE OF GRID IN A HYBRID REPRESENTATION\\nThe primary role of a background grid is to enable an efficient construction of the various differential operators (in our case convolution) on-the-fly without fitting a local parameter space from the potentially noisy Lagrangian samples. The existence of such grid-based differential stencils enables fast numerical implementation, efficient data access patterns thanks to its cache-friendly data storage. From the learning point of view, a grid can be regarded as a perceptron for the relation on a coarse-level (similar to the role of the furthest point step in PointNet++ or the KNN step in DGCNN), but with a more regularized and efficient spatial representation. This representation complements the local particle-based operators. \\n\\nNETWORK ARCHITECTURE\\nWe simplified the network architecture by merging the interpolation and advection modules to further approach the essence of its physical model (see Section 4 for details). This simplification leads to a more economical computing model w.r.t both memory and time.\\n\\nSTATE-OF-THE-ART PERFORMANCE\\nThe better network architecture, in conjunction with the improved interpolation stencil (see Section 4) and initialization conditions (see Section 3), leads to a better performance compared to our previous manuscript. Our current model with the improved implementations can rival the state-of-the-art approaches, in particular PointCNN and DGCNN, regarding both accuracy and memory efficiency. As shown in Table 4, the AdvectiveNet can beat PointCNN and DGCNN in approximately half of the test cases we have run, yet consuming only **4% - 25%** model parameters of the state-of-the-art. Currently, we evaluated the new version on most of the datasets and obtained very promising results (S3DIS is expected to finish in a few days). 
Hence, we believe this approach can become an effective tool for point-cloud processing with state-of-the-art performance.\", \"temporal_evolution\": \"We conducted a series of further evaluations on the temporal discretization and observed that our method is stable with large time steps (see the Ablation Tests for details). We demonstrated such stability regarding both the temporal convergence (accuracy) and the spatial convergence (the final shape advected with different timesteps). We also gave more physical and numerical explanations for each of the tests.\", \"other_minor_comments\": \"Reviewer #1:\\n-- Potential advantages: \\nBesides its competitive accuracy, the two main advantages of our Eulerian-Lagrangian approach are its low GPU memory consumption (1.1G GPU memory with batch size 16 and 1024 points on ModelNet10) and its low-dimensional feature space (projected onto the physical space), which enables dynamic neighbor relations.\\n\\n-- Suggested reference:\\nWe have incorporated the suggested references in our related work. Thank you!\\n\\nReviewer #2:\\n-- Role of advection flow\\nWe use an Eulerian flow field to dynamically rebuild the neighbor relations in a physical space, in which case incompressibility was not our major concern. But this is an interesting property to explore and we added it to our future work. \\n\\n-- Intuitive deformation\\nWe provided further comparisons in Figure 6 to demonstrate the shape convergence.\\n\\n-- Spatial extent of grid\\nThe boundary conditions guarantee that the grid covers the particles advected by the learned velocity field. \\n\\n-- Particle sticking together\\nParticles won't stick together during advection because of the compatible resolutions between the Eulerian and Lagrangian degrees of freedom (see our discussion on the ppc number in Section 5.1).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper addresses the task of learning with point clouds for semantic labeling (classification and segmentation). The authors propose a novel point-based architecture based on viewing the learning process as an advection in the 3D space. This formulation aims at an explicit connection the two formulations for learning with point clouds, the first being focused on points (Lagrangian formulation), the second on the regular spatial grid not necessarily coinciding with points (Eulerian formulation). While the connection between the two formulation is known in the literature, the paper does a good overview of the relevant work and highlights the interplay between the two treatments for learning, which is valuable to the reader. The proposed view of learning with point clouds is, as far as I know, novel.\\n\\nWith the proposed learnable operations, the authors are able to efficiently learn the functions defined in 3D space, such as the semantic class labels. The operations include transferring the features between the grid and the point cloud, advection, and interpolation, all implemented in a unified learnable model. \\n\\nThe architecture is evaluated on classification and segmentation tasks with common datasets, where it performs on par with existing methods. While the experimental evaluation does not indicate that the proposed method is a new state-of-the-art, it convincingly validates that the proposed method is capable of learning powerful enough representations. \\n\\nI believe the paper should be accepted for publication, as (1) the proposed method is generally novel while it bases on solid and well-known foundations, (2) the experimental validation is sufficient to demonstrate the capabilities of the approach.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is about using classical PIC/FLIP scheme in Computational Fluid Dynamics for solving the learning problem of 3D object detection and segmentation. In general, there are extrinsic CNNs like the Vox net etc. which look for global features which the authors refer to as Eulerian formulation of the data representation, and there are intrinsic CNNs like the GCN(graph convolutions), Point nets etc. which look for localized neighborhood information which the authors refer to as Lagrangian formulation. The authors acknowledge that hybridizing the extrinsic CNNs and intrinsic CNNs is not new and several works are cited. The key contribution is to look at this problem from the perspective of PIC/FLIP scheme which has been used in CFD for decades.\\n\\nThe idea is very nice, well describes and quite novel in my opinion. I really liked the adoption of classical CFD approaches in learning. This provides a very interesting perspective to 3D deep learning. \\n\\nHowever, the papers struggles to demonstrate why the 3D deep learning community would adopt this approach. The results are not that conclusive. The algorithm works (understood well from the ablation study). However, the performance of the proposed approach is at best comparable to some of the state-of-the-art methods such as PointCNN or SE-Net. The authors need to clarify what potential advantages could there be other than accuracy (if any) such that the community uses the proposed method. \\n\\nAlso, the grids used in the study are too low to make any conclusive remarks on what happens when dealing with higher resolutions of grid. The authors themselves acknowledge the limitation of not being able to go higher in resolution of grid. Interestingly, such limitations of CFD has recently motivated the community to explore deep learning based fast and agile surrogates for computationally tractable approaches. \\n\\nSome of the new works in 3D object recognition and segmentation such as Deep SDF(https://arxiv.org/abs/1901.05103), AtlasNet(https://arxiv.org/abs/1802.05384), Deep Level Sets (https://arxiv.org/abs/1901.06802), occupancy networks (https://arxiv.org/pdf/1812.03828v1.pdf), http://openaccess.thecvf.com/content_cvpr_2018/html/Yu_PU-Net_Point_Cloud_CVPR_2018_paper.html, Adaptive O-CNN (https://dl.acm.org/citation.cfm?id=3275050), https://arxiv.org/abs/1805.12254, 3D Point Capsule Networks, http://t.cvlibs.net/publications/Niemeyer2019ICCV.pdf etc. can be compared with or at least contrasted in the related works.\\n\\nIn summary, I really liked the algorithmic idea, but skeptical about its practical relevance from the results.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a method for point-based learning that is inspired by a hybrid Eulerian-Lagrangian fluid simulation method. The work first explains how the simulation algorithm is mapped to the learning problem: MLPs are employed to learn sets of particle based features which are mapped to a Eulerian grid. A second MLP infers a particle based velocity, which is likewise mapped to the grid and used to advect the grid quantities. This is repeated for a certain number of steps to obtain final positions. The \\\"warped\\\" features are then projected back onto the particles to solve, e.g., a classification task. In contrast to a typical flow solver, the motion can be divergent, i.e., not necessarily conserves volume.\\n\\nThe paper presents a brief ablation study for number of iterated steps, grid size and point count, before presentation two comparisons with existing baselines.\\n\\nOverall, I found the idea to employ FLIP for Lagrangian learning tasks novel and very interesting. Unfortunately, the paper (as mentioned in the text) only contains only a somewhat preliminary study. The method does not yield clear gains over previous work, but rather a similar performance for classification and segmentation of ShapeNet and S3DIS data is shown. Given the fairly complicated construction, I think it would be important to actually show improvements at least for specific learning tasks. Several of the deformations shown in figure 5 and 6 are also not really intuitive\\n\\nAlso, on second sight, I don't fully understand the motivation for employing and learning a grid based deformation. The grids seem to inherently limit the spatial extent of the point clouds, and the features that can be resolved. Features smaller than a grid cell will essentially \\\"stick together\\\", and can't be separated. It's also not obvious how to choose parameters such as the number of time steps. Intuitively, I'd expect the method to \\\"converge\\\" to a position for a larger number of steps.\\n\\nTo conclude, the direction this paper takes is certaily new and interesting, but the preliminary results in combination with the complexity and limitations introduced by the grid-based representation make me hesitant to recommend accepting this paper in its current form. (The nine pages also contribute to this assessment.)\"}"
]
} |
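To make the Eulerian-Lagrangian round trip described in the record above concrete, the following NumPy sketch illustrates the PIC/FLIP-style transfer it discusses: particle features are scattered onto a background grid with trilinear weights, particles are advected by a velocity field (learned in the paper; a fixed toy field here), and features are gathered back onto the particles. The grid resolution, feature width, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal PIC-style particle/grid transfer in NumPy (illustrative sketch).
import numpy as np

def trilinear(pos, res):
    """Eight corner indices and trilinear weights for points in [0, 1)^3."""
    x = np.clip(pos, 0.0, 1.0 - 1e-6) * (res - 1)
    i0 = np.floor(x).astype(int)                  # lower-corner cell index
    f = x - i0                                    # in-cell fractional offset
    corners, weights = [], []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                c = np.clip(i0 + np.array([dx, dy, dz]), 0, res - 1)
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                corners.append(c)
                weights.append(w)
    return corners, weights

def to_grid(pos, feat, res):
    """Scatter particle features onto a res^3 grid, weight-normalized."""
    grid = np.zeros((res, res, res, feat.shape[1]))
    mass = np.zeros((res, res, res, 1))
    for c, w in zip(*trilinear(pos, res)):
        np.add.at(grid, (c[:, 0], c[:, 1], c[:, 2]), w[:, None] * feat)
        np.add.at(mass, (c[:, 0], c[:, 1], c[:, 2]), w[:, None])
    return grid / np.maximum(mass, 1e-8)

def to_particles(pos, grid):
    """Gather grid features back onto particles with the same weights."""
    out = np.zeros((pos.shape[0], grid.shape[-1]))
    for c, w in zip(*trilinear(pos, grid.shape[0])):
        out += w[:, None] * grid[c[:, 0], c[:, 1], c[:, 2]]
    return out

pos = np.random.rand(1024, 3)                 # point cloud in [0,1)^3
feat = np.random.randn(1024, 8)               # per-particle features
grid = to_grid(pos, feat, res=16)             # Lagrangian -> Eulerian
vel = 0.05 * np.sin(2 * np.pi * pos)          # toy stand-in for the
pos = np.clip(pos + vel, 0.0, 1.0 - 1e-6)     # learned velocity field
feat = to_particles(pos, grid)                # Eulerian -> Lagrangian
```

In the paper's architecture the toy `vel` above would be replaced by the output of a learned network, and the scatter/gather pair is what the rebuttal's "particles per cell" (ppc) discussion is sizing.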
rylqmxBKvH | Unsupervised Spatiotemporal Data Inpainting | [
"Yuan Yin",
"Arthur Pajot",
"Emmanuel de Bézenac",
"Patrick Gallinari"
We tackle the problem of inpainting occluded areas in spatiotemporal sequences, such as cloud-occluded satellite observations, in an unsupervised manner. We place ourselves in the setting where there is neither access to paired nor unpaired training data. We consider several cases in which the underlying information of the observed sequence in certain areas is lost through an observation operator. In this case, the only available information is provided by the observation of the sequence, the nature of the measurement process and its associated statistics. We propose an unsupervised-learning framework to retrieve the most probable sequence using a generative adversarial network. We demonstrate the strong reconstruction capacity of our model on several video datasets such as satellite sequences or natural videos.
| [
"Deep Learning",
"Adversarial",
"MAP",
"GAN",
"neural networks",
"video"
] | Reject | https://openreview.net/pdf?id=rylqmxBKvH | https://openreview.net/forum?id=rylqmxBKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BevIX-9o2Y",
"HkgHEBLhjB",
"rkgs_3rnir",
"SJeTm2SnoB",
"r1xzroShsS",
"ryl7gsB3oH",
"rkl9Dco75S",
"rJeZL4PtYr",
"HJlXPY_SFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743550,
1573836077116,
1573833843174,
1573833764996,
1573833529605,
1573833450980,
1572219489639,
1571546185507,
1571289434989
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2220/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2220/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2220/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2220/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2220/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2220/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2220/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2220/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies the problem of unsupervised inpainting occluded areas in spatiotemporal sequences and propose a GAN-based framework which is able to complete the occluded areas given the stochastic model of the occlusion process. The reviewers agree that the problem is interesting, the paper is well written, and that the proposed approach is reasonable. However, after the discussion phase the critical point raised by AnonReviewer1 remains: in principle, when applying different corruptions in each step, the model is able to see the entire video over the duration of the training. This coupled with the strong assumptions on the mask distribution makes it questionable whether the approach should be considered unsupervised. Given that the results of the supervised methods significantly outperform the unsupervised ones, this issue needs to be carefully addressed to provide a clear and convincing selling point. Hence, I will recommend rejection and encourage the authors to address the remaining issues (the answers in the rebuttal are a good starting point).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Changes to the Paper\", \"comment\": \"Thanks to all the reviewers for their comments and suggestions. We tried to take all of them into account, we reorganized the paper accordingly and hope to provide all the required precisions.\\n\\nWe address below some general comments/questions raised by the reviewers and then give detailed answers for each review.\\n\\n1) We have made the details of [1] clearer, as the description raised some ambiguities.\\n2) As suggested by review 1, we performed additional tests using [2] and [3], two SOTA unsupervised image inpainting approaches. The results are provided in the table below and have been added in Appendix E in the paper. The table shows quantitative results for [2, 3] and our approach for the FaceForensics++ dataset with Raindrops noise. We show that the inpainting results of [2, 3] are quantitatively worse, especially for temporal quality metric FVD.\\n\\nA qualitative illustration of the behavior of these models has also been added to the following site, they confirm their poor performance:\", \"https\": \"//sites.google.com/view/unsup-video-inpaiting/.\\n\\n3) Finally, we corrected the typos suggested by the reviewers.\", \"references\": \"[1] Alasdair Newson, Andr\\u00e9s Almansa, Matthieu Fradet, Yann Gousseau, and Patrick P\\u00e9rez. Video inpainting of complex scenes,\\n[2] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Deep Image Prior.\\n[3] Antonio Criminisi, Patrick P\\u00e9rez, and Kentaro Toyama. Region filling and object removal by exemplar-based image inpainting.\"}",
"{\"title\": \"Reply to Blind Review #3\", \"comment\": \"Thank you for your comments and questions. Below we address your remarks.\\n\\nNewson et al. (2014) [1] is one of the SOTA patch-based methods for unsupervised inpainting. It searches for the nearest neighbours of an occluded area using an Approximate Nearest Neighbor (ANN) search with the PatchMatch algorithm [2]. The occluded area is then reconstructed by assembling information from these neighbors at multiple scales. To initialize the algorithm, PatchMatch needs to find a valid initial guess for each occluded pixel, which indicates where to find the corresponding patch in the sequence. This initialization will not terminate until every occluded pixel is pointed to a patch who does not include any other occluded pixel.\\n\\nSince [1] specifically looks for rectangular cuboids of video information, it is extremely well adapted for the Moving-Vertical-Bar and thus performs well. However, for more general complex types of noise such as Raindrops, Remove-Pixel, and Cloud noises, it cannot work properly. More precisely, it remains blocked in the search for relevant candidate patches for the occluded pixels. This means that no matter how hard it tries, it cannot complete the task. The absence of the experiment results for [1] in tables 1 and 2 is due to this issue and not to insufficient running time. For example, we also tried with very small patches (3x3x3), a reasonable minimum size for spatiotemporal patches, the algorithm still remains blocked and is then unable to terminate. This was probably not clear enough in the original manuscript and this will be made explicit in a new version.\", \"minor_comments\": \"sure this is SOTA, thanks.\", \"references\": \"[1] Alasdair Newson, Andr\\u00e9s Almansa, Matthieu Fradet, Yann Gousseau, and Patrick P\\u00e9rez. Video inpainting of complex scenes.\\n[2] Connelly Barnes, Eli Shechtman, Adam Finkelstein, Dan B. Goldman. PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing.\"}",
"{\"title\": \"Reply to Blind Review #2\", \"comment\": \"Thank you for your kind and helpful comments. We provide below some clarifications.\\n\\n- Comments on the Experiments\\n\\nWe tried hard to perform the experiments for baseline (2) [1]. We here explain in detail its results shown in tables 1 and 2. \\n\\n[1] searches for the nearest neighbours of an occluded area using an Approximate Nearest Neighbor (ANN) search with the PatchMatch algorithm [11]. The occluded area is then reconstructed by assembling information from these neighbors at multiple scales. To initialize the algorithm, PatchMatch needs to find a valid initial guess for each occluded pixel, which indicates where to find the corresponding patch in the sequence. This initialization will not terminate until every occluded pixel is pointed to a patch who does not include any other occluded pixel.\\n\\nSince [1] specifically looks for rectangular cuboids of video information, it is extremely well adapted for the Moving-Vertical-Bar and thus performs well. However, for more general complex types of noise such as Raindrops, Remove-Pixel, and Cloud noises, it cannot work properly. More precisely, it remains blocked in the search for relevant candidate patches for the occluded pixels. This means that no matter how hard it tries, it cannot complete the task. The absence of the experiment results for [1] in tables 1 and 2 is due to this issue and not to insufficient running time. For example, we also tried with very small patches (3x3x3), a reasonable minimum size for spatiotemporal patches, the algorithm still remains blocked and is then unable to terminate. This was probably not clear enough in the original manuscript and this will be made explicit in a new version.\\n \\n- Additional Experimental Results\\n\\nAs suggested,we performed additional tests using [2] and [3], two SOTA unsupervised image inpainting approaches. The results are provided in the table below and have been added in appendix E in the paper. The table shows quantitative results for [2, 3] and our approach for the FaceForensics++ dataset with Raindrops noise among, and our model. We show that the inpainting results of [2, 3] are quantitatively worse, especially for temporal quality metric FVD.\\n\\n-------------------------------------------------------------\\n | FID | FVD | MAE \\n-------------------------------------------------------------\\n [2] | 44.84 | 2410.62 | 0.2271\\u00b10.1560 \\n [3] | 147.86 | 3617.92 | 0.5533\\u00b10.1246 \\n-------------------------------------------------------------\\nOurs | 43.72 | 1574.89 | 0.0834\\u00b10.0187 \\n-------------------------------------------------------------\\n\\nA qualitative illustration of the behavior of these models has been added to the following site, they confirm their poor performance: https://sites.google.com/view/unsup-video-inpaiting/. \\n\\n- On the Mask Distribution Hypothesis\\n\\nThe assumption of a specific observation model (i.e. in our case, the known mask distribution) for image imputation/denoising is widely adopted in the vision and deep learning community [2, 7, 8] and also in the physical modeling community [4-6]. For unsupervised video inpainting approaches, even stronger assumptions on the content and/or the form of masked and unmasked region are often used. For example the assumption of the existence of a object-background segmentation in [9] or the existence of spatiotemporal patches in [1]. Our assumptions follow this line of research. 
Recently, MisGAN [10] relaxed this assumption for the case where the positions of missing pixels can be identified. This could be an extension of our work.\", \"references\": \"[1] Alasdair Newson, Andr\u00e9s Almansa, Matthieu Fradet, Yann Gousseau, and Patrick P\u00e9rez. Video inpainting of complex scenes.\\n[2] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Deep Image Prior.\\n[3] Antonio Criminisi, Patrick P\u00e9rez, and Kentaro Toyama. Region filling and object removal by exemplar-based image inpainting.\\n[4] Unified Notation for Data Assimilation. Operational, Sequential and Variational\\n[5] M. Bocquet, C.A. Pires, and L. Wu. Beyond Gaussian Statistical Modeling in Geophysical Data Assimilation.\\n[6] R. Lguensat, P. Tandeo, P. Ailliot, M. Pulido, and R. Fablet. The Analog Data Assimilation.\\n[7] Ashish Bora, Eric Price, and Alexandros G. Dimakis. AmbientGAN: Generative models from lossy measurements.\\n[8] Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila. Noise2Noise: Learning image restoration without clean data.\\n[9] S. S. Cheung, J. Zhao, and M. V. Venkatesh. Efficient object-based video inpainting. \\n[10] Steven Cheng-Xian Li, Bo Jiang, Benjamin M. Marlin. MisGAN: Learning from Incomplete Data with Generative Adversarial Networks.\\n[11] Connelly Barnes, Eli Shechtman, Adam Finkelstein, Dan B. Goldman. PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing\"}",
"{\"title\": \"Reply to Blind Review #1 (Part 2)\", \"comment\": \"3) Let us first introduce some more details on [5].\\n\\n[5] searches for the nearest neighbours of an occluded area using an Approximate Nearest Neighbor (ANN) search with the PatchMatch algorithm [15]. The occluded area is then reconstructed by assembling information from these neighbors at multiple scales. To initialize the algorithm, PatchMatch needs to find a valid initial guess for each occluded pixel, which indicates where to find the corresponding patch in the sequence. This initialization will not terminate until every occluded pixel is pointed to a patch which does not include any other occluded pixel.\\n\\nSince [5] specifically looks for rectangular cuboids of video information, it is extremely well adapted for the Moving-Vertical-Bar and thus performs well. However, for more general complex types of noise such as Raindrops, Remove-Pixel, and Cloud noises, it cannot work properly. More precisely, it remains blocked in the search for relevant candidate patches for the occluded pixels. This means that no matter how hard it tries, it cannot complete the task. The absence of the experiment results for [5] in tables 1 and 2 is due to this issue and not to insufficient running time. For example, we also tried with very small patches (3x3x3), a reasonable minimum size for spatiotemporal patches, the algorithm still remains blocked and is then unable to terminate. This was probably not clear enough in the original manuscript and this will be made explicit in a new version.\\n\\nFor the concern on the performance comparison, even though the official code of [5] is written in MATLAB, it uses essentially C++ to achieve optimal performance on CPU. However, due to the nature of [5]\\u2019s iterative algorithm, an end-to-end acceleration by GPU is not possible. For each inference, it must search in the whole sequence for a small patch for each missing point, which naturally requires a lot of computing. As it is also iterative, the algorithm cannot be parallelized between different iterations and different substeps of each iteration. In contrast, as our inpainter is a neural network, it can fully benefit from GPU speedup leading the inference time down to around 0.5 seconds per sequence. We will update our manuscript to better explain this comparison.\", \"references\": \"[5] Alasdair Newson, Andr\\u00e9s Almansa, Matthieu Fradet, Yann Gousseau, and Patrick P\\u00e9rez. Video inpainting of complex scenes.\\n[6] Zoubin Ghahramani, and Michael I. Jordan. Supervised learning from incomplete data via an EM approach.\\n[7] Roderick J. A. Little, and Donald B. Rubin. Statistical Analysis with Missing Data.\\n[8] Jinsung Yoon, James Jordon, Mihaela van der Schaar. GAIN: Missing Data Imputation using Generative Adversarial Nets.\\n[9] Ashish Bora, Eric Price, and Alexandros G. Dimakis. AmbientGAN: Generative models from lossy measurements.\\n[10] Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip H\\u00e4usser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox. FlowNet: Learning Optical Flow with Convolutional Networks.\\n[11] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, Thomas Brox: FlowNet 2.0. Evolution of Optical Flow Estimation with Deep Networks.\\n[12] Ruoteng Li, Loong Fah Cheong, Robby T. Tan. Heavy Rain Image Restoration: Integrating Physics Model and Conditional Adversarial Learning.\\n[13] He Zhang, Vishwanath Sindagi, Vishal M. Patel. 
Image De-raining Using a Conditional Generative Adversarial Network.\\n[14] Wenhan Yang, Robby T. Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, Shuicheng Yan. Deep Joint Rain Detection and Removal from a Single Image\\n[15] Connelly Barnes, Eli Shechtman, Adam Finkelstein, Dan B. Goldman. PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing\"}",
"{\"title\": \"Reply to Blind Review #1 (Part 1)\", \"comment\": \"Thank you for your kind and helpful remarks. We provide below clarifications for your questions.\\n\\n1) You are right, a different mask is applied at different time steps. This mask could be time independent (as in Remove-Pixel noise) or time-dependent (for the three other noises: Raindrops, Moving-Vertical-Bar and Cloud noises). The underlying hypothesis is that in many practical situations, the noise is a dynamic and time-evolving process. Alternatively, we could have considered a fixed noise pattern per sequence, in which case our formalism is still valid, or a fixed noise pattern for all the sequences, in which case it is not applicable anymore (because of the non stochasticity of the noise position process). Note that our hypothesis is similar to the one used in the classical data imputation setting in statistics [6, 7], for non dynamic data. This is also the usual assumption in the GAN literature (again on still data): for example in [8, 9] the missing data process is supposed to be stochastic in the sense that the position of the missing data follows a given distribution. The model will then have a chance of observing the whole data distribution, provided it has access to enough data.\\n\\nThe problem is still considered unsupervised because there is no direct supervision, be it paired (noisy image, ground truth image) or unpaired noisy sequences and ground truth sequences without direct correspondence between the two types of sequences. More details are provided below.\\n\\n(a) Unsupervised versus supervised:\\n\\nYour remark is perfectly relevant since given enough data, one can imagine that the model will have access to the whole information from the videos. However, i) this would require an extremely large number of observed sequences, ii) this is an extremely complex task when no guidance or prior is provided to the model. For unsupervised inpainting, the model has to discover by itself which information is relevant for the reconstruction (in this case local spatiotemporal information).\\n\\nIn [1, 2], for example, the inpainter is trained with ground truth information. More than that, the inpainter has access to additional information about pixel displacement through optical flow estimation performed by FlowNets [10, 11]. This further encourages the model to make use of neighbor image information for the reconstruction. Note that optical flow is not available in our case because of the nature of the noise itself.\\n\\nAlternative frameworks based on GANs could be used such as Pix2Pix (Vid2Vid for videos) or variants of CycleGAN (RecycleGAN for videos). These methods have been developed for image or video translation and could be used as well for imputation. However, the former relies on paired supervision and the latter on unpaired supervision, which again are not available in our case. \\n\\nNote that for time-dependent slowly moving masks like Clouds at heavy coverage, there exist numerous areas that will never been seen in the sequence, and our model can still recover the masked area with good spatial and temporal quality.\\n\\n(b) Comparison with supervised baselines:\\n\\nWe propose in the paper a comparison with two supervised variants of our model. One makes use of paired supervision and the other one of unpaired supervision. This is addressed in Section 3.3 for a description of the model variants and in Section 4.1 (last paragraph) for a quantitative comparison. 
Supervision brings a lot of additional information and considerably improves the model performance. However, in most cases such supervision is not available, and this analysis was performed only to show the performance gap w.r.t. an ideal situation.\\n\\n2) Thank you for proposing a large number of existing de-rain datasets. However, even if we consider the datasets for frame-by-frame de-raining, the associated mask distributions in [3, 4] are not natural. They are often simulated with control parameters, as for instance in [3, 12]. Some of the methods have a natural subset of rainy images, but no annotated mask comes with the datasets. They are usually used only at test time, after training the model with synthetic masks, such as in [13, 14]. Note that our Cloud measurement is already a quite realistic mask distribution based on very sophisticated large-scale cloud simulation, showing that our model is capable of working in real-world applications.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper addresses the problem of reconstructing a video sequence that contains occlusions. The paper focusses on remove four particular types of simulated occlusion - Raindrops, Remove pixel, Cloud and Vertically moving Bar. The authors show good results in reconstructing these video sequences in an unsupervised manner. The paper uses a GAN based network to accomplish inpainting in the occluded regions. The authors claim that the method is very flexible in terms of the data that needs to come in, and test this by deploying the method to solve quite different missing data problems (Using different type of occlusion and in different contexts).\\n\\nOne interesting note about the experiments is that the presented method is outperformed by Newson et. al[2014] in all experiments where Newson's method does not complete. I think we need more data for this to be a reasonable constraint. How long was the experiment allowed to run without completing? Do you think there is a reasonable prospect that it could finish, given more time? Other than that the method performs extremely well across all tasks.\\n\\nThe paper is well written, and the methods and experiments are convincing. Another very good aspect of this paper is that the accompanying website is very good and contains code for the methods and experiments used. The work is also sufficiently novel, and has some clever tricks (such as having the discriminator consider the inter frame difference).\\n\\nMinor issues\\nOn pg 7 - I am unfamiliar with the uses of sota, and my intuition is that is should perhaps be SOTA. If this is common usage, please ignore me :)\\nOn pg 9 - framwork (typo)\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary\\nThe paper presents an approach to perform video inpainting from corrupted input only . Their approach proposed a method that uses GANs for denoising images to now handle full sequences of images by building on top work that handles inpainting in single images.\\n\\nStrengths\\n1) The authors present extensive experiments on many datasets. \\n2) The presented approach is simple and general enough to be applied to many video problems.\\n3) The provided code is well structured and easy to read and atrached webpage showcases their results well.\\n4) The paper is well-written.\\n\\nWeakness\\n1) From the code and the description in paper, it seems that a different corruption (https://github.com/anon-ustdi/ustdi/blob/7a81db4972ef9d4eabbd8fe354a8984a7771ae5d/src/datasets/corrupted.py) is applied for each step. Can the authors confirm if that is the case? \\n\\nIf this is the case, I am afraid the method cannot be called unsupervised as across many steps the model would have seen different corruptions of the same video and across many such corruptions the model can learn what an uncorrupted video looks like. It is okay if that is the case but the claim of unsupervised would not hold. If that is the case, the authors need to make comparison with other supervised inpainting methods as well [1,2]\\n\\n2) Is there a dataset for which these corruptions exist naturally? May be [3, 4]. It would be nice to have experiments on a dataset where the corruptions are present naturally.\\n\\n3) While the experiments are incomplete for Table 1, [4] outperforms the proposed approach for the one dataset the numbers have been reported. Performance comparison with [4] are not fair as [4]'s implementation is in MATLAB.\\n\\nDecision\\nWhile the presented approach is good, further experiments are required to further validate the effectiveness of their approach in an unsupervised setting. \\n\\nReferences\\n[1] Chuan Wang, Haibin Huang, Xiaoguang Han, and Jue Wang. Video inpainting by jointly learning temporal structure and spatial details.\\n[2] Dahun Kim, Sanghyun Woo, Joon-Young Lee, and In So Kweon. Deep video inpainting. \\n[3] https://github.com/stayhungry1/Video-Rain-Removal\\n[4] https://github.com/nnUyi/DerainZoo\\n[5] Alasdair Newson, Andr\\u00e9s Almansa, Matthieu Fradet, Yann Gousseau, and Patrick P\\u00e9rez. Video inpainting of complex scenes.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a GAN based approach for unsupervised video inpainting. The idea is to learn an auto encoder to reconstruct the unoccluded regions of the frames, and reuse the autoencoder as the generator for a video GAN, which discriminates between real noisy videos and generated videos masked by a known mask distribution.\\n\\nTo me the general idea is a straightforward application of GANs to the general inpainting/denoising problem, with the novelty being 1) not relying on paired or unpaired references and 2) extension to the video domain. The main critique I have is that the work seems incomplete and does not show great empirical results, which makes this work a much weaker contribution. Obviously, the authors should complete the experimental results on baseline method 2. It'd also be informative to compare with SOTA unsupervised image inpainting algorithms, if more such baselines are available than video based ones. \\n\\nAnother limitation of this work is that a known mask distribution is assumed, while this might be a reasonable assumption in some cases, it can also be easily violated in real world problems. It'd be great improvement if such an assumption can be relaxed, for example, assuming a family of mask distributions but not the exact instantiation. \\n\\nOverall I think this work attempts to solve an interesting problem with an incremental but reasonable approach, and more empirical evaluation is needed to make the paper meet the bar for acceptance.\"}"
]
} |
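As a reading aid for the training setup debated above, here is one plausible PyTorch-style sketch of the losses the paper and rebuttals describe: an L1 reconstruction term restricted to observed pixels, plus an adversarial term in which the completed video is re-occluded with a fresh mask drawn from the known mask distribution, so the discriminator only ever compares occluded observations. Names, shapes, the hinge-GAN form, and the loss weight are our assumptions, not the authors' implementation.

```python
# Hedged sketch of an unsupervised video-inpainting objective (assumed API).
import torch
import torch.nn.functional as F

def inpainting_losses(G, D, y, m, sample_mask, adv_weight=0.01):
    """y: occluded video (B, T, C, H, W); m: its mask, 1 = observed pixel.
    sample_mask draws a new mask from the known mask distribution."""
    x_hat = G(y, m)                                  # completed video
    rec = F.l1_loss(x_hat * m, y * m)                # fit visible pixels only
    m_new = sample_mask(y.shape)                     # fresh mask, same law
    fake = x_hat * m_new                             # re-occluded completion
    g_loss = rec - adv_weight * D(fake).mean()       # generator fools D
    d_loss = (F.relu(1.0 - D(y)).mean()              # hinge loss: real ...
              + F.relu(1.0 + D(fake.detach())).mean())  # ... vs. fake
    return g_loss, d_loss
```

Because the discriminator compares occluded data on both sides, no clean reference sequence is ever required, which is the crux of the "unsupervised" claim debated in the reviews above.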
r1xF7lSYDS | Transferable Recognition-Aware Image Processing | [
"Zhuang Liu",
"Tinghui Zhou",
"Zhiqiang Shen",
"Bingyi Kang",
"Trevor Darrell"
] | Recent progress in image recognition has stimulated the deployment of vision systems (e.g. image search engines) at an unprecedented scale. As a result, visual data are now often consumed not only by humans but also by machines. Meanwhile, existing image processing methods only optimize for better human perception, whereas the resulting images may not be accurately recognized by machines. This can be undesirable, e.g., the images can be improperly handled by search engines or recommendation systems. In this work, we propose simple approaches to improve machine interpretability of processed images: optimizing the recognition loss directly on the image processing network or through an intermediate transforming model, a process which we show can also be done in an unsupervised manner. Interestingly, the processing model's ability to enhance the recognition performance can transfer when evaluated on different recognition models, even if they are of different architectures, trained on different object categories or even different recognition tasks. This makes the solutions applicable even when we do not have the knowledge about future downstream recognition models, e.g., if we are to upload the processed images to the Internet. We conduct comprehensive experiments on three image processing tasks with two downstream recognition tasks, and confirm our method brings substantial accuracy improvement on both the same recognition model and when transferring to a different one, with minimal or no loss in the image processing quality. | [
"Image Recognition",
"Image Processing"
] | Reject | https://openreview.net/pdf?id=r1xF7lSYDS | https://openreview.net/forum?id=r1xF7lSYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"UDcp0KviE8",
"ByeL8DRVor",
"HyxXILCEor",
"SygheU04sB",
"HJe7nBR4sH",
"rJeSsN0VoH",
"rJlzwMAEiH",
"Hkl_dbR4oB",
"ryxXHWAViS",
"SJgmkJdkqB",
"r1xh8HK_Fr",
"BklAQWtjOB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743517,
1573345101793,
1573344842797,
1573344756140,
1573344683434,
1573344413028,
1573343834286,
1573343600513,
1573343546518,
1571942107051,
1571489108112,
1570636069930
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2219/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2219/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2219/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents several models for recognition-aware image enhancement. The authors propose to enhance the image quality in the presence of image degradation (e.g., low-resolution, noise, compression artifacts) as well as to improve the recognition accuracy in a joint model. While acknowledging that the paper is addressing an interesting direction, the reviewers and AC note the following potential weaknesses: presentation clarity, limited technical contributions, insufficient empirical evidence. AC can confirm all the reviewers have read the rebuttal and have contributed to the discussion. All the reviewers and AC agree that the rebuttal was informative, and the authors have partially addressed some of the concerns (e.g. additional experiments). R2 has raised the score from reject to weak reject. However, at this stage AC suggest the manuscript is below the acceptance bar and needs a major revision before submitting for another round of reviews. We hope the reviews are useful for improving and revising the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your positive feedback! We are glad to see your acknowledgement on our contributions, and we are happy address your concerns below:\\n\\n1. In the updated revision, we\\u2019ve added a few citations of classic papers on super-resolution and denoising, in the first sentence of related work: \\u201cImage processing/enhancement problems such as super-resolution and denoising have a long history [1,2,3,4].\\u201d\\n\\n[1] Tsai, R. Multiframe image restoration and registration. Advance Computer Visual and Image Processing 1 (1984): 317-339.\\n[2] Park S C, Park M K, Kang M G. Super-resolution image reconstruction: a technical overview. IEEE signal processing magazine, 2003, 20(3): 21-36.\\n[3] Rudin L I, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena, 1992, 60(1-4): 259-268.\\n[4] Cand\\u00e8s E J, Romberg J, Tao T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on information theory, 2006, 52(2): 489-509.\\n\\n2. Thanks for your suggestion on the naming of the \\u201cTransformer\\u201d. We are also considering renaming it possibly into \\u201ctransforming model\\u201d. But to keep the naming consistent throughout the discussion period, we will keep the original name for now.\\n\\nThank you for your review again! Any further questions or suggestions are welcome.\"}",
"{\"title\": \"Response to AnonReviewer2 [1/4]\", \"comment\": \"Thank you for your constructive feedback! We are happy to address your concerns below and we have uploaded a revision reflecting the changes. For easier reading, we\\u2019ve pasted some of your comments in our response, and please bear with the length of our response.\", \"response_to_major_concern\": \"Technical novelty and contribution\\nWe are glad to see the reviewer agrees that we are addressing an important and timely problem (making image processing outputs more accurately recognized by machines), and our method brings substantial performance gain. We agree that our method towards this goal is simple and straightforward, but we would like to view this simplicity as a strength. It makes our methods easy to implement and potentially more widely used in practice. Our contribution does not lie in designing network architectures or components, but is to showcase our simple methods can work favorably on an important but largely ignored problem, and more interestingly the improvement is even \\u201ctransferable\\u201d. We agree that architecture-wise, our method consists of two separate networks, but the key idea is to use a fixed recognition loss upon the original image processing loss for better machine recognizability of image processing outputs. Our technical contributions are bringing this problem to the community, developing a simple and effective method which imposes a recognition loss, with its variants that are useful in different scenarios, and showing the performance gain is transferable under various conditions.\\n\\n\\u201cOverly heavy\\u201d system\\nOur system might be \\u201cheavier\\u201d during training than plain processing, since it incorporates an additional loss computation with the recognition model. Our training still finishes in a reasonable amount of time (less than one hour to a few hours with a single GPU). More importantly, once the training is finished, the recognition model used as loss is not needed anymore, and during inference, we only need the processing model P, so no additional overhead is introduced when the model is actually put to deployment. We have included this point in section 3.2 in the revision.\\n\\nHyperparameter $\\\\lambda$\\nIncorporating a new loss function often requires tuning of the coefficient hyperparameter, but in our case this hyperparameter $\\\\lambda$ is only grid searched once within a short range for each variant of our methods (RA Processing, Unsupervised RA and RA w/ Transformer), using ResNet-18 as the recognition model and super-resolution as processing tasks. The same $\\\\lambda$ is then used on other processing tasks and recognition models. This means a consistent $\\\\lambda$ works well with different conditions. An analysis of the hyperparameter $\\\\lambda$ with RA Processing is presented at Table 6, we can see that $\\\\lambda$ from 1e-4 to 1e-2 all bring substantial improvement in terms of recognition accuracy. As for each variant of our method, this brief grid search is necessary since unsupervised RA and RA processing have very different forms of loss functions and the two losses differ a lot in scales. \\n\\n\\u201cIt was hard hard to find interesting ideas that future readers may learn from the paper\\u201d\\nOverall, we raised an important problem to the community, developed a simple method (and several variants for different use cases) that can work on the problem, and presented the intriguing \\u201ctransferability\\u201d of our method. 
This transferability phenomenon is quite surprising to us. It could bring insights into questions like how neural networks function and what different neural networks share in common. We believe our work can be useful to the research community as a new problem is raised, and we hope it encourages researchers to further develop better methods on this problem. Also, our method could be useful for industry as it is simple and gives a substantial performance gain on a very practical problem.\", \"response_to_other_comments\": \"1.\u201cThe 2nd model based on knowledge distillation (KD) is called \\\"unsupervised\\\", which however sounds weird. As already mentioned in the manuscript, the teacher network for KD is trained in a fully supervised manner for the target task, so it cannot be considered as an unsupervised model. Further, the advantage of the 2nd model is marginal in practice.\u201d\\n\\nFirst, we would like to clarify the relation with KD. Our system is similar to KD in terms of the loss function (using a predicted \u201csoft\u201d probability distribution to guide the training instead of \u201chard\u201d ground truth labels), but not in terms of the \u201cteacher-student\u201d model paradigm. In our system, the probability distribution is obtained by feeding the original image to the same pretrained recognition model, but in KD, it\u2019s obtained by feeding the image to a different teacher model. Thus, the pretrained model R is not considered as a \u201cteacher model\u201d, but the original image can be considered as a \u201cteacher image\u201d.\"}",
"{\"title\": \"Response to AnonReviewer2 [2/4]\", \"comment\": \"(..continued) We mentioned in section 3.3 that here \\u201cunsupervised\\u201d is only in terms of training the image processing model P, but not the recognition model R. Our problem setting (section 3.1) assumes that the recognition model R is a given fixed pretrained model, and it can be pretrained either in full supervision or in an unsupervised manner, because its training is not part of our training process and we only use it as a loss function. We only concern about the training of the image processing model P, and the method described in section 3.3 allows us to train P without ground truth label of images. Also, the dataset used to train P is not necessarily the same dataset used to train R (as in section 4.4 & appendix B.2, when we evaluate transferability among recognition tasks, one model is trained with PASCAL VOC and the other model is trained with ImageNet), so even if R needs to be trained by us with full supervision of its dataset, P can still be trained with another dataset of interest without label supervision, so this method could sometimes be practically useful. We\\u2019ve also added more explanation on this point in the section 3.3 of our revision. We welcome suggestions on how to describe the method in a more clear manner. \\n\\n2. \\u201cThe advantage of the transformer in the 3rd model is not clearly discussed. It is unknown in the paper why the 3rd model with the transformer works best in the experiments. Also, regarding the main goal of the paper (i.e., image enhancement not for human but for recognition networks), the reason for adopting the transformer is hard to understand.\\u201d\\n\\nThe transformer often performs best possibly because with this extra network in the middle, the capacity of the whole system is increased: in RA Processing the processing model P optimizes both processing and recognition loss, but now P optimizes processing loss while T optimizes recognition loss. We\\u2019ve added this in the result analysis at the end of section 4.1.\\n\\nThe advantages and disadvantages of using this transformer model was discussed at the end of section 3.4. The reason for adopting this method might be not wanting to affect the original image processing performance (in terms of P\\u2019s outputs), or sometimes the better recognition performance as shown in experiments.\\n\\n3. Degree of corruption\\n\\nThe degree of image corruption (along with other settings) was mentioned at Appendix A as \\u201cExperimental Details\\u201d, due to space limit. To obtain the input images, for super-resolution, we use a downsampling scale factor of 4; for denoising, we add Gaussian noise on the images with a standard deviation of 0.1 to obtain the noisy images; for JPEG deblocking, a quality factor of 10 is used to compress the image to JPEG format. We have moved it to the experiment section in the main text to make it more clear.\\n\\n4. \\u201cThe transferability is one of the most important benefits of the proposed model, but not convincing sufficiently. The proposed models are transferable between different object categories, but the plain models seem to be also transferable, sometime more transparently. Also, it is not clearly discussed what makes the proposed models attaining the transferability.\\u201d\\n\\nOur \\u201ctransferability\\u201d means the improvement over plain processing is transferable, not over \\u201cno processing\\u201d. 
Our baseline in transferability experiments is \\u201cplain processing\\u201d, but not \\u201cno processing\\u201d. The fact that plain processing\\u2019s improvement over no processing is not specific to any recognition model is not that surprising in our opinion. This is because plain processing improves the image\\u2019s overall quality, and it does not use a recognition model in training. The improvement on recognition accuracy of plain processing was shown in some prior works [1,2] which use recognition performance as the metric for processing quality, but is not the focus of our work.\\n\\nIn this work, our aim is better recognition performance than plain processing (only using the traditional image processing loss as in most prior works), and in that regard, the improvement is shown to be transferable under multiple conditions (architectures, categories, recognition tasks) in experiments.\\n\\nOne of the reasons why our method attains transferability is possibly that these models learn many common features that could be useful for general computer vision, especially in shallower layers. More importantly, the reason could be similar to the reason why adversarial examples can transfer among models: different models\\u2019 decision boundaries are similar. [3] studies adversarial examples\\u2019 transferability and shows that the decision boundaries of different models align well with each other; [4] quantitatively analyzes the similarity of different models' decision boundaries, and shows that the boundaries are close in arbitrary directions, whether adversarial or benign. We added this discussion in section 4.2. Thanks for your question.\"}",
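As a concrete illustration of the corruption settings quoted in point 3 of the response above (downsampling by a factor of 4 for super-resolution, Gaussian noise with standard deviation 0.1 for denoising, JPEG quality factor 10 for deblocking), here is a hedged sketch of how such inputs could be generated. The PIL/NumPy calls are standard, but the exact preprocessing pipeline used in the paper is an assumption.

```python
import io
import numpy as np
from PIL import Image

def make_corrupted_inputs(img: Image.Image):
    """Produce the three corrupted inputs described in the response."""
    # Super-resolution input: downsample by a scale factor of 4.
    w, h = img.size
    low_res = img.resize((w // 4, h // 4), Image.BICUBIC)

    # Denoising input: additive Gaussian noise with std 0.1 (images in [0, 1]).
    x = np.asarray(img, dtype=np.float32) / 255.0
    noisy = np.clip(x + np.random.normal(0.0, 0.1, x.shape), 0.0, 1.0)
    noisy_img = Image.fromarray((noisy * 255).astype(np.uint8))

    # JPEG-deblocking input: compress with a quality factor of 10.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=10)
    jpeg_img = Image.open(io.BytesIO(buf.getvalue()))

    return low_res, noisy_img, jpeg_img
```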
"{\"title\": \"Response to AnonReviewer2 [3/4]\", \"comment\": \"5. ImageNet-C benchmark.\\nThanks for the suggestion. ImageNet-C is indeed a related benchmark to our work. This benchmark focuses on improving the robustness of the recognition model under various image conditions, while our work focuses on making the images more recognizable by a conventional recognition model. Thus we believe the emphasis is slightly different. \\n\\nDespite having a different emphasis, ImageNet-C is still a dataset where we can evaluate our methods on. We have run some experiments on ImageNet-C to showcase the results. Since only the corrupted validation set but not the training set is provided by the authors, we divide the val set into two halves and use the first/second half for training/testing. \\n\\nIn the table below, we evaluate RA Processing on all 17 types of corruptions, with corruption level 5. We observe that RA Processing brings consistent improvement over plain processing, sometimes by an even larger margin than the tasks considered in section 4.\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\\nType orig brit contr defoc elast gaub gaun glass impul jpeg motn pixel shot satr snow spat speck zoom\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\\nNo Proc. 69.9 51.3 3.3 11.3 17.1 9.3 1.2 8.7 1.0 29.4 11.1 23.1 1.8 39.5 10.7 19.1 7.7 17.6\\nPlain Proc. N/A 59.9 18.3 25.3 18.9 21.5 21.8 20.1 24.1 43.0 42.4 50.1 24.9 54.4 34.5 60.8 36.6 17.0\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\\nRA Proc. N/A 61.4 30.7 33.8 35.4 27.0 32.8 25.3 35.1 46.1 48.2 54.0 35.2 57.1 43.7 63.0 45.2 31.9\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nIn the table below, we experiment with different levels of corruptions with corruption type \\\"snow\\\" and \\\"speckle noise\\\". We also evaluate our variants -- Unsupervised RA and RA with Transformer. We observe that when the corruption level is higher, our methods tend to bring more recognition accuracy gain.\\n\\n----------------------------------------------------------------------------------------------------------\\nCorruption Type Snow Speckle noise\\n----------------------------------------------------------------------------------------------------------\\nCorruption Level 1 2 3 4 5 1 2 3 4 5\\nNo Processing 46.7 23.6 28.0 17.6 10.7 50.5 42.8 22.9 14.5 7.7\\nPlain Processing 57.1 45.1 46.0 37.1 34.5 60.3 57.0 48.4 43.2 36.6\\n-----------------------------------------------------------------------------------------------------------\\nRA Processing 60.3 51.7 51.7 45.7 43.7 62.7 60.8 54.2 50.3 45.2\\nUnsupervised RA 60.2 51.3 50.6 43.6 41.5 62.9 60.5 53.8 49.4 43.9\\nRA w/ Transformer 55.7 46.7 48.1 42.7 40.9 59.0 57.7 52.2 49.2 44.7\\n-----------------------------------------------------------------------------------------------------------\"}",
"{\"title\": \"Response to AnonReviewer2 [4/4]\", \"comment\": \"In the table below, we examine the transferability of RA Processing between recognition architectures, using the same two tasks \\\"snow\\\" and \\\"speckle_noise\\\", with corruption level 5. Note the recognition loss used during training is from a ResNet-18, and we evaluate the improvement over plain processing on ResNet-50/101, DenseNet-121 and VGG-16. We observe that the improvement over plain processing is transferable among different architectures.\\n------------------------------------------------------------------------------------------------------------------\\nCorruption Type Snow Speckle noise\\n------------------------------------------------------------------------------------------------------------------\\nEvaluation on R18 R50 R101 D121 V16 R18 R50 R101 D121 V16\\n------------------------------------------------------------------------------------------------------------------\\nNo Processing 10.7 16.6 20.9 21.7 10.5 7.7 11.7 14.5 18.6 7.1\\nPlain Processing 34.5 39.1 44.6 41.1 27.4 36.6 42.4 47.7 43.0 31.3\\n------------------------------------------------------------------------------------------------------------------\\nRA w/ R18 43.7 47.9 51.7 47.9 37.4 45.2 50.3 53.3 49.1 39.0\\n------------------------------------------------------------------------------------------------------------------\\n\\n\\nThe ImageNet-C results have also been included in Appendix-D for reference in the revision.\\n\\n6. Missing reference.\\nThanks for pointing us to the related works. We have added the missing references in the related work section.\\n\\nThank you again for your detailed review! We hope our answers address your concerns. If you have any further questions, we are very happy to answer.\\n\\n[1] Colorful Image Colorization. Zhang et al. ECCV 2016.\\n[2] EnhanceNet: Single Image Super-resolution through Automated Texture Synthesis. Sajjadi et al. ICCV 2017.\\n[3] Delving into Transferable Adversarial Examples and Black-box Attacks. Liu et al. ICLR 2017.\\n[4] The Space of Transferable Adversarial Examples. Tram\\u00e8r et al. 2017.\"}",
"{\"title\": \"Response to AnonReviewer1 [1/3]\", \"comment\": \"Thank you for your constructive feedback! We answer your questions below, and we have uploaded a revision addressing your concerns. For easier reading we pasted some of your comments, and please bear with the length of our response.\", \"first_paragraph\": \"\\u201cthe reuse of known neural networks, many simplifications (shortcuts), a not clear enough methodology (see below), limited processing & recognition tasks used to support it, do not justify in our opinion the main (overarching) work\\u2019s claim\\u201d\\n\\n\\u201creuse of known neural networks\\u201d\\nWe would like to clarify that our contribution does not lie in designing new neural architectures, but the use of recognition loss on image processing outputs. To demonstrate the general usefulness of our method, we chose popular neural networks (SRResNet, VGG, ResNet, DenseNet) in the literature for experiments.\\n\\n\\u201cmany simplifications (shortcuts)\\u201d\\nCould you elaborate more on this point? Sorry but we are not sure what \\u201csimplifications\\u201d refer to here. If \\u201csimplifications\\u201d refers to the use of neural networks as processing/recognition models, we acknowledge this point, but we also would like to mention that NNs are currently popular models for such tasks. \\n\\n\\u201ca not clear enough methodology (see below)\\u201d\\nWe answer your detailed questions about our method below. \\n\\n\\u201climited processing & recognition tasks\\u201d\\nWe experimented with three image processing tasks & two recognition tasks (in total six pairs), and transferability between the two recognition tasks, for all three of our main methods and five recognition architectures, thus we respectfully disagree that our used tasks are limited. Most prior works (e.g., [1,2]) only consider one processing/recognition task. In the revision, we also added some results on the ImageNet-C benchmark [3] which has 17 types of corruptions in Appendix D.\\n\\n\\u201cdo not justify in our opinion the main (overarching) work\\u2019s claim\\u201d\\nWe believe the experiments in section 4.2-4.4 and appendix B demonstrated how the performance gain is transferable (as in the claim) under these various conditions.\", \"second_paragraph\": \"1. Definition of \\u201cNetwork\\u201d\\nThe \\u201cnetwork\\u201d mentioned in the paper means a (deep) convolutional neural network, we have clarified this in abstract in the revision (network -> neural network), following your suggestion.\\n\\n2. \\u201cRetraining/Adaptation\\u201d\\nHere \\u201cretraining/adapting\\u201d means training a recognition specifically on the image processing outputs (instead of natural images as usual) or adapting (e.g., using some domain adaptation approaches) the naturally trained image recognition model so that it specifically recognizes images output by an image processing model (e.g., denoised images). This was briefly explained in the sentence before, and we\\u2019ve made it more clear in the revision.\\n\\n3. \\u201clook \\u2018natural\\u2019 to human\\u201d\\nWe agree that denoised or enhanced images are not necessarily more \\u201cnatural\\u201d, and we\\u2019ve changed it to \\u201cfor making the output images more perceptually pleasing to human\\u201d. Thanks for your suggestion.\\n\\n4. 
Definition of recognition.\\nIn the revision, we added a brief explanation about \\u201crecognizable\\u201d in paragraph 3: \\u201cIn other words, recognition systems (e.g., image classifier or object detector) should be able to accurately explain the underlying semantic meaning of the image content.\\u201d In our context, this recognition system could be any neural network, including a neural-based captioning system.\\n\\n5. Specifying as enhancement/restoration.\\nFollowing your suggestion, we have changed in the last paragraph of the introduction from \\u201cWe conduct extensive experiments, on multiple image processing tasks\\u201d to \\u201cWe conduct extensive experiments, on multiple image enhancement/restoration tasks\\u201d. We also added a clarification in the experiment section: \\u201cMore specifically, these are image enhancement or restoration tasks, where usually the target image is an enhanced image or the original image. Other, broader image processing tasks such as pattern detection, segmentation, object extraction are not considered in this work.\\u201d We will also consider updating the title to reflect this modification.\\n\\n6. Figure 1. \\nWe agree that many recognition systems would still recognize this particular noisy bird image correctly, but recognition accuracy is also severely hurt on noisy images compared with normal ones. In this case, we used a modern network architecture (i.e., ResNet-18) and it indeed incorrectly classifies this noisy image as a kite.\\n\\n7. Meaning of \\u201cunsupervised\\u201d.\\nThis \\u201cunsupervised RA\\u201d scheme means that in the recognition loss we regress the probability output of the original image (\\u201csoft label\\u201d), rather than the hard label in the supervised case. \\u201conly \\u2018unsupervised\\u2019 for training model P\\u201d means that in training the image processing model P, we do not need image labels, but the recognition model R could be trained in any manner, either with or without full label supervision, because we assume R is a pretrained model in our problem setting. We\\u2019ve added more clarification in the revision.\"}",
"{\"title\": \"Response to AnonReviewer1 [2/3]\", \"comment\": \"Third paragraph:\\n1. Is your goal a universal \\u201crecognition model\\u201d applicable to anything?\\nOur goal is to learn an image processing model so that the processed images can be more accurately recognized by various downstream recognition models, rather than a processing model that just aims at good-looking images to human as most previous works focus on.\\n\\n2. Terminology\\na) Here \\u201cenhanced machine semantics\\u201d has the same meaning as \\u201cbetter machine recognizability\\u201d, which means images more accurately recognized by machines. We have changed it to \\u201cenhanced machine recognizability\\u201d to make it more clear.\\nb) Yes, it means different neural network architectures, and in this work the recognition is only performed by DNN. We have changed \\u201carchitectures\\u201d into \\u201cneural network architectures\\u201d in the revision. We agree that \\u201cis not specific to any concrete recognition model\\u201d is a too broad description and have changed it to \\u201cis not specific to any concrete neural network-based recognition model\\u201d, but we also would like to point out that neural networks are currently popular choices as image processing/recognition models, and are more related to the ICLR community.\\n\\n3. Intro paragraph 3.\\na) Thanks for asking. Here a more accurate expression than \\u201cargue\\u201d would be \\u201cadvocate\\u201d, and we have changed the word. Also we changed \\u201cmachine semantics\\u201d to \\u201cmachine recognizability\\u201d. Prior works didn\\u2019t explicitly try to optimize for \\u201cmachine semantics/recognizability\\u201d and in this paper we \\u201cadvocate\\u201d it\\u2019s also important, other than human perception.\\n\\nb) This is a good summary of our work but we would also like to add that our work is not trying to optimize recognition accuracy by processing images, it\\u2019s more about making image processing outputs better recognized by recognition models. Traditionally, people try to make the image processing outputs look good to human, but we develop techniques to make them also better recognized by machines. Here recognition is defined as identifying the content/object of the image (e.g., classification, object detection).\\n\\n4. Related work\", \"explanation_of\": \"\\u201c .. we assume we do not have the control on the recognition model, as it might be on the cloud or decided in the future, thus we advocate adapting the image processing model only. This also ensures the recognition model is not harmed on natural images.\\u201d\\n\\nIn past works, when a recognition model is used to guide an image processing model, it was shown that the recognition accuracy on that particular recognition model can be improved. But in this work, we show that we can develop approaches such that the improvement is transferable among different recognition models, as we explained in (#2) above. This transferability property removes the requirement that we must have access/control to the particular recognition model on which we want to improve accuracy, as we can resort to other models in training.\\n\\nAlso, in past works, the recognition model is mostly fine-tuned together, but we do not fine-tune the recognition model jointly with the image processing model (this is what we mean by \\u201cadapting the image processing model only\\u201d) \\u2014 the recognition model are pretrained and only serves as a loss function. 
Its weights are fixed, thus guaranteeing that its accuracy on normal/natural images would not degrade. We have shown in section 5 (\\u201cFine-tuning the Recognition Model\\u201d) that if it is jointly trained, the accuracy on normal images would be hurt.\\n\\n\\u201cto achieve better recover the face identity from low-resolution images\\u201d\\nThanks for pointing out the typo; we have removed the word \\u201cachieve\\u201d.\\n\\n5. \\u201c .. might not look \\u201cnatural\\u201d to machines\\u201d\\nFollowing your advice, we have removed \\u201c .. might not look \\u2018natural\\u2019 to machines\\u201d and only kept the second part of the sentence.\\n\\n6. a) Explanation of \\u201cOne could specifically train a recognition model...\\u201d: please refer to our response #2 in the first paragraph of questions.\\nb) \\u201cMore complicated images (noisier, multiple obstructions etc.) are recognized nowadays and true to actual applications.\\u201d\\nYes, we agree that now some recognition systems can robustly recognize noisy/obstructed content, but many models are still trained on normal images and perform best on such images; in those cases, our methods could be used. Our work is different from those works which aim for robustness of the recognition model, since we focus on the training of the processing model and assume the recognition model is given. We have added related references [3,4,5] in the related work.\"}",
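To make the "R only serves as a loss function" point concrete, here is a minimal training-loop sketch (assuming PyTorch; the model, loader, and loss choices are placeholders rather than the authors' actual code):

```python
import torch

def train_ra(P, R, loader, lam=1.0, lr=1e-4):
    """Recognition-Aware training with a frozen recognizer R."""
    R.eval()
    for p in R.parameters():
        p.requires_grad_(False)          # R is never updated, so its
                                         # accuracy on natural images cannot drift
    opt = torch.optim.Adam(P.parameters(), lr=lr)  # only P is optimized
    mse = torch.nn.MSELoss()
    ce = torch.nn.CrossEntropyLoss()
    for corrupted, target_img, label in loader:
        out = P(corrupted)
        loss = mse(out, target_img) + lam * ce(R(out), label)
        opt.zero_grad()
        loss.backward()                  # gradients pass *through* R into P
        opt.step()
```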
"{\"title\": \"Response to AnonReviewer1 [3/3]\", \"comment\": \"Fourth paragraph:\\n1. About the Transformer model.\\na) Transformer characteristics\\nHere the transformer T takes an input image from the image processing model P, and output an image that is optimized for machine recognition, thus creating the \\u201ctwo instances of images\\u201d situation. With the help of T, the processing model P focuses on optimizing the processing loss and T focuses on optimizing the recognition loss (Eqn. 6). Because the recognition loss is only imposed on the output of T, and the gradient is cut off from flowing back to P, it is as if there\\u2019s no recognition loss to P. Thus the output of P (input of T) is guaranteed not affected as for human perception.\\n\\nIn the last paragraph of section 3.4 we discussed the pros and cons of using this Transformer model instead of using the most simple variant of our method (RA Processing): it can guarantee performance for human perception in terms of output from P, but also create this \\u201ctwo-image\\u201d situation. In practice one can choose whether to use the Transformer based on practical needs.\\n\\nb) Figure 2 (right).\\nThis is mainly due to the space/page width limit and the \\u201crecognition loss\\u201d part is the same as Figure 2 left (dashed box, \\u201cRecognition Loss\\u201d), so we use this to save some space. We are happy to include the full figure for clarity if needed. \\n\\nThank you again for your detailed review! We hope our response addresses your concerns. Any further questions or suggestions are welcome.\\n\\n[1] Classification-driven dynamic image enhancement. Sharma et al. 2018.\\n[2] Task-driven super resolution: Object detection in low-resolution images. Haris et al. 2018.\\n[3] Benchmarking neural network robustness to common corruptions and perturbations. Hendrycks et al. 2019.\\n[4] Episodic training for domain generalization. Li et al. 2019.\\n[5] Generalizing across domains via cross-gradient training. Shankar et al. 2018.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The goal in this work is to improve machine interpretability of images.\", \"the_authors_main_claims_are\": [\"Their proposed approach improves image recognition accuracy even without knowing subsequent recognition tasks and recognition models used to perform them (transferable model to different recognition models/tasks).\", \"For this they propose what they call \\u201cRecognition-Aware\\u201d processing that combines image processing loss and recognition loss.\", \"The approach is evaluated on three image processing tasks with two downstream recognition tasks:\", \"o\\tImage super-resolution, de-noising, and JPEG-de-blocking processing tasks, with\", \"o\\tImage classification and object detection recognition tasks.\", \"The paper is well written and organized, experiments carried are extensive but the reuse of known neural networks, many simplifications (shortcuts), a not clear enough methodology (see below), limited processing & recognition tasks used to support it, do not justify in our opinion the main (over-arching) work\\u2019s claim:\", \"In 3.2 optimizing recognition loss/Last paragraph: \\u201cInterestingly, we find that image processing models trained with the loss of one recognition model R1, can also boost the performance when evaluated using recognition model R2, even if model R2 has a different architecture, recognizes a different set of categories or even is trained for a different task.\\u201d.\", \"The paper would greatly benefit (to understand the context of the work or the explanations provided) from clarification of the many under-defined, not clearly introduced concepts it carries:\", \"Meaning of \\u201cNetwork\\u201d is not clearly defined:\"], \"oabstract\": \"\\u201cimage processing network\\u201d.\", \"ointroduction\": \"\\u201cthe network maps an image to a semantic label\\u201d\\no\\tLater in the paper only networks introduced are deep neural networks. That should be clear from beginning of the paper.\\n-\\t\\u201cRetraining/Adaptation\\u201d in 1st paragraph page 2.\\n-\\tIn 1. Introduction/Paragraph 1: You use \\u201c.. techniques .. have been proposed for making the output images look natural to human\\u201d:\\no\\tNoise is part of nature. A de-noised (smoothed) image is not \\u201cmore natural\\u201d.\\no\\tEnhanced (processed) images are not necessarily \\u201cmore\\u201d natural, rather they take advantage of the human visual perception characteristics to enhance recognition for example.\\n-\\tIn 1. Introduction/Paragraph 3:\\no\\t\\u201c.. of great importance that the processed images be recognizable\\u201d Should explain the concept of image recognition! Because it could be related to contained objects, overall description (for captioning for example) etc. \\n-\\t\\u201cImage processing\\u201d in the context of the paper is intended only as \\u201cimage enhancement for recognition\\u201d. Pattern detection, segmentation, object extraction etc. are not included in this restrictive definition. Should specify for example: image enhancement and restoration. \\n-\\tFigure 1: As an illustration, it\\u2019s completely counterproductive for your discourse as many simple image recognition algorithms would recognize the bird even in the noisy image.\\n-\\tIn 3. 
Unsupervised optimization of recognition loss: The \\u201cunsupervised RA\\u201d process is not clear enough to us, especially the statement:\\no\\t\\u201c.. only \\u201cunsupervised\\u201d for training model P, but the target pre-trained model R can still be trained in full supervision.\\u201d.\\n\\n\\n-\\t\\u201cWe may not know what network architectures (e.g. ResNet or VGG) will be used for inference, what object categories the downstream model recognizes (e.g. animals or scenes), or even what task will be performed on the processed image (e.g. classification or detection)\\u201d. \\no\\tIs your goal a universal \\u201crecognition model\\u201d applicable to anything?\\n-\\tI also have some trouble with the terminology: \\no\\tIn 1. Introduction/Paragraph 4: \\u201cIt is also important that the enhanced machine semantics is not specific to any concrete recognition model\\u201d: \\u201cenhanced machine semantics\\u201d!\\no\\tIn 1. Introduction/Paragraph 4: \\u201c..transferable among different recognition architectures..\\u201d. Does \\u201carchitectures\\u201d refer to deep neural networks (DNN)? If yes, is recognition performed only by DNN? What about the preceding bullet (\\u201cis not specific to any concrete recognition model\\u201d)?\\n-\\tIn 1. Introduction/Paragraph 3: \\no\\t \\u201c.. we argue that image processing systems should maintain/enhance machine semantics\\u201d. Do not see what\\u2019s to argue here?\\no\\t\\u201cRecognition-Aware Image Processing\\u201d is it simply put Image Processing techniques for recognition enhancement (\\u201cRecognition\\u201d still needs to be defined)?\\n-\\tIn 2. Related work:\\no\\t \\u201c .. we assume we do not have the control on the recognition model, as it might be on the cloud or decided in the future, thus we advocate adapting the image processing model only. This also ensures the recognition model is not harmed on natural images.\\u201d Care to explain?\", \"o\": \"\\u201cto achieve better recover the face identity from low-resolution images\\u201d, Typo?\\n-\\tIn 1. Introduction/Paragraph 1:\\no\\t\\u201c .. might not look \\u201cnatural\\u201d to machines\\u201d: Care to explain this concept? \\n*\\tWould advise to just keep the second part of the sentence.\\n-\\tIn 1. Introduction/Paragraph 2: \\u201cOne could specifically train a recognition model only on these output images produced by the de-noising model to achieve better performance on such images, but the performance on natural images can be harmed.\\u201d Care to explain? \\no\\tMore complicated images (noisier, multiple obstructions etc.) are recognized nowadays and true to actual applications.\\n\\n-\\t3.4 using an intermediate transformer/Last paragraph:\\no\\t\\u201c .. that there are two instances for each image (the output of model P and T), one is \\u201cfor human\\u201d and the other is \\u201cfor machines\\u201d.\\u201d:\\n*\\tThe \\u201cTransformer\\u201d characteristics are not clearly defined for the intended output (For machines?).\\n*\\tWhy is output of model T not represented in Figure 2 (Right)?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents several models for visual recognition in the presence of image degradation (e.g., low-resolution, noise, compression artifacts). In the models, an image enhancement network is placed in front of a recognition model and trained together with the recognizer to improve the recognition accuracy as well as to enhance the image quality. The proposed approach is simple, straightforward, yet effective. It has been also shown that the image enhancement module is transferable between different recognition tasks and architectures.\\n\\nAlthough the paper addresses a timely topic and the performance gain is substantial, my current decision is reject mainly because of its weakness in technical novelty and contribution. The proposed models are simple and straightforward combinations of two separate networks, one for image enhancement and the other for recognition. This approach also makes the entire networks overly heavy, and introduces hyper-parameters (e.g., lambda) that have to be carefully tuned. Overall, it was hard to find interesting ideas that future readers may learn from the paper.\", \"other_comments\": \"The 2nd model based on knowledge distillation (KD) is called \\\"unsupervised\\\", which however sounds weird. As already mentioned in the manuscript, the teacher network for KD is trained in a fully supervised manner for the target task, so it cannot be considered as an unsupervised model. Further, the advantage of the 2nd model is marginal in practice.\\n\\nThe advantage of the transformer in the 3rd model is not clearly discussed. It is unknown in the paper why the 3rd model with the transformer works best in the experiments. Also, regarding the main goal of the paper (i.e., image enhancement not for human but for recognition networks), the reason for adopting the transformer is hard to understand.\\n\\nThe degrees of image corruption (e.g., down-sampling, noise, compression) applied during testing are not mentioned at all, although they are important to understand the empirical advantage of the proposed models. \\n\\nThe transferability is one of the most important benefit of the proposed model, but not convincing sufficiently. The proposed models are transferable between different object categories, but the plain models seem to be also transferable, sometime more transparently. Also, it is not clearly discussed what makes the proposed models attaining the transferability.\\n\\nIt would be nice to apply the proposed models to the ImageNet-C benchmark.\\n\\nMissing references\\n- Studying Very Low Resolution Recognition Using Deep Networks, CVPR 2016\\n- Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, ICLR 2019\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Claims:\\n\\nThe paper presents a concept of \\\"recognition-aware (RA) image processing\\\": when one enhances image in a some way, not only human judjement should be taken into account, but also performance of various computer vision application using that image.\\n\\nAs an example of processing tasks, authors take super-resolution, denoising and JPEG-artifacts removal. Downstream applications covered are image classification and object detection.\", \"authors_propose_a_several_training_schemas_to_solve_this_problem_and_discuss_a_limitations_of_each_one\": [\"\\\"simple\\\" preprocessing, when the only image enhancement loss is optimized\", \"\\\"RA\\\" joint optimization of recognition and enhancement loss (supervised and unsupervised)\", \"a variant when two images are created: one for human and one for machine.\", \"****\"], \"recommendation\": \"strong accept\\n\\n****\", \"comments\": \"\", \"experiments_are_vast_and_performed_on_a_variety_of_cnn_architectures\": \"ResNets, DenseNet ant VGGNet.\\n Because one cannot predict, which computer vision tasks will be needed in the future, the natural question arise: how the results got for one set of tasks, architectures and image enhancement types transfer to another. Paper carefully studies this aspect as well.\\n \\n Overall paper is well written and is pleasure to read. While reading, I made notes to ask in review - just to see the my questions answered in a next section.\\nAuthors also provide source code for training. I haven`t run them though, but glanced through them.\", \"weaknesses\": \"I cannot really find a significant one. As a minor points:\\n - I would recommend to cite not the last papers for image enhancement porblems themselves like super-resolution and denoising: these are old problems with rich history, e.g.\\n \\n L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms Physica D, 60 (1992), pp. 259\\u2013268.\\n \\n - \\\"Transformer\\\" is probably bad name for deep learning component, as it is already widely used for a specific seq2seq architecture \\n\\n\\n****\", \"after_rebuttal\": \"I am now even more convinced that paper should be accepted.\"}"
]
} |
BklOXeBFDS | Transfer Active Learning For Graph Neural Networks | [
"Shengding Hu",
"Meng Qu",
"Zhiyuan Liu",
"Jian Tang"
] | Graph neural networks have been proven very effective for a variety of prediction tasks on graphs, such as node classification. Generally, a large number of labeled data are required to train these networks. However, in reality it can be very expensive to obtain a large number of labeled data on large-scale graphs. In this paper, we study active learning for graph neural networks, i.e., how to effectively label the nodes on a graph for training graph neural networks. We formulate the problem as a sequential decision process, which sequentially labels informative nodes, and train a policy network to maximize the performance of graph neural networks for a specific task. Moreover, we also study how to learn a universal policy for labeling nodes on graphs with multiple training graphs, and then transfer the learned policy to unseen graphs. Experimental results in both settings, a single graph and multiple training graphs (the transfer learning setting), demonstrate the effectiveness of our proposed approaches over many competitive baselines. | [
"Active Learning",
"Graph Neural Networks",
"Transfer Learning",
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=BklOXeBFDS | https://openreview.net/forum?id=BklOXeBFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ylPSjiQo4V",
"S1gVscRKir",
"Bylru5RYjB",
"S1xwX5RYoH",
"BklvSUYRFH",
"SJejFIWTFH",
"SJev7G8sYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743488,
1573673627773,
1573673580848,
1573673503230,
1571882558888,
1571784323415,
1571672606690
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2217/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2217/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2217/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2217/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2217/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2217/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Paper proposes a method for active learning on graphs. Reviewers found the presentation of the method confusing and somewhat lacking novelty in light of existing works (some of which were not compared to). After the rebuttal and revisions, reviewers minds were not changed from rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your appreciation and the valuable comments!\\n1.\\tWe will try to further improve the results of our method in the paper.\\n2.\\tWhen working on this project, we didn\\u2019t notice the IJCAI paper you mentioned. Currently, we find it hard to run the method on our datasets, as the codes of the methods are not provided online. We will try to compare with the work in the future.\\n3.\\tSince the number of training point is so limited in the active learning setting, Under different kinds of initialization, the accuracy fluctuate dramatically. But since the reported average is calculated from 50 experiments with different random seeds, the mean accuracy is stable, so we can still argue that our method is superior to AGE. \\n4.\\tThe possible reason for the good performance of DAG-Joint is that the two training graphs (Cora and Citeseer) are similar. In the future, we will try to conduct experiments on more diverse graphs to validate the effectiveness of DAG-Distill.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your appreciation of our work and the valuable suggestions!\\n1.\\tAs you mentioned, the state in our framework is defined on the graph level. Based on the state (i.e., the current graph), we learn a set of representations for nodes, and further compute the probability of each action (i.e., a node)\\n2.\\tIt is a very good point to do some significance test. We will try to improve the results of our method, and also conduct significant tests in the future.\\n3.\\tThere are 20 graphs in the PPI datasets. To better validate the effectiveness of our method, we split the 20 graphs into 4 groups (i.e., 0th \\u2013 4th graphs in the first group, 5th-9th graphs in the second group...), and we evaluate our method on each group.\\n4.\\tWe will add the figure of the curve in our Appendix.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for the insightful comments!\\n1.\\tActive learning for graph neural networks is a challenging problem, which requires analyzing the informativeness of each node, and hence it is quite important to incorporate all different kinds of heuristics. The main focus of the paper is not on the theoretical analysis. Instead, we aim at designing a method to integrate all these heuristics for prediction. Nevertheless, deriving theoretical proofs is still an important suggestion, and we will leave it as a future work.\\n2.\\tConsidering the correlation of data is a very intuitive suggestion. Indeed, the correlation of data is considered in our approach. This is because we leverage a GNN in our policy network, which is able to aggregate information from the neighbors of each node, and thereby capture the dependency of different nodes.\\n3.\\tWe will try to improve the writing and make it clearer in the revised draft.\\n4.\\tWe have added the details of the heuristic features in the updated draft.\\n5.\\tIn our paper, we formalize the problem as a sequential decision process, therefore we select different nodes step-by-step, instead of selecting all of them at once. Sorry about the confusion in writing, and we will keep polishing the writing.\\n6.\\tI guess that you are wondering the performance gain of using N_seed + N_budget labeled nodes for training over using N_seed labeled nodes for training. The results are presented in the figure 3. The accuracy of training with only the N_seed nodes is lower than the starting point of the curve corresponding to \\u201crandom\\u201d.\\n7.\\tThank you for pointing it out! We have fixed them in the revised draft.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Positive\\n1. The paper studies a universal policy for labeling nodes on graphs with multiple training graphs which can be transferred to new unseen graphs.\\n2. The paper focuses on minimizing human efforts in obtaining labeled data. The model is based on active learning and transfer learning. \\n\\nNegative\\n1. The proposed method combines existing models as the solution, which is heuristic and lacks persuasive theoretical proofs. \\n2. In graphs, data (nodes) to be labeled are highly correlated. However, there is no method for solving this challenge.\\n3. In Section 3.2, ACTIVEL EARNING ON A SINGLE GRAPH, the authors formalize the problem, i.e., learning a policy for selecting a set of nodes for annotation, as a sequential decision process, and the reinforce algorithm is applied to optimize the objective function. However, explanation about the active learning is confusing. More details are needed to explain their respective goals and to explain how to integrate active learning and reinforcement learning. \\n4. The authors claim that the details of heuristic features are represented in Appendix, please add these information.\\n5. The settings of active learning need more consideration. The total budget for active learning is set as $5\\\\times N_{class}$. How to choose these nodes? Is it to select all samples at once or in batches during iterative epoch? If the samples are selected in batches, what is the specific experimental setting?\\n6. The experimental method about active learning. The paper focuses on minimizing human efforts in obtaining labeled data. Compared to the results of the model before selecting, how significant is the improvement after selecting all the node in the budget? More experiments are needed.\\n7. Minor format issue: the fonts in Table 2 and Table 3 are different.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Thank you for the author response.\", \"original_review\": \"This paper presents a method for active learning on graphs, including a novel setting of transferring an active learning policy to unseen graphs. The problems tackled here are important and the method is shown to improve over previous work in some cases. On the down side, the evaluation may be missing one important method of comparison, the reasons for the proposed approach winning over previous work are not made explicit, and the empirical advantage of the approach is inconsistent (by my count, in the majority of cases the F1 advantage over previous work is within the standard error). This paper has strengths but I feel it needs further refinement before publication.\\n\\nThe introduction of the paper claims that existing approaches for active learning on graphs are domain-specific and may not apply well to new domains. But later, in the experiments, different reasons for the proposed approach's wins are given (in particular, on large graphs the proposed approach does relatively better vs. AGE which the paper suggests is due to the more complex nonlinear models used in the proposed approach). In general, this paper\\u2019s approach tends to win over the primary baseline (the AGE method from Cai et al. 2017), but the wins are relatively small and inconsistent (esp. taking into account the standard error) whether the methods are evaluated in the single-graph setting or the transfer learning setting (of the homologous or heterogeneous variety). If the limitation of previous work was domain-specificity, I would expect to see much larger wins on the transfer learning setting. In general, an analysis that explains more what is driving the gains of this approach over the AGE approach would help us know how to build on this paper\\u2019s method in future work.\\n\\nI was curious why the paper does not compare against the following work, which also presents an approach that wins over AGE:\\n\\u201cActive Discriminative Network Representation Learning,\\u201d Gao et al., IJCAI 2018\\n\\nLastly, the distillation-based approach, which learns graph-specific policies that are trained to fit their target graphs and to minimize their KL divergence from a single global shared policy, was interesting. The fact that it doesn\\u2019t work much better than the joint policy is somewhat disappointing, but it\\u2019s still interesting.\", \"minor\": \"Sec 3.2: unclosed parenthesis in first paragraph\\n\\u201cMoreover, we also average the struc2vec features of all previously annotated nodes to capture the historical information\\u201d -- since the model has only node-level features, I didn\\u2019t understand how this average across multiple nodes was fed in as a node feature. Is it used in all nodes? Only annotated nodes?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors proposed a new method for active learning on node classification with GCN. RL based framework is used. The labeled graph is treated as state and the action is labeling the nodes. Validation accuracy on the hold out set is used as reward. Further transfer learning framework is also proposed, where graph-specific policy and master policy are jointly learned. Experiments on benchmark dataset show the effectiveness of proposed method compared to several baselines.\\n\\nThe idea of applying RL on active learning with GCN seems to be new and it sounds natural and technically. Also the idea of transferring the learned policy to new graphs make sense for similar graphs. However, the empirical results are a bit weak and not convincing enough for me. Please find the detailed comments below.\\n\\n1. Is the state defined on node level or graph level? Eq (1) is defined on node level, but I believe it should be a global policy on graph.\\n2. All the results have a rather high variance. To compare such results, the authors should make a significant test. Otherwise, one cannot say that the performance from one method is better than the other. Especially, for Table 3 and 4, DAG-distill performs not better than DAG-Joint.\\n3. What do \\\"0-4, 5-9,...\\\" mean in Table 3?\\n4. Can the authors show curve as in figure 2 for table 2 and 3. It is important to see the progress for active learning.\\n5. Why the results differ so much in Figure 2 for only 1 query? I believe the first one should be randomly picked. Thus all the methods should perform equally.\\n6. \\\"Graphs encode the relations between different objects and are ubiquitous in real-world.\\\" Typo in first sentence.\\n7. homologous or homogenous?\"}"
]
} |
rkedXgrKDH | Trajectory growth through random deep ReLU networks | [
"Ilan Price",
"Jared Tanner"
] | This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks. We generalise existing results, providing an alternative, simpler method for lower bounding expected trajectory growth through random networks, for a more general class of weight distributions, including sparsely connected networks. We illustrate this approach by deriving bounds for sparse-Gaussian, sparse-uniform, and sparse-discrete-valued random nets. We prove that trajectory growth can remain exponential in depth with these new distributions, including their sparse variants, with the sparsity parameter appearing in the base of the exponent. | [
"Deep networks",
"expressivity",
"trajectory growth",
"sparse neural networks"
] | Reject | https://openreview.net/pdf?id=rkedXgrKDH | https://openreview.net/forum?id=rkedXgrKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ARsXkGTci",
"SJgm51BhiH",
"HyxQGXj4oH",
"HkeLXYbXsS",
"S1e66dZQsS",
"r1eOLBb7iB",
"BJemDG-XiH",
"Skv8-W7ir",
"rJen4nfhqB",
"r1xMfg2LKS",
"HkxWMcyQtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743458,
1573830539501,
1573331722893,
1573226781591,
1573226693195,
1573225807957,
1573225051387,
1573224782515,
1572772915723,
1571368970240,
1571121673244
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2216/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2216/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2216/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2216/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2216/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2216/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2216/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2216/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2216/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2216/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This article studies the length of one-dimensional trajectories as they are mapped through the layers of a ReLU network, simplifying proof methods and generalising previous results on networks with random weights to cover different classes of weight distributions including sparse ones. It is observed that the behaviour is similar for different distributions, suggesting a type of universality. The reviewers found that the paper is well written and appreciated the clear description of the places where the proofs deviate from previous works. However, they found that the results, although adding interesting observations in the sparse setting, are qualitatively very close to previous works and possibly not substantial enough for publication in ICLR. The revision includes some experiments with trained networks and updates the title to better reflect the contribution. However, the reviewers did not find this convincing enough. The article would benefit from a deeper theory clarifying the observations that have been made so far, and more extensive experiments connecting to practice.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Final revision uploaded - includes experiments with trained networks\", \"comment\": \"We have now uploaded a further revised submission with an extra two figures in Appendix C.2 which show the results of some experiments on trained networks, and a pointer to these figures at the end of Section 4 on page 9.\\n\\nThe results indicate, firstly, that even in trained networks, trajectory growth through the network can still be exponential, and secondly, that the expected growth factor still depends on the standard deviation of the trained weights in an approximately linear relationship - both of which resemble the results on random nets. It also seems that the growth factor from one layer to the next tends to be somewhat larger for trajectories connecting points which belong to different classes, than trajectories connecting points from the same class. However, in contrast to the results for random networks, expected trajectory growth through trained networks appears not to be trajectory independent.\"}",
"{\"title\": \"Revised submission uploaded\", \"comment\": \"We thank the reviewers again for their thorough reading of our paper and thoughtful comments.\\n\\nWe have uploaded a revised submission which makes the changes mentioned in our direct responses to each review. In particular:\\n\\n\\u2022\\tWe changed the title to \\u201cTrajectory growth lower bounds for random sparse deep ReLU networks\\u201d\\n\\u2022\\tWe clarified the condition involving $M$ and $u$ in Theorem 2, and shifted to Section 1.1 (top of page 3) the definition of the restriction of a row of the weight matrix to its P-distributed entries.\\n\\u2022\\tWe have highlighted the fact that the lower bounds give some principled guidance on the choice of the combination of $\\\\sigma_w$ and $\\\\alpha$ (on page 4, below Corollary 3) depending on the growth properties you want for the network at initialisation. We also highlight (on page 9, end of paragraph 2) how the numerical experiments give similar guidance on these parameter choices which may accord more precisely with what occurs in practice, as compared with the theoretical lower bounds.\\n\\u2022\\tWe have highlighted that the universal dependence of the growth factor on $\\\\sigma_w$ and $\\\\alpha$ across distributions (shown in Figures 3a and 3b) remain true if you repeat the experiments with different trajectories and different, random datapoints (page 9, paragraph 2). We have included Figures for these experiments in Figure 5, Appendix C.\\n\\nThe proposed experiments on trained networks are still in the works and as of yet are not included in the revised submission.\"}",
"{\"title\": \"Response to Review #2, Part 2\", \"comment\": \"Our responses continue below:\\n\\n\\u2022\\t\\u201cHowever, it looks everything is done in a normal manner and nothing special happens\\u201d\\n\\nThe proof is designed to be clear, easy to understand and generalise. This proof is in no way an obvious extension of the proof of Raghu et al. which is substantially more complex than it need be, and this complexity is linked to dense Gaussian weight matrices.\\n\\n\\u2022\\t\\u201cWhat does the \\\"restriction\\\" of $\\\\hat{w}_i$ mean?\\u201d\\n\\nAs per our definition of the sparse random neural network, some entries of the weight matrix will be distributed according P, and others will be zero. The phrase \\u201c$\\\\hat{w}_i$ is restriction of any row of a weight matrix to its P distributed entries\\u201d means that $\\\\hat{w}_i$ is the vector you get when you take a row of the weight matrix delete all the zero entries, i.e. a vector containing only the entries which are P-distributed. We will add a suitable definition of the term \\u2018restriction\\u2019 to clarify this. \\n\\n\\u2022\\t\\u201cDoes the condition $\\\\mathbb{E}[] >= M \\\\|u\\\\|$ hold for any $u$ and $M$?\\u201d\\n\\nThe condition on $\\\\hat{w}_i$ must hold for some constant $M$ for any constant vector $u$. Thank you for pointing out the lack of clarity here, we will fix the phrasing to make this clear. \\n\\n\\u2022\\t\\u201cFinally, I have a simple question about the relationship between the scale of the output and the trajectory length. Let W be a weight and x be an input. When the smallest singular value of $W$ is $m$, $x$ is amplified at least by $m$, i.e., $\\\\|Wx\\\\| >= m\\\\|x\\\\|$ for any $x$. This means that (a part of) the trajectory growth is explained by the smallest singular value of the sparse random matrix $W$. Can you clarify the difference?\\u201d\\n\\nThank you for raising this. Indeed, one could lower bound the length as a product of (suitably modified) lower singular values of the weight matrices. Such a bound would be a lower bound on the trajectory length rather a lower bound on the *expected* trajectory length. While also very interesting, this is a different phenomenon and one which would have an even smaller lower bound. In particular, consider as an example $\\\\sigma_w=1/\\\\sqrt{k}$ and the ReLU setting to zero half of the pre-activation values. In such a case the expected smallest singular value is $1-\\\\sqrt{1/2}\\\\approx 0.29$ (see [1]), in contrast with the value of $1/\\\\sqrt{2\\\\pi}\\\\approx 0.4$ from $\\\\alpha=1$ and $\\\\sigma_w=1/\\\\sqrt{k}$ in Corollary 1. Moreover, for a rigorous bound, as in our case, one would need to justify that the expected lower singular with half the rows active would be appropriate (it gets much smaller as more rows become active), otherwise one would need to include further information about the distribution of rows and possibly the lower restricted isometry constants of the weight matrices. Lastly, such bounds would be in terms of expected values of the lower singular values of random matrices, which are in general less precise for non-Gaussian distributions than the bounds we obtain which only require considering the bound involving $M$ in Theorem 2. \\n\\n\\n\\n[1] Rudelson, Mark, and Roman Vershynin. \\\"Smallest singular value of a random rectangular matrix.\\\" Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences 62.12 (2009): 1707-1739.\"}",
"{\"title\": \"Response to Review #2, Part 1\", \"comment\": \"Thank you for your detailed reading of our work, and for your positive comments regarding the clarity of our results.\\n\\nLet us respond to the specific concerns listed about our paper.\\n\\n\\u2022\\t\\u201cThe motivation is not strong enough and it makes the entire work somehow incremental.\\u201d; \\u201cI feel the extension is reasonable but it is not a big \\\"jump\\\" from the original work, and the originality is less significant.\\u201d\\n\\nAllow us to motivate the significance and innovation of our work further. The crucial point is that even after you read and understand the prior work by Raghu et al., there is no way to use their approach to achieve a more general result like ours. As such our work is not simply an extension of their work or their proof, but rather a proof of a distinct, more general result, albeit inspired by the idea of considering trajectory growth (for which they deserve full credit). To be precise, the innovation of our proof is the strategy for explicitly accounting for the statistical dependence of an infinitesimal piece of the trajectory $dz$ on $z$ in such a way which allows us to prove a more general and more precise result. Conversely, the key fundamentals of the proof by Raghu et al. *limit* extension to more general cases: In particular, their proof is based on splitting the components of $dz$ into parts which are perpendicular and parallel to $z$ (which we do not do). One consequence of this strategy is that it requires that, after splitting the weight matrix into parallel and perpendicular parts $W_\\\\top$ and $W_\\\\|$, that these matrices are independent random matrices (their lemma 2). Everything from this point on in their proof relies on this fact \\u2013 and *this is only true for Gaussian random matrices*. Thus, their proof strategy has no clear extension to sparse matrices and other distributions. Ours is not an extension *of their proof*, but rather is a completely different method to derive more general and more accurate bounds. A consequence of this is that in the specific standard Gaussian case, our work provides an alternative and more straightforward proof of a similar result to theirs. We apologise if in attempting to \\u2018give credit where credit is due\\u2019, we did not make clear enough in our writing the extent to which Raghu et al.\\u2019s work did not in any way constitute or provide a blueprint for the key innovation of this work, but rather was the inspiration and the first such bounds in the case of dense Gaussian weight matrices.\\n\\nFinally, it is only by considering examples from this more general class of distributions that we empirically observe the common and exclusive dependence of length growth on standard deviation and sparsity level, independent of distribution choice (Figures 3a and 3b).\\n\\n\\u2022\\t\\u201cHowever, I am not so excited about these results because I cannot find a practical value from them. For example, (the current form of) Corollary 1 does not tell us how to control the sparsity level $\\\\alpha$ to maintain some accuracy-sparsity tradeoff.\\u201c\\n\\nThe bounds show how one can trade-off between weight variance and sparsity so as to control lower bounds on the expected length growth. When the base in lower bound is greater than one the network will necessarily generate structures with exponential growth, even if that is expressly not desired. 
The bounds give formulae by which a user can adjust these parameters to, potentially, avoid this. For instance, in the case of sparse-uniform initialisation, with uniform weights sampled from $U(-10/\\\\sqrt{k}, 10/\\\\sqrt{k})$, then if one sets the sparsity parameter $\\\\alpha < 0.56$, the lower bound will converge to zero and the network need not (but might as this is a lower bound) have an exponential growth in trajectory length. To the best of our knowledge no such information was previously available to practitioners. Moreover, our numerical experiments provide further practical insight for practitioners. For example, Figure 3b considers the initialisation scheme with $\\\\sigma_w = 2/\\\\sqrt{k}$. We see that the empirically observed growth factor from one layer to the next is approximately 1.5 when the matrices are dense ($\\\\alpha=1$), while the growth factor is 1 with $\\\\alpha \\\\approx 0.5$, and less than one as $\\\\alpha$ decreases further. This gives clear guidance on a choice of $\\\\alpha$ in this initialisation scheme depending on whether you want exponential growth, shrinkage to zero, or approximately constant trajectory length through the network in expectation. We will gladly include a brief explanation of these implications of our results and experiments in an updated draft.\\n\\nMoreover, outside direct application, these results give underpinning theory on the random object generated by a random network. This is of interest from a purely mathematical point of view. We expect this will also be helpful when considering random initialisation of GANs as a data model, where we show how one can control the complexity of the data model through the variance and sparsity.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your thorough reading of our paper, and for your positive feedback on the paper\\u2019s writing, results, and timeliness.\\n\\nLet us respond to specific points raised in your review.\\n\\n\\u2022\\t\\u201cI have a question which may be invalid. For Figure 3, the observed expectation matches perfectly with the lower bound for all three distributions. This seems amazing, have the authors try with other dataset or other settings to do this experiment? Did it always match perfectly?\\u201d\\n\\nThe observed expectations (solid lines) in Figure 3 match perfectly across all three distributions, though they are all above their distributions' lower bounds (dashed lines). Yes, this is indeed remarkable, and yes, this is reproduced when choosing different datapoints, or indeed even points chosen uniformly at random, and connecting them with a straight line, or paths which are arcs in two or more dimensions. We think that this is a significant empirical observation, and one of the paper\\u2019s contributions, insofar as it indicates that there may be a universal dependence in expectation (within this class of distributions) on the standard deviation and sparsity level, such that the specific distribution is not important. We will edit this section in the paper to highlight that these variations of the experiments produce the same results, and include relevant extra figures in the appendices. Thank you for the suggestion. We are also considering some experiments on trained networks which might illustrate whether this is true more generally in over-parameterised networks.\\n\\n\\u2022\\t\\u201cIt also seems that most derivation and insights are from previous literature Raghu 2017, which makes the contribution of this submission limited.\\u201d \\n\\nIt is true that Raghu et al. deserve full credit for (1) the idea of considering expected lower bounds of trajectory growth and (2) doing so by considering the growth of a small piece of the trajectory, both of which inspired our work. However, a direct comparison of the proofs in our manuscript with those by Raghu et al. would show that it is not the case that \\u201cmost derivation and insights are from Raghu et al.\\u201d. On the contrary, our result is a more general one, and our proof is fundamentally different. To be precise, the innovation of our proof is the strategy for explicitly accounting for the statistical dependence of an infinitesimal piece of the trajectory $dz$ on $z$ in such a way which allows us to prove a more general and more precise result. Conversely, the key fundamentals of their proof *limit* extension to more general cases: In particular, their proof is based on splitting the components of $dz$ into parts which are perpendicular and parallel to $z$ (which we do not do). One consequence of this strategy is that it requires that, after splitting the weight matrix into parallel and perpendicular parts $W_\\\\top$ and $W_\\\\|$, that these matrices are independent random matrices (their lemma 2). Everything from this point on in their proof relies on this fact \\u2013 and *this is only true for Gaussian random matrices*. Thus, their proof strategy has no clear extension to sparse matrices and other distributions. Ours is not an extension *of their proof*, but rather is a completely different method to derive more general and more accurate bounds. 
A consequence of this is that in the specific standard Gaussian case, our work provides an alternative and more straightforward proof of a similar result to theirs. We apologise if in attempting to \\u2018give credit where credit is due\\u2019, we did not make clear enough in our writing the extent to which Raghu et al.\\u2019s work did not in any way constitute or provide a blueprint for the key innovation of this work, but rather was the inspiration and the first such bounds in the case of dense Gaussian weight matrices.\"}",
"{\"title\": \"Response to Review #4, Part 2\", \"comment\": \"Our responses continue below:\\n\\n\\u2022\\t The aforementioned critiques were listed after the statement \\u201cI do not believe there are enough results to constitute an accept.\\u201d \\n\\nBesides the issue of experiments on trained networks, we feel that this comment may be a consequence of framing our work as \\u201cextending the proof of Raghu et al. (2017)\\u201d. We apologise if we have incorrectly framed our contribution in this way. The impression we have created that we have simply extended their proof is misleading in the sense that it implies that once you have read and understood Raghu et al.\\u2019s proof, there is a natural extension to the more general case we consider. On the contrary, the key fundamentals of their proof *limit* extension to more general cases: In particular, their proof is based on splitting the components of an infinitesimal piece of the trajectory $dz$ into parts which are perpendicular and parallel to $z$ (which we do not do). One consequence of this strategy is that it requires that, after splitting the weight matrix into parallel and perpendicular parts $W_\\\\top$ and $W_\\\\|$, that these matrices are independent random matrices (their lemma 2). Everything from this point on in their proof relies on this fact \\u2013 and *this is only true for Gaussian random matrices*. Thus, their proof strategy has no clear extension to sparse matrices and other distributions. Ours is not an extension *of their proof*, but rather is a completely different method to derive more general and more accurate bounds. A consequence of this is that in the specific standard Gaussian case, our work provides an alternative and more straightforward proof of a similar result to theirs.\\n\\nWhat we were attempting to credit Raghu et al. with is (1) the idea of considering expected lower bounds of trajectory growth and (2) doing so by considering the growth of a small piece of the trajectory, a starting point we share. Beyond this, however, our result is a more general one, and the proofs are fundamentally different, in particular in our explicit accounting for the conditional dependence of $dz$ on $z$, and everything which follows, which is the real key to the generality of our result. We encourage the reviewer to contrast the proofs in Raghu et al. (2017) with those in our manuscript.\\n\\n\\u2022\\t\\u201cI also think the title of the paper is too general for the specific results contained in the paper, namely sparsity should at least be mentioned in the paper.\\u201d \\n\\nThank you for raising this. We take your point that a number of very different papers could use the same title we began with. In particular one could consider a more experimental investigation, and sparsity is not mentioned which, while the results do include the case of dense networks ($\\\\alpha=1$), sparsity is one of our innovations that should be highlighted. Taking this into account we propose changing the title to: \\u201cTrajectory growth lower bounds for random sparse deep ReLU networks\\u201d. We choose this title to convey both that the results are theoretical lower bounds and to emphasise the sparsity. Thank you.\"}",
"{\"title\": \"Response to Review #4, Part 1\", \"comment\": \"Thank you for thoroughly reading our submission and your kind remarks regarding the writing and clarity of the proofs.\", \"let_us_speak_to_the_weaknesses_listed\": \"\\u2022\\t\\u201cI would have liked to see analysis on trained networks as done in Raghu (2017) for example.\\u201d \\n\\nThe manuscript\\u2019s focus is on developing mathematically rigorous theoretical lower bounds on the expected length of a trajectory passed through a random deep network. The numerical experiments included are to illustrate the parameter dependencies of our bounds in the main Corollaries 1-3, and in doing so to show any gaps between the theory and results observed in practice. Specifically, in Figure 2, to show the exponential dependence on depth given different sparsity parameters, in Figure 3(a) to illustrate the growth factor\\u2019s dependence on the weight variance and remarkable similarity across the different distributions (as noted too by Reviewer 3), in Figure 3(b) to show the $\\\\alpha$ dependence of the growth factor, again with remarkable similarity for different distributions, and in Figure 4 to explain the qualitative difference between the form of the bound and the observed behaviour in the $\\\\alpha$ dependence. While we appreciate that Raghu et al. included some numerical experiments which used trained or partially trained networks, these experiments were to convey different phenomena discussed in the more wide reaching paper (for example, Figure 6 looks at the impact of noise in different layers of a trained network on accuracy; Figure 7 shows the accuracy of nets which have only one layer trained, for different choices of trained layer; Figure 8, 9, and 18 show that training without batch normalisation increases trajectory length, suggesting a potential role of batch normalisation and motivating their own regularisation method). We have instead written a focused paper whose aim is to give a general framework by which the exponential expected length growth can be derived for large class of distributions (which includes classical Gaussian results of Raghu et al., but also include uniform and discrete distributions, and in all cases allow the weight matrices to be sparse). Experiments on trained networks like those mentioned above would be very interesting, but would not shed any further light on our bounds or parameter relationships. More generally experiments on trained networks are complicated by there being many notions of what would constitute an \\u2018expectation\\u2019 over a trained network\\u2019s weights, and secondly, because the results would be conflated by the issue of how much of the trajectory which was passed through the network was within or between classes of the trained network (for example, we would not necessarily expect that portions of a trajectory *within a specific class* would grow exponentially through a trained network.) We are currently considering what experiments on trained networks might be most illuminating given these difficulties. One option is to try show whether the $\\\\sigma_w$ dependence remains the same in different trained, over-parameterised networks; we aim to complete these and report back before the end of the review discussion period. We welcome suggestions from the reviewer.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary: The authors examine trajectory growth of deep ReLU neural networks whose weights come from a (random) \\u201csparse-Gaussian\\u201d, \\u201csparse-uniform\\u201d, and \\u201csparse-discrete\\u201d distribution. They give definitions of these distributions in the paper. They do this by extending the proof of Raghu (2017) so that it can handle more general distributions than the standard Gaussian. They also provide some numerical experiments verifying their theories.\", \"strengths\": \"The paper is well-written and the proofs are clearly explained. I\\u2019m grateful that the authors specifically mentioned where their proof deviates from the original and they clearly delineate how their proof method extends Raghu (2017)\", \"weaknesses\": \"This is an interesting direction, but I do not believe there are enough results to constitute an accept. If the authors are following Raghu (2017), then I would have also liked to see analysis on trained networks as done in Raghu (2017) for example. I also think the title of the paper is too general for the specific results contained in the paper, namely sparsity should at least be mentioned in the title.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This submission proposes an alternative way to lower bound the trajectory growth through random networks. It generalizes to a variety of weights distributions. For example, the authors showcase their approach on sparse-Gaussian, sparse-uniform, and sparse-discrete-valued random nets and prove that trajectory growth can be exponential in depth with these distributions, with the sparsity appearing in the base of the exponential.\\n\\nI give an initial rating of weak accept because (1) the paper is well written and well-organized. (2) the numerical simulation results support the claims and proofs. (3) the investigation on sparsely connected networks seems timely. However, I'm not an expert in this area. It also seems that most derivation and insights are from previous literature Raghu 2017, which makes the contribution of this submission limited. \\n\\nI have a question which may be invalid. For Figure 3, the observed expectation matches perfectly with the the lower bound for all three distributions. This seems amazing, have the authors try with other dataset or other settings to do this experiment? Did it always match perfectly?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, how the length of the trajectory in the input space is amplified by a ReLU neural network is analyzed. Specifically, the paper studied the case when the weights and biases are sparse random matrices. Some theoretical lower bounds are derived and are also empirically checked.\\n\\nThough the results are interesting, I am slightly lean to the rejection side. The main reason is that the motivation is not strong enough and it makes the entire work somehow incremental.\\n\\nAs described in the paper, the analysis of the trajectory length of NN has been initiated by Raghu et al. (2017). Although Raghu et al. considered the case of densely connected NNs, this work extended the notion to the sparsely connected NNs. I feel the extension is reasonable but it is not a big \\\"jump\\\" from the original work, and the originality is less significant. A good point of the derived results is that they are simple and easy to understand the dependency of the variance of the weights and the sparsity level. However, I am not so excited about these results because I cannot find a practical value from them. For example, (the current form of) Corollary 1 does not tell us how to control the sparsity level \\\\alpha to maintain some accuracy-sparsity tradeoff. \\n\\nFrom the technical side, it is nice the proof is written in line by line. However, it looks everything is done in a normal manner and nothing special happens. Also, the condition of Theorem 2 is unclear. What does the \\\"restriction\\\" of \\\\hat{w}_i mean? Does the condition E[] >= M ||u|| hold for any u and M? \\n\\nFinally, I have a simple question about the relationship between the scale of the output and the trajectory length. Let W be a weight and x be an input. When the smallest singular value of W is m, x is amplified at least by m, i.e., ||Wx|| >= m||x|| for any x. This means that (a part of) the trajectory growth is explained by the smallest singular value of the sparse random matrix W. Can you clarify the difference?\"}"
]
} |
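The exchange in the record above centres on the expected growth of a trajectory's length as it passes through a random sparse ReLU network. The following is a minimal NumPy sketch of that quantity, our own illustrative assumption rather than code from the thread: the function names, width `k`, `depth`, and `n_points` are hypothetical choices, and the defaults `sigma_w = 2/sqrt(k)`, `alpha = 0.5` mirror the setting where the rebuttal reports a per-layer growth factor near 1.

```python
import numpy as np

def trajectory_length(points):
    # Sum of Euclidean distances between consecutive points along the path.
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def sparse_gaussian_layer(k, alpha, std, rng):
    # Each entry is N(0, std^2) with probability alpha and exactly zero otherwise,
    # matching the sparse-Gaussian weight model discussed in the thread.
    return rng.normal(0.0, std, size=(k, k)) * (rng.random((k, k)) < alpha)

def growth_factors(k=512, depth=8, alpha=0.5, sigma_w=2.0, n_points=200, seed=0):
    rng = np.random.default_rng(seed)
    # A straight line between two random endpoints, discretized into n_points.
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    z = (1.0 - t) * rng.normal(size=k) + t * rng.normal(size=k)
    lengths = [trajectory_length(z)]
    for _ in range(depth):
        W = sparse_gaussian_layer(k, alpha, sigma_w / np.sqrt(k), rng)
        z = np.maximum(z @ W.T, 0.0)  # ReLU layer; biases omitted for simplicity
        lengths.append(trajectory_length(z))
    # Per-layer growth factors, to compare against the analytic lower bounds.
    return np.array(lengths[1:]) / np.array(lengths[:-1])

print(growth_factors())
```

Averaging the factors over several seeds approximates the expectation whose lower bounds the rebuttal derives; sweeping `alpha` at fixed `sigma_w` reproduces the qualitative behaviour the authors describe for Figure 3(b).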
SyxD7lrFPH | Frequency Pooling: Shift-Equivalent and Anti-Aliasing Down Sampling | [
"Zhendong Zhang",
"Cheolkon Jung"
] | The convolutional layer utilizes the shift-equivalent prior of images, which makes it a great success for image processing. However, commonly used down sampling methods in convolutional neural networks (CNNs), such as max-pooling, average-pooling, and strided convolution, are not shift-equivalent. This destroys the shift-equivalent property of CNNs and degrades their performance. In this paper, we propose a novel pooling method which is \emph{strictly shift-equivalent and anti-aliasing} in theory. This is achieved by the (inverse) Discrete Fourier Transform, and we call our method frequency pooling. Experiments on image classification show that frequency pooling improves the accuracy and robustness w.r.t. shifts of CNNs. | [
"pooling",
"anti-aliasing",
"shift-equivalent",
"frequency"
] | Reject | https://openreview.net/pdf?id=SyxD7lrFPH | https://openreview.net/forum?id=SyxD7lrFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"NligvqpEYK",
"ryxwPzW3jr",
"SJgjcJWnjH",
"B1eWm06isr",
"H1e6lQ9Mjr",
"Hyl3QZqMjS",
"BJxED1cGsB",
"r1xXF0YMoS",
"Hyl73mFK9H",
"Skx-jEBr5B",
"r1lTmBYHtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743428,
1573814878555,
1573814163387,
1573801496530,
1573196533106,
1573196067731,
1573195612102,
1573195387501,
1572602795053,
1572324505001,
1571292453133
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2215/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2215/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2215/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2215/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2215/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2215/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2215/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2215/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2215/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2215/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This submission has been assessed by three reviewers and scored 3/6/1. The reviewers also have not increased their scores after the rebuttal. Two reviewers pointed to poor experimental results that do not fully support what is claimed in contributions and conclusions. Theoretical support for the reconstruction criterion was considered weak. Finally, the paer is pointed to be a special case of (Zhang 2019). While the paper has some merits, all reviewers had a large number of unresolved criticism. Thus, this paper cannot be accepted by ICLR2020.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"To all reviewers: our paper is updated.\", \"comment\": \"We have updated our paper based on your suggestions and comments.\\n\\nWe study how zero padding and circular padding of convolutions affect the results. F-pooling is beneficial from circular padding more than AA-pooling and baseline. The results are shown in table 3 and table 4.\", \"we_believe_we_are_dealing_with_a_fundamental_problem_of_cnns_which_is_ignored_by_the_community\": \"pooling destroys the shift-equivalent of CNNs. We provide a strict definition of shift-equivalent when down sampling involved and develop some results based on it.\\n\\nWe believe that F-pooling plays a more important role in applications where shift-equivalent is more serious, such as object detection and semantic segmentation.\\n\\n*****************************************\\nOur anonymous code is released.\"}",
"{\"title\": \"rebuttal\", \"comment\": \"1. Our definition is not the special case of (Zhang 2019). In fact, their definition is not correct for functions where down sampling involved. In our definition, an up sampling operator is used. This operator plays an important role in our proofs.\\n\\n2. Our theoretical results are presented in section 2.2, 2.3 and 2.4. In section 2.3, we prove that F-pooling is the optimal anti-aliasing down sampling. We suggest you read that part carefully. \\n\\nThese results can't be developed based on the previous definition of shift-equivalent. These results are derived based on our strict definition. \\n\\nIn summary, we provide a mathematical treatment of this problem and develop some results. Thus we believe we have theoretical contributions.\\n\\nMoreover, we discuss some practical issues such as how to deal with the imaginary part.\\n\\n3. We apologize for our mistake when calculating the shift consistency. The updated results are not new, but to fix our code mistake.\"}",
"{\"title\": \"Acknowledging rebuttal\", \"comment\": \"I have read the authors' rebuttal.\\n\\nI summarized the authors' response as follows:\\n(1) the proposed computation is not novel;\\n(2) the novelty lies in:\\n(2a) a strict definition of shift-equivalence;\\n(2b) theoretical properties of F-pooling.\\n\\nFor (2a), the author's definition is an adaptation of (Zhang 2019) plus the pooling setting and is, therefore, a special case of the shift-equivalence discussed by Zhang (2019).\\n\\nFor (2b), I could not find where the theoretical results are presented, besides the derivation to show that F-pooling satisfies the shift-equivalent property. As far as I understand, this is a technical contribution (rather than theoretical), where the usual storyline is to propose a new method and show its empirical results.\\n\\nOverall, the novelty is indeed limited. As the authors are still updating their experimental results, I couldn't have enough reasons to update my score.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thanks for your comments.\\nWe are sorry to say that we miss some previous works, especially the ECCV one. And we will provide more literature review in our updated paper. We admit that the computation process of F-pooling and the ECCV method is the same.\\n\\nHowever, we defend the novelty of our work. The values of our work are not how the output of F-pooling is computed. Instead, the values are the strict definition of shift-equivalence and the theoretical properties of F-pooling. In previous works, they even don\\u2019t give an operable definition shift-equivalence when down sampling involved. \\n\\nPlease refer to our general response for more of F-pooling\\u2019s values.\\n\\nMoreover, we discuss some practical problems of F-pooling. Such as how to deal with the imaginary part and the zero-padding of convolutions. With suitable settings, the shift consistency of F-pooling is much better.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thanks for your comments.\\nTo our knowledge, close to 2% improvement of accuracy is not small in CIFAR100. Because we only change pooling layers while keeping others exactly the same. \\n\\nNow, we respond to your questions one by one:\\n\\n1. The results of AA-pooling and F-pooling are not the same. In Fig. 1, we show the results of average-pooling and F-pooling. If you carefully look at the corner of curves, you can find the differences. Without convolution, AA-pooling is similar to average-pooling (both of them are low-pass filters but with sightly different kernels. So AA-pooling gives different results for sine waves.\\n\\n2. We believe F-pooling plays a more important rule in applications where shift-equivalent is serious, such as object detection and object tracking. Because we need to predict the location or shifts of an image object. Moreover, F-pooling may be better for complex-valued CNNs, such as [1].\\n\\n3. The limitation of imaginary part is easy to overcome: set the resolution of F-pooling\\u2019s output to an odd number or padding it to an odd number when the resolution is an even number. In this way, the imaginary part is zero. Moreover, the word shift in this paper means circular shift. So it is better to use circular padding in convolutional layers. However, we find circular padding slower the training speed in PyTorch. If we use zero paddings as in most situations, the beneficial of F-pooling is reduced. Our current experiments use zero paddings. See our general response for what happens when we replace zero paddings with circular padding.\\n\\n4. In all experiments of our current paper, the imaginary part is already ignored. \\n\\nWe can\\u2019t directly measure how the imaginary part affects the performance unless we use complex-valued CNNs. Ignoring this part will destroy the reconstruction optimality, but the effect is small. Suppose the output size of F-pooling is 2N+1. We first transform a signal into frequency domain and keep 2N+1 components with the lowest frequencies: f(-N), \\u2026 , f(0), \\u2026 ,f(N). Then we transform it back into time domain. In this case, the imaginary part in time domain is zero because of symmetry. Now, suppose the output size is 2N+2: f(-N), \\u2026 , f(0), \\u2026 f(N), f(N+1). In this case, the imaginary part is not zero. However, if we set f(N+1) to 0, it imaginary part becomes zero again. Thus, the error of ignoring imaginary part is not larger than ||f(N+1)||. Fig.4 shows an example of odd and even output size of F-pooling.\\n\\n[1] Deep complex networks, ICLR2018\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We think your suggestions are very meaningful. We respond to them one by one:\\n\\n1. We will explain anti-aliasing in our updated paper. Roughly, anti-aliasing is helpful for signal reconstruction. However, we can\\u2019t provide a strict treatment of how anti-aliasing relates to classification. But we have intuitions: first, we believe reconstruction relates to classification (see our next response); second, frequency components are orthogonal. Aliasing means different components are mixed again. This may mislead the next layers for processing. \\n\\n2. To our knowledge, researchers haven\\u2019t fully understood the whole process of image classification until now. Thus, we can\\u2019t provide a strict treatment of how reconstruction optimality relates to classification optimality. But we have intuitions and empirical evidence of their relation: convolution layers are used to transform a signal which makes it easier to be classified. So if we accept that the feature extracted by previous convolution layers is useful, then it is best to keep it as much as possible for the current pooling layer. In this way, it is reasonable to assume that reconstruction optimality is consistent with classification optimality. On the other hand, it is difficult to directly define classification optimality for an intermediate layer. Moreover, several works, such as [1] have shown that using self reconstruction loss as an auxiliary is helpful for classification.\\n\\n3. Please refer to our general response. With suitable settings, the shift consistency of F-pooling is much better.\\n\\n[1] Semi-Supervised Learning with Ladder Networks, NIPS2015\"}",
"{\"title\": \"General response to all reviewers:\", \"comment\": \"Thank you for your comments and suggestions. We are sorry that our paper is written in a hurry. We accept that some contents are not explained well and the experiments are weak in the current version of this paper.\\n\\nHowever, we defend the value and novelty of our work in the following aspects:\\n\\n1. We believe shift-equivalence is very important for CNNs. Our work aims to recovery this property of CNNs which are destroyed by down sampling in a principle way. We provide theoretical guarantees of optimal anti-aliasing and shift equivalence.\\n\\n2. Although everyone talks about shift-equivalence, we find there is no strict definition of it for CNNs when down sampling is involved. We have shown that a corresponding up sampling operation must be involved in the strict definition. We also have shown that this up sampling operation plays an important role when we prove the properties of F-pooling. We believe a formal definition and a mathematical treatment have great value for academic research.\\n\\n3. Some reviewers mention that F-pooling is not consistently better than AA-pooling. But we never claim that F-pooling must beat AA-pooling. We choose AA-pooling just to provide a performance reference of the recently proposed method in this topic. Experiments in Tab 1-3 are consistently better than the baseline. We think this is enough to show the effectiveness of F-pooling. \\n\\n4. We choose image classifications to evaluate F-pooling just because it is commonly used and is easy to implement. There are some computer vision applications where shift-equivalence plays a more important role, such as object detection and object tracking. In those applications, F-pooling may be more valuable.\\n\\n************************************************************************************************\\nWe apologize again because we make a terrible mistake when we test shift consistency. The consistency results in all table are incorrect. We have updated the correct values in our new pdf files. The relative order is similar as before.\", \"one_of_the_most_important_reasons_why_the_consistency_of_f_pooling_is_not_as_good_as_we_expect_is_that\": \"F-pooling is designed to be circular shift equivalence. However, convolutional layers with zero paddings destroy circular shift equivalence. Thus, one should expect that using circular padding in convolutional layers will greatly increase the shift consistency of F-pooling.\\n\\nWhen we use circular padding, the shift consistency of F-pooling becomes much better. For ResNet18 on CIFAR100 with circular padding without shift argument, we have the following attractive results:\", \"baseline\": \"60.47\", \"aa_pooling\": \"65.14\", \"f_pooling\": \"84.04\\n\\n************************************************************************************************\\nWe will revise our paper based on your comments and update our paper during rebuttal. We plan to add more experiments to show the effect of F-pooling better. We also plan to submit our code during rebuttal.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper researches the pooling operation, which is an important component in convolutional neural networks (CNN) for image classification. Taking the perspective from signal processing, this paper proposes a pooling operation called frequency pooling (F-pooling). The key motivation is to make the pooling operation shift-equivalent and anti-aliasing. This paper gives an improved definition on shift-equivalent functions and shows that the proposed F-pooling is optimal in the sense of reconstructing the orignal signal. The F-pooling is then implemented with matrix multiplications and tested with recent convolutional neural networks for image classifiation on CIFAR-100 and a subset of ImageNet dataset.\\n\\nIt is interesting to take the perspective from signal processing to give pooling operation in CNN a formal treatment. As indicated in the recent literature, enforcing shift-invariance does help to improve the performance of a CNN on classification accuracy and the robustness with respect to image shift. At the same time, this work can be further enhanced at the following aspects:\\n1. This work can make it clearer in principle how anti-aliasing contributes to improving the classification performance and robustness. This will help to make this paper more self-contained. \\n2. When showing the optimality of F-pooling in Section 2.3, the criterion is to reconstruct the original signal x. Considering that the ultimate goal is classification, the information to be maximally preserved through each operation through the layers shall be the information that relates to the class label y. In light of this, some justification and explanation shall be provided for using this criterion for optimality.\\n3. The experimental study is weak. Experiments could be conducted on more benchmark datasets with more CNN architectures to convincingly show the effectiveness of the proposed F-pooling. Also, from the three Tables in the experimental part, the improvement of F-pooling over AA-pooling (developed by the main reference of this work) does not seem to be significant or consistent. For example, in Table 2, the F-pooling only wins at either accuracy (marginally) or consistency, but not both. In Table 3, the F-pooling consistently shows inferior classification performance, although obtaining slightly higher consistency. This makes the advantage of F-pooling over the existing AA-pooling unclear.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed a new pooling method (Frequency pooling) which is strict shift equivalent and anti-aliasing in theory. The authors first derived the theory of F-pooling to be optimal anti-aliasing down sampling and is shift-equivalent in sec 2, and then demonstrated the experimental results of 1D signals and image classification tasks.\\n\\nThe experimental results are actually less impressive than what are claimed in contribution and conclusion. The authors stated that \\\"F-pooling remarkably increases accuracy and robustness w.r.t. shifts of moderns CNNs\\\"; however, in Table 1-3, the winning margin of accuracy is actually quite small (<2%), and the consistency (<3.5% compared to the second best baseline except resnet-18 on CIFAR 100 has large improvement ~7-8%).\", \"questions\": \"1. For the experiment of 1D signal on sine wave, the AA-pooling and F-pooling give the same result? \\n2. Compared to AA-pooling, it seems that F-pooling has a better theoretical guarantee (i.e. the optimal anti-aliasing down sampling operation given U). But other than this, the empirical performance seem not showing particular advantage over AA-pooling. Are there any other advantages for F-pooling s.t. people might want to use it as opposed to AA-pooling?\\n3. What are the limitations of the F-pooling? It is good to me that the authors discuss one limitation on the imaginary part of output and I would like to hear more on other potential limitations for this method. \\n- also, if the authors can explain more on sec 2.5 it will be helpful. If we simply ignore the imaginary part, although the theory is not applicable, but what would the empirical performance be?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposed \\\"F pooling\\\" for Frequency Pooling, which is a pooling operation satisfying shift equivalence and anti-aliasing properties. The method is very simple: first, transform the input 1D/2D signal into the spectrum domain based on discrete Fourier transform (DFT), then cut the high-frequencies, then transform back to the time domain using the inverse DFT. The method can be implemented using FFT and auto differentiation frameworks. The method is tested on Resnet/Desnet on CIFAR-100 and subsets of ImageNet, showing better performance than the original models.\\n\\nThe reviewer votes for rejection as the method has limited novelty. Spectrum pooling has been used in the community of computer vision and machine learning. Taking a random example (there are others by simple searching), in the ECCV paper \\\"DFT-based Transformation Invariant Pooling Layer for Visual Classification, Ryu et al., 2018\\\" The DFT magnitude pooling is almost the same as the authors' propositions, where the \\\"Fourier coefficients are cropped by cutting off high-frequency components\\\". \\n\\nThe reviewer encourages the authors to make further new developments and have a more comprehensive literature review. But in the current form, the paper has less value to be published in ICLR.\"}"
]
} |
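The F-pooling record above describes the computation as a DFT, a crop that keeps only the lowest frequencies, and an inverse DFT. Below is a minimal 1D NumPy sketch under our own assumptions, not the authors' released code: `f_pool_1d` is a hypothetical name, and the amplitude rescaling is one common convention for Fourier-domain resampling.

```python
import numpy as np

def f_pool_1d(x, out_size):
    # x: real 1D signal of length n; out_size: target resolution after pooling.
    n = x.shape[-1]
    X = np.fft.fftshift(np.fft.fft(x))          # spectrum with DC moved to the centre
    start = n // 2 - out_size // 2
    X_crop = X[start:start + out_size]          # keep only the lowest frequencies
    y = np.fft.ifft(np.fft.ifftshift(X_crop)) * (out_size / n)  # rescale amplitudes
    # For odd out_size the cropped spectrum stays conjugate-symmetric, so the
    # imaginary part is round-off only; otherwise it is dropped, as the rebuttal
    # suggests, with an error bounded by the discarded edge coefficient.
    return y.real

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 32, endpoint=False))
print(f_pool_1d(x, 15))  # a pure low-frequency sine survives the pooling intact
```

The odd `out_size` here corresponds to the rebuttal's suggestion of using an odd output resolution so that the imaginary part vanishes by symmetry.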
HklvmlrKPB | Improving Sequential Latent Variable Models with Autoregressive Flows | [
"Joseph Marino",
"Lei Chen",
"Jiawei He",
"Stephan Mandt"
] | We propose an approach for sequence modeling based on autoregressive normalizing flows. Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics. This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques. We demonstrate the proposed approach both with standalone models and as part of larger sequential latent variable models. Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models. | [
"Autoregressive Flows",
"Sequence Modeling",
"Latent Variable Models",
"Video Modeling",
"Variational Inference"
] | Reject | https://openreview.net/pdf?id=HklvmlrKPB | https://openreview.net/forum?id=HklvmlrKPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"L6uhbfuLi",
"r1eoA5cssB",
"rkepiqqsir",
"ryeqOccosH",
"r1eNXq5ioH",
"Skld8grnYB",
"HylbHoHSFS",
"HyejGEhEFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743393,
1573788371086,
1573788325483,
1573788274029,
1573788188369,
1571733583613,
1571277624732,
1571238931385
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2214/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2214/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2214/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2214/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2214/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2214/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2214/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper scores low on novelty. The experiments and model analysis are not very strong.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your comments! Here, we will attempt to address additional specific points:\\n\\n\\u201cIs the claimed contribution new methodology for modeling sequences? In my opinion, using flows as VAE decoders, or adding latent variables to a flow model and training it variationally, are standard applications of existing techniques and I wouldn't consider them particularly novel.\\u201d\\n\\nAs mentioned, flows and VAE models have been combined in various ways (though, in our opinion, this is still under-explored), and we do not claim to introduce this combination. While affine autoregressive flows are a popular class of flow-based models, to the best of our knowledge, the application of these flows across time steps for the purposes of simplifying video modeling is novel. Specifically, our main contribution is identifying flows as a useful technique for pre-processing sequences to simplify downstream modeling.\\n\\n\\u201cIs the claimed contribution improved modeling performance? The main results are that (a) replacing Gaussian decoders with autoregressive flows improves performance, and (b) adding latent variables to the base distribution of an affine autoregressive flow also improves performance. Both of these results are exactly what one would expect from our experience with these methods.\\u201d\\n\\nImproved modeling performance is one of the results of our method. It does seem that including more flexible model components should obviously improve performance. However, from the perspective from a sequential latent variable model, it is unclear where to incorporate dynamics. For example, one could include more latent variables or recurrent networks at various stages. We specifically propose using autoregressive flows as a type of \\u2018pre-processing\\u2019 stage, resulting in a new sequence with dynamics that are hopefully easier to model. In our experiments, we attempt to control for the complexity of each model.\\n\\n\\u201cIs the claimed contribution useful representations?\\u201d\\n\\nThis is not something that we investigated in this paper, although we intend to investigate this more thoroughly. \\n\\n\\u201cEq. (8): As written, the expression makes little sense as \\\\sigma is a vector. I understand that there is supposed to be a sum over the elements of log\\\\sigma, so I'd suggest expressing that more clearly.\\u201d\\n\\nYou are correct. This has been updated in the submission. Thank you!\\n\\n\\u201cEq. (9): It seems to me that the last Jacobian is upside down.\\u201d\\n\\nIndeed. Thanks again!\\n\\n\\u201cIn the particle analogy of the motivating example of section 3.1, it would be good to say explicitly that x is the position, u is the velocity and w is the force, to make the example even more intuitive.\\u201d\\n\\nWe have stated this in the updated submission.\\n\\n\\u201cThe paper only considers affine autoregressive flows, but there has been a lot of recent work on non-affine autoregressive flows that are more expressive\\u2026At the very least, it would be good to discuss them as more flexible alternatives.\\u201d\\n\\nWe have included a discussion of these non-affine flows in the updated submission. We chose affine flows for their relative simplicity while still yielding reasonable performance. \\n\\n\\u201cIn section 3.2, a third and very significant limitation of the flows discussed here is that they act elementwise on the dimensions (e.g. 
pixels) of y_t.\\u201d\\n\\nWe have stated this limitation more specifically. However, for the purposes of removing correlations across time, rather than space, they are useful.\\n\\n\\u201cIn the experimental section, it would be good to describe on a high level what the architecture of the VAE is, especially the architecture of the prior and the encoder, and the types of distributions used there (e.g. diagonal Gaussians or otherwise).\\u201d\\n\\nWe have included a more thorough discussion of the model architectures, as well as diagrams in Appendix B.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your comments! Here, we will attempt to address additional specific points:\\n\\n\\u201cThe paper has a lengthy section 3.1 that convincingly explains that decorrelating latent variables in time is important for sequence modeling. However the proposed approach in fact produces latents that are correlated in time!\\u2026 Is there solid quantitative (or even qualitative) evidence that the model learns a \\u2018more decorrelated\\u2019 representation\\u201d\\n\\nIt should be noted that while these flows have the capability of removing temporal correlations, they may not be able to remove all temporal dependencies. Thus, it can still be beneficial to model these dependencies in the base distribution of the flow. The motivation is not that we want to remove all temporal dependencies, but rather that we would like to remove as much as possible to simplify modeling for the sequential latent variable model. Based on your suggestion, we have provided a quantitative confirmation that the result of the flow is less temporally correlated than the input.\\n\\n\\u201cWere modern techniques beyond affine flows considered, such as from Kingma\\u201918, Kumar\\u201919? Two layers of affine flows are likely insufficient to model the complexity of these data, which makes the comparison to the purely flow-based models somewhat unfair.\\u201d\\n\\nIt is important to note that we are applying flows across time steps, rather than within a time step. If we were to apply a method like GLOW (which is affine) in an analogous way, this would involve applying the flow on half the time steps as a function of the other half of the steps. Methods like VideoFlow apply flows within time steps. While this may further improve performance by removing spatial correlations, this is not the motivation of our work. Many flows are part of the general family of affine flows (e.g. NICE, RealNVP, IAF, MAF, GLOW), so we felt this was an important place to start in developing this technique. We included comparisons with standalone flow-based models to demonstrate that these models work well as generative models on their own. Note that a single affine autoregressive flow is exactly equivalent to an autoregressive model, which can perform quite well in practice.\\n\\n\\u201cDo Kumar et al. not \\u201cdemonstrate flows across time steps\\u201d?\\u201d\\n\\nWhile Kumar et al. do apply flows within a sequential context, we mean to distinguish between applying flows within a time step vs. across time steps, as in our work. Kumar et al. use non-flow-based models to model temporal dependencies. Other works, such as van den Oord et al. with audio data, do use flows across time steps, as we do here.\\n\\n\\u201cEq (10) and (12) seem to be inconsistent. Perhaps x_t = x_t-1 + u_t-1 was meant in eq (10)?\\u201d\\n\\nWe understand the point of confusion, however, Eqs. 10 and 12 are consistent. In Eq. 10, x_t = x_t-1 + u_t gives the exact value of x_t, but in Eq. 12, x_t-1 + u_t-1 gives the mean of the Gaussian distribution over x_t. This can be seen by plugging the random variable for u_t, i.e. Eq. 13, into Eq. 10.\\n\\n\\u201cLine before eq(14): it not true that u_t-1 = x_t-1 - x_t-2. It would be true if the deterministic x_t = x_t-1 + u_t-1 model was assumed instead of the gaussian N(x_t; x_t-1 + u_t-1, Sigma). It is possible that eq(14) is still correct as the variance of Gaussians is additive.\\u201d\\n\\nThis follows directly from the definition of u. 
To be clear, x and u are simply different ways of expressing the same randomness, subject to different offset values. This is because the transform between u_t and x_t is deterministic. All of the stochasticity originates from w_t.\\n\\n\\u201cThe following work uses autoregressive flows for modeling temporal dynamics and should be cited: Rhinehart\\u201918,19\\u201d\\n\\nThank you for these references. We have included them in the updated draft.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your comments! Here, we will attempt to address additional specific points:\\n\\n\\u201cthe paper misses broader performance comparisons against other state of the art models, in particular videoflow which is quite related to the models introduced in this paper.\\u201d\\n\\nAs discussed in our common response, our goal was not to propose a specific video-modeling architecture, but rather to propose a technique for improving sequence modeling. VideoFlow applies flows within each time step, unlike our proposed technique, which operates across time steps. VideoFlow is also significantly larger than the models that we investigated, consisting of 3 levels of latent variables, each with 24 steps of flow, and each flow containing 5 residual blocks. In contrast, the models in our experiments consist of just one or two flows, with each component of our models parameterized with relatively simple convolutional or recurrent networks. As stated in our common response, our quantitative results are on-par with log-likelihood estimates for previous works, like SVG (Denton & Fergus, 2018).\\n\\n\\u201cWhat would happen if we used the same trick of modeling the conditional likelihood in this way in other SOTA models?\\u201d\\n\\nWe chose a representative sequential latent variable model for our experiments. However, we suspect this technique will apply broadly to many sequence modeling settings. Indeed, as we noted in our submission, VideoFlow models differences in variables, which we discuss as a special case of our technique. Before the camera-ready deadline, we intend to conduct additional experiments applying our technique to some of these previously proposed models.\\n\\n\\u201cwhat are the computational requirements of the models presented in this paper?\\u201d\\n\\nAutoregressive flows, in the sequential context, add only a constant computational cost to each time step, requiring only a single forward pass for evaluation and generation.\"}",
"{\"title\": \"Response to All Reviewers\", \"comment\": \"We would like to thank the reviewers for their feedback; we found their comments insightful and constructive. We were also happy to see that the reviewers found the idea \\u2018crystal clear.\\u2019 The draft has been updated to reflect their comments. We will attempt to address common points in this post, with separate comments to each reviewer addressing specific points.\\n\\n\\u2014Comparison with prior work\\n\\nThe motivation behind our work is to provide a general-purpose technique for improving sequence modeling. To that end, we are not proposing a specific model architecture, instead focusing on relative improvements over a representative video modeling architecture. The sequential latent variable model architecture that we used for conducting experiments is a fairly standard convolutional encoder-decoder architecture with fully-connected latent variables. This architecture resembles previous works like world models (Ha & Schmidhuber, 2018) and SVG (Denton & Fergus, 2018).\\n\\nHowever, the difficulty in comparing video modeling performance is that many previous works employ a variety of custom techniques. For instance, many previous works do not train or evaluate their models with proper log-likelihood (or lower bound) objectives, e.g. Ha & Schmidhuber, 2018 and Denton & Fergus, 2018 both down-weight the KL term in the objective, yielding an improper lower bound. These previous works also evaluate squared pixel error, implicitly setting the std. dev. to 1. Indeed, when SVG is trained with a variational bound, the results are comparable with our reported results, e.g. -2.86 nats/dim (SVG) vs. -2.39 nats/dim (ours) on KTH Actions (Marino et al., 2018). Other works, like Hafner et al., 2019 restrict the bit-depth of the images, yielding log-likelihood results that are not directly comparable with ours. Still other works, like SAVP (Lee et al., 2018), apply combinations of lower bound and adversarial losses. To our knowledge, the most similar recent work to evaluate log-likelihood performance is VideoFlow (Kumar et al., 2019). While their quoted performance is significantly higher than our models, their models are also substantially larger, consisting of 3 levels of latent variables, with 24 steps of flows between each level. In contrast, we use 1 or 2 levels of latent variables, we only 1 or 2 steps of flow. We have noted this in the updated draft.\\n\\nWe focused on log-likelihood as a metric of model performance, choosing video data for its ability to visualize aspects of autoregressive flows. Importantly, quantitative and qualitative metrics of images are not always well-aligned (Theis et al., 2015). Indeed, Kumar et al., 2019 note that there is only a weak correlation. While we agree that employing the aforementioned techniques to improve qualitative metrics is a useful direction, we felt it was more important to establish performance improvements on a clear quantitative basis. We intend to run an even more comprehensive set of experiments before the camera-ready submission deadline to investigate these possible improvements.\\n\\n\\u2014Main contribution and related work\\n\\nWe have updated the background section to include the references suggested by the reviewers, as well as to clarify the main contribution of the work relative to these previous works. 
In our updated draft, we clarify that \\u201cwe demonstrate that autoregressive flows can serve as a useful, general-purpose technique for improving sequence modeling as components of sequential latent variable models. To the best of our knowledge, our work is the first to focus on the aspect of using flows to pre-process sequential data to improve downstream dynamics modeling.\\u201d While Kumar et al., 2019 apply flows to video data, they do so by applying flows separately within each time step. Their process attempts to remove spatial correlations to get to the base distribution. We, instead, apply flows across time steps, processing the current frame based on previous frames. This process attempts to remove temporal correlations.\\n\\n\\u2014Quantitative evaluation of decorrelation\\n\\nWe have updated the draft with an additional quantitative analysis of the temporal correlation. Indeed, we find that temporal correlation decreases in all three cases, corroborating the qualitative results. We also present a plot in Appendix C showing this temporal correlation decreasing during training, suggesting that these flows gradually learn a simplified basis for downstream dynamics estimation. We would like to thank R2 for this useful suggestion.\\n\\n\\u2014Generated samples\\n\\nWe have included sets of generated samples in Appendix C. While these samples do not remain sharp over long temporal horizons, they capture backgrounds reasonably well. As noted above, we did not employ the range of techniques used to improve sample generation in video modeling, instead focusing on quantitative metrics. We intend to run additional experiments to improve sample generation quality, such as adjusting sampling temperature (Kumar et al., 2019).\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes to model temporal sequences using autoregressive flows across time steps, that allow to model more explicitly temporal changes of the input, i.e. how the input x_t has changed w.r.t x_{<t}. As also stated by the authors, this is a generalization of other work that instead of modelling the input at each time step, models temporal differences between consecutive time steps.\", \"To the best of my knowledge, this is the first work that models normalizing flows in the sequential setting in this way (to be fair however, the idea is fairly obvious).\", \"Overall I found the paper interesting, and I think it is well written, so I am leaning towards acceptance. My biggest concern in the paper is the experimental section that could be improved in several ways:\", \"the paper misses broader perfoemance comparisons against other state of the art models, in particular videoflow which is quite related to the models introduced in this paper.\", \"how does the model perform on longer sequences, e.g. for long term generation? I would expect that such a direct dependence of the temporal dynamics on the frames of the video may make it hard for the model to coherently predict future latent states for many time steps.\", \"What would happen if we used the same trick of modelling the conditional likelihood in this way in other SOTA models?\", \"what are the computational requirements of the models presented in this paper?\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary \\nThe paper proposes to combine the video modeling approaches based on autoregressive flows (e.g. Kumar\\u201919) with amortized variational inference (e.g. Denton\\u201918), wherein an autoregressive latent variable model optimized with variational inference is extended with an autoregressive flow that further transforms the output of the latent variable model while allowing to compute exact conditional probability. This is motivated with a physical intuition, where a dynamics model can benefit from decorrelating the inputs, and it is demonstrated that layers of autoregressive flows can represent derivatives of the original signal. In a proof-of-concept experiment, it is shown that using a layer of autoregressive flow improves NLL of a latent variable model.\\n\\nDecision\\nThe paper presents an interesting method and tackles an important problem. At the same time, the properties of the proposed method are not well exposed and the experimental evaluation is incomplete. Moreover, the motivation of the paper is confusingly disconnected from the proposed model. I rate this paper as borderline, but am hopeful that some of the issues will be clarified during the discussion period.\\n\\nPros\\n- The paper is well-motivated and tackles a significant problem.\\n- The proposed method is novel.\\n- The paper is well-written.\\n\\nCons\\n- The experimental evaluation is incomplete and does not expose the properties of the method fully. Comparisons to prior art are missing. (see below)\\n- The motivation is disconnected from the proposed model. The introduction of the paper motivates a model that hierarchically decorrelates a sequence of frames to arrive at a fully factorized model, which is later motivated with a physical example. However, the method proposed in the paper is instead a single layer of autoregressive flow on top of a powerful latent variable model! This is expressed in the title, but only glossed over in the abstract and introduction. The writing has to be updated to coherently focus on the contribution of the paper. \\n\\nQuestions (ordered by decreasing importance)\\n1. In table 1, quantitative results are reported for the introduced methods. It is shown that introducing autoregressive flows achieves better likelihood and better generalization. However, quantitative comparisons with published methods that were evaluated on these datasets are missing, such as Denton\\u201918 and Kumar\\u201919. A quick calculation shows that Kumar et al. achieves a log-likelihood of -0.43 in Table 1 when converted to this paper\\u2019s metric, although it is possible my conversion is incorrect. Is the presented model competitive with previously published results? \\n2. No qualitative generation results are presented. Since the model achieves a high likelihood it is likely to do well on one-frame prediction, and possibly would even work on autoregressive multi-step prediction. Is the model capable of generation of diverse and plausible video?\\n3. The paper has a lengthy section 3.1 that convincingly explains that decorrelating latent variables in time is important for sequence modeling. However the proposed approach in fact produces latents that are correlated in time! 
Since the prior over latent variables is conditioned on past frames, the model can in fact learn a correlated representation and still achieve optimal likelihood. Moreover, the position of both the digit and the robot arm could be seen in what should be the decorrelated image in Fig 4. Is there solid quantitative (or even qualitative) evidence that the model learns a \\u2018more decorrelated\\u2019 representation beyond the fact that it copies the background and that the likelihood improves? The evaluation in this paper does not convince me that the model learns a temporally decorrelated representation.\\n4. Were modern techniques beyond affine flows considered, such as from Kingma\\u201918, Kumar\\u201919? Two layers of affine flows are likely insufficient to model the complexity of these data, which makes the comparison to the purely flow-based models somewhat unfair.\\n5. It is stated that the paper is \\u201cthe first to demonstrate flows across time steps for video data\\u201d, however, the related work by Kumar et al. proposes a somewhat similar model in which conditional flows are used to model video data. Do Kumar et al. not \\u201cdemonstrate flows across time steps\\u201d?\\n\\nMinor comments\\n1. Eq (10) and (12) seem to be inconsistent. Perhaps x_t = x_t-1 + u_t-1 was meant in eq (10)?\\n2. Line before eq(14): it is not true that u_t-1 = x_t-1 - x_t-2. It would be true if the deterministic x_t = x_t-1 + u_t-1 model was assumed instead of the Gaussian N(x_t; x_t-1 + u_t-1, Sigma). It is possible that eq(14) is still correct as the variance of Gaussians is additive.\\n3. The following work uses autoregressive flows for modeling temporal dynamics and should be cited: Rhinehart\\u201918,19\\n\\nRhinehart et al, Deep Imitative Models for Flexible Inference, Planning, and Control\\nRhinehart et al, PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings\\n\\n--------------------- Update 11.19 -----------------------\\nThe newly provided experiments support some of the claims of the paper. In particular, I appreciate the plot showing that the proposed method successfully learns a more decorrelated representation over time, and the provided qualitative samples from the model. The authors also clarified my questions about motivation. At the same time, the proposed method is not shown to compare well to state-of-the-art approaches. I am leaning towards accepting the paper, but I believe the method would have a much larger impact if its properties were more fully exposed.\\n\\n== comparison with Denton&Fergus'18 (SVG) ==\\nWhen trained with beta=1, as the authors suggest for comparison, this method is known to perform poorly. There are two possible ways of alleviating this: 1) to train with the modified objective as in the paper but evaluate the true lower bound on the likelihood, or 2) interpret the beta as the fixed variance of the decoder distribution. Given the results the authors have provided, I believe the latter option will lead to SVG outperforming the proposed approach.\\n\\n== Correlation plot == \\nThanks for performing this experiment! While measuring correlation only captures linear dependencies, which is likely mostly the background image, this plot shows that the model indeed learns to (linearly) decorrelate the frames in the sequence. \\n\\n== Samples == \\nThanks for providing samples from the model! While the performance on BAIR is not quite convincing, the MNIST samples look very good.\\n\\n== Kumar et al. 
comparison ==\\nThe author's response convinces me that the proposed model is significantly different from Kumar et al. in scope, as Kumar et al. simply use a per-frame normalizing flow encoder coupled with a sequential prior.\\n\\n== eqs. 10, 12 ==\\nThe authors' response cleared my confusion; the equations are correct.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThe paper discusses ways to use autoregressive flows in sequence modelling. Two main variants are considered:\\n(a) An affine autoregressive flow directly modelling the data.\\n(b) An affine autoregressive flow whose base distribution is a sequential VAE; equivalently, a sequential VAE whose decoder is an affine autoregressive flow.\", \"pros\": \"The paper is very well written and crystal clear. I particularly appreciated the motivating example that shows how each layer of an affine autoregressive flow reduces the order of a linear dynamical system by 1, and the connections with modelling temporal changes and moving reference frames.\\n\\nThe methods are technically correct and well-motivated. The experiments are done well.\\n\\nOverall, the paper scores high on writing and technical quality.\", \"cons\": \"In my opinion, the paper scores low on novelty and original contribution.\\n\\nIn general, it's not clear to me what the claimed contribution is. More specifically:\\n\\nIs the claimed contribution new methodology for modelling sequences? In my opinion, using flows as VAE decoders, or adding latent variables to a flow model and training it variationally, are standard applications of existing techniques and I wouldn't consider them particularly novel.\\n\\nIs the claimed contribution improved modelling performance? The main results are that (a) replacing Gaussian decoders with autoregressive flows improves performance, and (b) adding latent variables to the base distribution of an affine autoregressive flow also improves performance. Both of these results are exactly what one would expect from our experience with these methods. Other than that, the paper doesn't present any results that indicate the particular models used enable us to do things we couldn't do before, or improve against the state of the art in sequence modelling.\\n\\nIs the claimed contribution useful representations? The motivation for using the flow in this particular way as a VAE decoder is that the flow will model low-level correlations whereas the latent variables will capture high-level dynamics. However, the experiments (e.g. the visualizations) don't support this claim, and the usefulness of the learned representations hasn't been demonstrated in an alternative way,\", \"decision\": \"Even though the paper is technically correct and well written, my decision is weak reject because of the lack of novelty and original contribution.\", \"suggestions_for_improvement\": \"My main suggestion to the authors is to keep up the good work, but also reflect on what the specific contribution of the paper is, and try to make a stronger case for it. Some minor suggestions/corrections follow:\\n\\nEq. (8): As written, the expression makes little sense as \\\\sigma is a vector. I understand that there is supposed to be a sum over the elements of log\\\\sigma, so I'd suggest expressing that more clearly.\\n\\nEq. (9): It seems to me that the last Jacobian is upside down.\\n\\nIn general, it would be good to be more thorough on how this paper is similar to related work and how it differs. 
There is also this related work which may be good to discuss:\\n\\nLatent Normalizing Flows for Discrete Sequences, https://arxiv.org/abs/1901.10548\\n\\nIn the particle analogy of the motivating example of section 3.1, it would be good to say explicitly that x is the position, u is the velocity and w is the force, to make the example even more intuitive.\\n\\nThe paper only considers affine autoregressive flows, but there has been a lot of recent work on non-affine autoregressive flows that are more expressive, for example:\\n\\nNeural Autoregressive Flows, https://arxiv.org/abs/1804.00779\\nSum-Of-Squares Polynomial Flow, https://arxiv.org/abs/1905.02325\\nNeural Spline Flows, https://arxiv.org/abs/1906.04032\\n\\nSuch flows could improve the experimental results of the paper. At the very least, it would be good to discuss them as more flexible alternatives.\\n\\nIn section 3.2, a third and very significant limitation of the flows discussed here is that they act elementwise on the dimensions (e.g. pixels) of y_t.\\n\\nIn the experimental section, it would be good to describe on a high level what the architecture of the VAE is, especially the architecture of the prior and the encoder, and the types of distributions used there (e.g. diagonal Gaussians or otherwise).\\n\\nIt would be good to show samples from the models in the experimental results.\"}"
]
} |
rkgvXlrKwH | SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference | [
"Lasse Espeholt",
"Raphaël Marinier",
"Piotr Stanczyk",
"Ke Wang",
"Marcin Michalski"
] | We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 twice as fast in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out. | [
"machine learning",
"reinforcement learning",
"scalability",
"distributed",
"DeepMind Lab",
"ALE",
"Atari-57",
"Google Research Football"
] | Accept (Talk) | https://openreview.net/pdf?id=rkgvXlrKwH | https://openreview.net/forum?id=rkgvXlrKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"eEx-yC4jiT",
"B1lAv2N2jr",
"rJlXiabPsS",
"Skxw_a-vjr",
"ryeuWTbPsH",
"Byg1qYe5qS",
"S1lmYz5Ycr",
"S1g1dnlAYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743364,
1573829733606,
1573490075329,
1573490030804,
1573489919533,
1572632966591,
1572606587166,
1571847271195
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2213/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2213/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2213/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2213/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2213/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2213/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2213/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"The paper presents a framework for scalable Deep-RL on really large-scale architecture, which addresses several problems on multi-machine training of such systems with many actors and learners running. Large-scale experiments and impovements over IMPALA are presented, leading to new SOTA results. The reviewers are very positive over this work, and I think this is an important contribution to the overall learning / RL community.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper updated with apple-to-apple comparison\", \"comment\": \"We thank again the reviewer for their positive comments.\\n\\nWe have updated the paper with an \\\"apple-to-apple\\\" comparison by running both agents on an Nvidia P100 GPU. See table 1 for update figures, as well as additional analysis in section 4.1.2 and additional cost comparison in section A.6.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the time, comments on the paper and the appreciation of open sourcing the content of the paper.\"}",
"{\"title\": \"Support for including SEED at the ICLR conference\", \"comment\": \"We thank the reviewer for the time and positive comments on the paper.\\n\\nTo support including the paper at the ICLR conference, we note that ICLR in previous years included papers with similar flavor to SEED such as,\\nDistributed Prioritized Experience Replay (Ape-X), ICLR 2018\\nRecurrent Experience Replay in Distributed Reinforcement Learning (R2D2), ICLR 2019\"}",
"{\"title\": \"Regarding apple-to-apple comparison\", \"comment\": \"We thank the reviewer for the time and the positive comments.\\n\\nWith regards to comparing apples-to-apples, we will add the performance of running SEED with Nvidia P100\\u2019s. Note, the cost of running IMPALA does not improve significantly with TPUs as the cost is dominated by inference on CPU.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper presents a scalable reinforcement learning training architecture which combines a number of modern engineering advances to address the inefficiencies of prior methods. The proposed architecture shows good performance on a wide variety of benchmarks from ALE to DeepMind Lab and Google Research Football. Important to the community, authors also open source their code and provide an estimate which shows that the proposed framework is cheaper to run on cloud platforms.\", \"pros\": \"1. This work is solid from the engineering perspective. It effectively addresses the problems with prior architectures and the accompanying source code is clear and well structured. It is also extensively tested on several RL benchmarks.\\n\\n2. The proposed framework is especially suited for training large models as the model parameters are not transferred between actors and learners.\\n\\n3. The paper is well written and organized.\", \"cons\": \"1. The gain of the main algorithmic improvement (SEED architecture) over the baseline (IMPALA architecture) is obscured by the usage of different hardware. TPUv3 has different characteristics than Nvidia P100/V100 GPU chips which also might contribute to the speed up.\", \"questions\": \"1. Is it possible to provide more \\u201capple-to-apple\\u201d comparison by running SEED and IMPALA on the same hardware (TPUv3 or Nvidia P100/V100 GPU)?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a new reinforcement learning agent architecture which is significantly faster and way less costly than previously distributed architectures. To this end, the paper proposes a new architecture that utilizes modern accelerators more efficiently. The paper reads very well and the experimental results indeed demonstrate improvement. Nevertheless, even though working in deep learning for years and have also some experience with Reinforcement learning I am not in the position to provide an expert judgment on the novelty of the work. I do not know if ICLR is the right place of the paper (I would probably suggest a system architectures conference for better assessment of the work).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents SEED RL, which is a scalable reinforcement learning agent. The approach restructure the interface / division of functionality between the actors (environments) and the learner as compared to the distributed approach in IMPALA (a state-of-the-art distributed RL framework). Most importantly, the model is only in the learner in SEED while it is distributed in IMPALA.\\n\\nThe architectural change from to IMPALA to SEED feels reasonable, and the results support the choices in a positive way.\\n\\nSEED is evaluated using a large number of benchmarks using three environments, and the performance is compared to IMPALA. The results are very good, shows good scalability, and significantly reduced training times. \\n\\nThe paper is well written, easy to read, and I enjoyed it. \\n\\nThe code for SEED is released open source, which enables future research to build upon SEED.\"}"
]
} |
Hye87grYDH | Sparse Transformer: Concentrated Attention Through Explicit Selection | [
"Guangxiang Zhao",
"Junyang Lin",
"Zhiyuan Zhang",
"Xuancheng Ren",
"Xu Sun"
] | Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Sparse Transformer. Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance.
Sparse Transformer reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation. In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance. | [
"Attention",
"Transformer",
"Machine Translation",
"Natural Language Processing",
"Sparse",
"Sequence to sequence learning"
] | Reject | https://openreview.net/pdf?id=Hye87grYDH | https://openreview.net/forum?id=Hye87grYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"cOn8m4Y4mI",
"HJgIfWo2oB",
"rkezjkjniS",
"HylfgT9njB",
"BkeSP792or",
"Bkx1jJ53ir",
"BylJAgnjjB",
"BylYWr7msB",
"SyenHj3h9H",
"SkxkjiCatr",
"BkehHdjsFS",
"B1x0T9sPtS",
"Skg1vbiHtH",
"HyxM3_4yKS",
"BJlZADEp_H"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798743333,
1573855502263,
1573855129784,
1573854441687,
1573851996756,
1573851031076,
1573793991427,
1573233921432,
1572813635533,
1571838870786,
1571694660372,
1571433158504,
1571299671343,
1570879657753,
1570748361049
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2212/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2212/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2212/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2212/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2212/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2212/AnonReviewer2"
],
[
"~Wenjie_Li3"
],
[
"ICLR.cc/2020/Conference/Paper2212/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2212/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2212/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2212/AnonReviewer2"
],
[
"~Hongyi_Cui1"
],
[
"ICLR.cc/2020/Conference/Paper2212/Authors"
],
[
"~Hao_Zhang1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a variant of Sparse Transformer where only top K activations are kept in the softmax. The resulting transformer model is applied to NMT, image caption generation and language modeling, where it outperformed a vanilla Transformer.\\n\\nWhile the proposed idea is simple, easy to implement, and it does not add additional computational or memory cost, the reviewers raised several concerns in the discussion phase, including: several baselines missing from the tables; incomplete experimental details; incorrect/misleading selection of best performing model in tables of results (e.g. In Table 1, the authors boldface their results on En-De (29.4) and De-En (35.6) but in fact, the best performance on these is achieved by competing models, respectively 29.7 and 35.7. The caption claims their model \\\"achieves the state-of-the-art performances in En-Vi and De-En\\\" but this is not true for De-En (albeit by 0.1). In Table 3, they boldface their result of 1.05 but the best result is 1.02; the text says their model beats the Transf-XL \\\"with an advantage\\\" (of 0.01) but do not point out that the advantage of Adaptive-span over their model is 3 times as large (0.03)).\\n\\nThis prevents me from recommending acceptance of this paper in its current form. I strongly encourage the authors to address these concerns in a future submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Sorry for the late reply and updates\", \"comment\": \"The response and paper updates may answer some of your concerns.\"}",
"{\"title\": \"Response to reviewer3\", \"comment\": \"Thank you for your valuable comments. We have empirically addressed your concerns about the optimal choice of k and the comparisons to the previous sparse attention methods in the updates.\\n\\nAs you said, our approach is simple, so the Explicit Sparse Transformer is significantly faster in both inference and training than the previous methods of sparse attention in Transformer.\\n\\nFor open-sourcing, we provide a simple implementation of the method in the Appendix, and we will publicly release all the code and training instructions in the near future to help replicate this work.\\n\\nFor sparse attention methods of local attention(OpenAI\\u2019s sparse transfomers, adaptive span), these methods directly ignore long-distance dependence, and they mainly work for language models but have not been proved effective on standard transformers. Therefore, we did not compare them in the experiment, but we take the variants of sparsemax into consideration because they have demonstrate improvement in standard transformer.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your detailed and helpful reviewers.\\nWe have updated the pdf.\", \"questions_about_the_novelty_and_significance\": \"In the paper Explicit sparse Transformer, the proposed method is straightforward, simple, and easy to implement. \\n\\nWe invested a lot of time in the study of sparse transformers. In January of this year, we submitted a model to SQuAD in the name of Sparse Transformer (but it doesn't work because we do not apply the mothod to the pretrain phase at the time). We also thought about other complicated methods of sparsee attention, but the current method is simple and effective.\\n\\nAlthough several sparse attention methods have been applied to the sequence-to-sequence transformer model and improve the performance, detailed comparisons between the proposal and their methods based on strong baseline show that our methods are much faster in training and testing and achieve slightly better results\\n\\nQuestion about the number of experiments \\nIn the current version of the paper, for all methods on IWSLT datasets, we have experimented under three different initializations, and reported the highest results. Because of resource limits, we didn\\u2019t do this on other data sets.\", \"question_about_the_alignment_of_transformer\": \"We found that the randomly selected samples of the last two layers of transformer would cause excessive attention to the end token.\", \"question_about_the_redundancy\": \"We have moved the review of standard transformer into Appendix and it may help newcomers.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"Thanks for your careful reviews, analysis of the value of k, comparisons between previous sparse attention methods, and missing reference are all included in the newer version of the paper. The answers to the remaining questions are as follows:\", \"q\": \"Where the numbers in Figure 1 come from? Is it a single attention head or average of all?\", \"a\": \"For the sake of simplicity, we show the attention score of first head.\"}",
"{\"title\": \"General Response and we have updated the pdf to address most of the reviewers' questions.\", \"comment\": \"We not only analyze the value of k but also compare our methods with previous sparse attention methods in transformer in the revision. In these new experiments, we performed experiments for each method under three different initializations. Our method has achieved slightly better results than the previous sparse attention method\\uff0c but the inference and training speed are much faster than the previous methods. For example, in the transformer model, our method is twice as fast as the sparsemax during the inference. We empirically tried different values of k on the valid set of two translation datasets. As the value of k increases, the BLEU scores rises first and then falls, and the optimal k is around 8.\"}",
"{\"title\": \"AnonReviewer2 Response\", \"comment\": \"Having read the other reviews and the limited discussion, I feel that the summary of the reasons given for the rating in my original review still stands, so my rating remains unchanged, 1: Reject. The other reviews and the responses to them reinforced my original decision to reject the paper.\"}",
"{\"title\": \"question about Top K operation.\", \"comment\": \"It's very interesting that the top k self-attention performs such well in those tasks listed in paper. since the top-k operation is not differentiable, the implementation/approximation method plays an important role.\\ndo you plan to publish your implementation code of top-k attention?\"}",
"{\"title\": \"Results of sparsemax, entmax-1.5, entmax-alpha\", \"comment\": \"Hi\\uff0cwe test the above 3 sparsemax variants on fairseq platform, envi and deen translation datasets woth single 1080ti and FP32 training. For speed, sparse transformer is 160k tokens per second, entmax-1.5 is 150k, sparsemax is 140+k and entmax-alpha trains with only 80k tokens per second.\\n\\nFor results, sparsemax converges much slowly and get much worse results. Entmax-1.5 get 0.1 BLEU scores better than the implemented transformer baseline on both two datasets, and entmax-alpha get 0.2 BLEU scores worse than the baseline, if I use the entmax correctly.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes \\\"sparse self-attention\\\", where only top K activations are kept in the softmax. The resulting transformer model is applied to NMT, image caption generation and language modeling, where it outperformed a vanilla Transformer model.\\n\\nIn general, the idea is quite simple and easy to implement. It doesn't add any computational or memory cost. The paper is well written and easy to read. The diverse experimental results show that it brings an improvement. And I think this can be combined with other improvements of Transformer.\\n\\nHowever, there are quite many baselines are missing from the tables. The sota on De-En is actually 35.7 by Fonollosa et.al. On enwik8, Transformer XL is not the best medium sized model as the authors claimed. See below:\", \"ntm_en_de\": [\"Wu et.al. Pay Less Attention with Lightweight and Dynamic Convolutions, 2019\", \"Ott et.al. Scaling Neural Machine Translation, 2018\"], \"ntm_en_vi\": [\"Wang et.al. SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine Translation, 2018\"], \"ntm_de_en\": [\"Wu et.al. Pay Less Attention with Lightweight and Dynamic Convolutions, 2019\", \"Fonollosa et.al. Joint Source-Target Self Attention with Locality Constraints, 2019\", \"He et.al. Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation, 2018\"], \"lm_enwik8\": [\"Sukhbaatar et.al, Adaptive Attention Span in Transformers, 2019\"], \"other_comments\": [\"More experimental details are needed. What is the value K? How different K values affect performance? What is the number of parameters of NMT models.\", \"The claim \\\"top layer of the vanilla Transformer focuses on the end position of the text\\\" can't be true generally. Probably only true for a certain task.\", \"Where the numbers in Figure 1 come from? Is it a single attention head or average of all?\", \"Page 4, \\\"the high are ...\\\" probably typo?\", \"The related work is missing \\\"Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes\\\" by Rae et.al., which also uses sparse attention.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"1. What is the specific question/problem tackled by the paper?\\n\\nThe authors tackle the problem of sparse attention for various generative modeling tasks such as machine translation and image captioning. The main motivation behind studying this problem is the premise that sparse varieties of attention might generalize better than full attention. The authors propose a sparse attention mechanism based on the top-k selection where all attention values in a row are dropped if they are not higher than the k^{th} largest item in the row. Since this is a non-differentiable operation the authors propose to train this model by setting the gradients of the non-selected items to 0. The authors report results on machine translation, language modeling and image captioning.\\n\\n2. Is the approach well motivated, including being well-placed in the literature?\\n\\nIn my view the main reasons to study sparse variants of attention are either 1) scale to sequences longer than are possible with full attention (this is e.g., the motivation behind [1]) or 2) generalize better than full attention. The motivation of this work seems to be the latter as the authors claim improvements in terms of performance over full attention. The authors cite prior work on sparse attention mechanisms.\\n\\n3. Does the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous.\\n\\nThe authors report good results on machine translation, showing that their sparse attention method improves performance on En-De to 29.4 BLEU, on De-En to 35.6 BLEU and on En-Vi to 31.1 BLEU, improving on full attention baselines. However, the authors have not submitted code for reproducing their results. The authors also do not report what choice of k is used for the top-k operation and how they made their choice of the optimal k? The paper would be well served by more ablation experiments demonstrating what the impact the choice of k has on the model performance. For example, I would expect to be able to reproduce original Transformer results using k = maximum sequence length. \\n\\nI am also not fully clear about how gradients are propagated through the top-k operation. It seems that if an index is not selected (i.e. it's attention value is smaller than top-k) it's gradient is set to 0. However, this seems problematic - for e.g., in the initial stages an important item might have a low attention value due to random initialization and might not make it to the top-k. Because of the way gradients are propagated it will not receive any gradient, and therefore will not be incentivized to increase its value. This doesn't seem like a good solution to me.\\n\\nSince the paper is mainly an empirical work, it would be improved by open-sourcing anonymized code so that it's results and claims may be verified. It would also be improved in more ablation experiments or explanations in what the optimal choice of k should be for the top-k and how that affects the results. \\n\\n[1] Generating Long Sequences with Sparse Transformers by Child et al (https://arxiv.org/abs/1904.10509)\"}",
"{\"rating\": \"Reject\\n\\nREASONS FOR RATING (SUMMARY). The innovativeness seems low given the several previous proposals for sparse attention, the results are not dramatic enough to compensate for the lack of originality, and the comparison to other models is wanting.\\n\\nREVIEW\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"CONTRIBUTIONS:\\nC1. Sparse Transformer: A modification of the Transformer, limiting attention to the top-k locations. (That is a complete statement of the proposed model.)\\nC2. Experiments showing that, quantitatively, the Sparse Transformer out-performs the standard Transformer on translation, language modeling, and image captioning.\\nC3. Experiments showing that, qualitatively, in translation, when generating a target word, the Sparse Transformer better focusses attention on the aligned source word\", \"strengths\": \"The paper is clearly written. The question of whether the Transformer\\u2019s attention is too diffuse is of interest. The proposal is admirably simple. The quantitative metrics include comparison against many alternative models.\", \"weaknesses\": \"A primary area of deficiency concerns the relation of the proposed model to other proposals for sparse attention: the authors cite 5 of them (and 2 more are cited in the comment by Cui). The paper should clearly identify the differences between the proposed model and earlier models: it does not discuss this at all. The deficiencies in these previous models should be clearly stated and demonstrated: they are only described as \\u201ceither restricted range of attention or training difficulty\\u201d (Sec 6). A rationale for why the proposal can be expected to remedy these deficiencies should be stated clearly: it is not stated at all. Experimental demonstration that the proposed innovation actually remedies the identified deficiencies should be provided, but is not.\\n\\nA proposal to use a top-k filter immediately raises the question of the value of k. This is not discussed at all. In particular, no empirical results are given concerning the sensitivity of the reported successes to choosing the correct value for k. We are only told that \\u201ck is usually a small number such as 5 or 10\\u201d (Sec 3). The experimental details in the appendix do not even state the value of k used in the models reported.\\n\\nIt is an interesting discovery that in the translation task, attention at the top layer of the standard Transformer is strongly focused on the end of the input. This is described as an \\u201cobvious problem\\u201d (Sec 7). But it can\\u2019t obviously be a problem because the performance of the standard Transformer is only very slightly lower than that of the Sparse Transformer: if anything is obvious, it is that processing in the standard Transformer packs a lot of information into its final encoding of the end of the input string, which functions rather like an encoding of the entire sentence.\\n\\nPresumably, the experimental results reported are those from a single model, since we are not told otherwise. There should be multiple tests of the models with different random initializations, with the means and variances of measures reported. 
It is possible, however, that limitations of computational resources made that infeasible, although the Appendix seems to indicate that no hyperparameter tuning was done, which greatly reduces computational cost.\\n\\nCOMMENTS FOR IMPROVEMENT, NOT RELEVANT TO RATING DECISION\\n\\nAlthough the tiny sample of visualized attention weights provided is useful, a large-scale quantitative assessment of a main claim concerning translation might well be possible: that attention is in fact concentrated on the aligned word might be testable using an aligned bilingual corpus or perhaps an existing forced aligner could be used.\", \"much_space_could_be_saved\": \"it is not necessary to review the standard Transformer, and the modification proposed is so simple that it can be precisely stated in one sentence (see C1 above): the entire page taken up by Sec. 3 is unnecessary, as it adds only implementation details.\\n\\nErrors that took more than a moment to mentally correct, all on p. 12:\\n\\nThe definition of the BPC should be E[log P(x(t+1) | h(t))]: all parentheses are missing\\n\\u201cregrad\\u201d should be \\u201cregard\\u201d\\n\\u201cderivative\\u201d should be \\u201cdifferentiable\\u201d in the final sentence\"}",
"{\"comment\": \"Thanks for the impressive work. Top-k method seems to be very effective and easy to implement.\\nHave you compared with some related work with sparsemax function such as [1, 2] ? Concentrate attention berfor softmax or during softmax, which one would be better\\uff1f\\n\\n[1]Chaitanya Malaviya, Pedro Ferreira, Andr\\u00e9 F. T. Martins. Sparse and Constrained Attention for Neural Machine Translation. ACL 2018\\n[2]Gon\\u00e7alo M. Correia, Vlad Niculae, Andr\\u00e9 F.T. Martins. Adaptively Sparse Transformers. EMNLP 2019\", \"title\": \"Comparision with some related work.\"}",
"{\"comment\": \"To Q1: Since we apply teacher forcing, we feed the shift right ground truth in the training phase or the generated words in the valid or test phase into the decoder, and decoding states s means the c representation\", \"to_q2\": \"We use pertained ResNet to exact feature maps, and then feed these feature maps into the encoder and then formulate it as a sequence-to-sequence task.\", \"to_q3\": \"Thanks for your advice. The image caption is similar to machine translation, and due to length limit, we do not include its visualization yet.\", \"title\": \"Answer your questions\"}",
"{\"comment\": \"I have some questions about your work:\\n\\n1) You said your proposed sparse attention can extend to context attention. You use W_Qs to replace Q. Question: what does decoding states s means? could you give some examples? \\n\\n2) How to perform image caption by Transformer? Maybe you can give clearly illustration about that?\\n\\n3) Maybe you can give some visualization of image caption results?\", \"title\": \"Questions about your paper\"}"
]
} |
SklSQgHFDS | Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration | [
"Jingwei Zhang",
"Niklas Wetzel",
"Nicolai Dorka",
"Joschka Boedecker",
"Wolfram Burgard"
] | Exploration in sparse reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly these signals are added as bonus rewards, which results in a mixture policy that neither conducts exploration nor task fulfillment resolutely.
In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives to accelerate exploration and stabilize learning. Moreover, we introduce a new type of intrinsic reward denoted as successor feature control (SFC), which is general and not task-specific. It takes into account statistics over complete trajectories and thus differs from previous methods that only use local information to evaluate intrinsic motivation. We evaluate our proposed scheduled intrinsic drive (SID) agent using three different environments with pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite. The results show a substantially improved exploration efficiency with SFC and the hierarchical usage of the intrinsic drives. A video of our experimental results can be found at https://gofile.io/?c=HpEwTd. | [
"Reinforcement Learning",
"Exploration",
"Intrinsic Motivation",
"Sparse Rewards"
] | Reject | https://openreview.net/pdf?id=SklSQgHFDS | https://openreview.net/forum?id=SklSQgHFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"TEIHULnYrd",
"r1ecSRLOiB",
"rylhp2IOjr",
"BkesHtI_iB",
"B1l-41IuoH",
"HyewqIr_iH",
"S1lM84BuoH",
"Hkx7NQrOsS",
"B1xGsCVdjH",
"BkeuWsu0qS",
"BkxkKNOh9H",
"BJev5lEbcS",
"Syga7o0atr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743303,
1573576257984,
1573575876272,
1573574979249,
1573572393023,
1573570190632,
1573569609769,
1573569322597,
1573568153590,
1572928255916,
1572795510865,
1572057230918,
1571838757202
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2210/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2210/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2210/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2210/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper presents a method for intrinsically motivated exploration using successor features by interleaving the exploration task with intrinsic rewards and extrinsic task original external rewards. In addition, the paper proposes \\\"successor feature control\\\" (distance between consecutive successor features) as an intrinsic reward. The proposed method is interesting and it can potentially address the limitation of existing exploration methods based on intrinsic motivation. In experimental results, the method is evaluated on navigation tasks using Vizdoom and DeepMind Lab, as well as continuous control tasks of Cartpole in the DeepMind control suite, with promising results.\\n\\nOn the negative side, there are some domain-specific properties (e.g., moderate map size with relatively simple structures, different rooms having visually distinct patterns, bottleneck states generally leading to better rewards, etc.) that make the proposed method work well. In addition, off-policy learning of the successor features could be a potential technical issue. Finally, the proposed method is not evaluated against stronger baselines on harder exploration tasks (such as Atari Montezuma's revenge, etc.), thus the addition of such results would make the paper more convincing. In the current form, the paper seems to need more work to be acceptable for ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer the effort spent on reviewing our paper and the constructive remarks. We highly appreciated the positive feedback.\\n\\n$\\\\bullet$ Comment:\\n\\u201cWould be nice to see a wider variety of evaluation domains, for instance Montezuma's Revenge\\u201d\", \"response\": \"We agree that usually the standard deviation should be plotted. However in the paper we choose to show the curves of each individual runs in the Appendix G instead of showing the standard deviation, since plotting the standard deviation makes our figures harder to interpret. Previous works [3] also chose to plot each of the individual runs instead of showing the standard deviation.\\n\\nThe reason is that in a terminal reward setting each run results in a total reward of 0 or 1. Using the sample variance leads to unintuitive results such as overconfident error predictions if all or no runs converged or underconfident predictions which can span negative values, even though the environment just provides positive reward.\\n\\n\\n[1] Marlos C Machado, Marc G Bellemare, and Michael Bowling. Count-based exploration with the successor representation. arXiv preprint arXiv:1807.11622, 2018.\\n[2] Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181\\u2013 211, 1999.\\n[3] Nikolay Savinov, Anton Raichuk, Rapha \\u0308el Marinier, Damien Vincent, Marc Pollefeys, Timo-thy Lillicrap, and Sylvain Gelly.Episodic curiosity through reachability.arXiv preprintarXiv:1810.02274, 2018.\", \"a_plot_of_the_result_can_be_found_at_https\": \"//gofile.io/?c=0XM4LG .\\nBecause of the limited time we can not give conclusive results and only tested our agent using SID and SFC against a pure extrinsic reward agent. We do not claim that our method achieves state-of-the-art results on Montezuma but the results demonstrate that our method also helps in an environment that is very different to the environments shown in the main paper.\\nThe base RL algorithm was again Ape-X with 8 actors with a replay memory of 500k, run for 400k gradient updates which corresponds roughly to 200 million frames.\\n\\n$\\\\bullet$ Comparison to approaches that do take into account trajectory-level information\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for the insightful comments and suggestions, we highly appreciate the positive feedback.\\n\\n$\\\\bullet$ Comment:\\n\\u201cSFC as intrinsic reward and the study comparing this with other intrinsic signals (ICM/RND). The more interesting study might in the appendix though (Appendix A). I would suggest moving that into the main paper, as it nice separate the influence of scheduling component and the 'quality' of the proposed intrinsic reward.\\u201d\", \"response\": \"We thank the reviewer for the hint. At the current stage we are unable to edit the abstract in openreview. The correct url is https://gofile.io/?c=HpEwTd and we are sorry for the inconvenience.\\n\\n\\n[1] Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A Efros. Large-scale study of curiosity-driven learning.arXiv preprint arXiv:1808.04355, 2018a.\"}",
"{\"title\": \"Response to Reviewer #2 (1/2)\", \"comment\": \"We thank the reviewer for the detailed comments and the effort spent on reviewing our paper. Below we address the concerns raised by the reviewer point by point.\\n\\n$\\\\bullet$ Differences to [1],[2]\", \"response\": \"The reviewer is correct that the replay buffer stores experiences generated from the behavior policy, which is a mixture of exploration and exploitation experiences, from different points during training. The SF is trained using samples from this replay buffer, which contains a mixture of the agent's past trajectories. But we note that training SF using this mixture of experiences is not a technical flaw, but it is particularly intended to do so. When learning on this mixture of policies, the SF estimates the state visitation distributions of its past trajectories in the replay buffer. We also note older trajectories are continuously replaced by more recent ones. Therefore the influence of older behavior policies are gradually washed out from the SF. Then the learned SF will be a correct indicator of how often it has explored certain regions relatively recently. In that sense the SF are trained \\u201con-policy\\u201d with respect to this mixture of policies.\"}",
"{\"title\": \"Response to Reviewer #4: Justification of Method\", \"comment\": \"We thank the reviewer for the remarks and detailed suggestions for improvement.\\n\\n$\\\\bullet$ Comment:\\nThe ablation study (appendix 1) is interesting and very important. The \\\"Ours\\\" algorithm in the main text is actually a combination of SFC and SID, so the comparison shown in this ablation study could be a main result.\", \"response\": \"In our preliminary experiments we found that the performance of training the feature embedding $\\\\theta_\\\\phi$ is roughly comparable as keeping the randomly initialized features fixed. Our rationale for fixing the features is as follows:\\nAs shown in [1], fixing randomly initialized features gives comparable results as with training features for ICM. From our limited experience with the commonly adopted way of training the SF with an autoencoder structure, we experienced that the interleave between training the two objectives (learning the features and rewards, learning the SF) is relatively sensitive to hyperparameter settings. Due to those reasons we wanted to investigate the possibility of minimizing the procedure and the computation budget required for learning the SF, just as [1] investigated the possibility of using a fixed feature extractor. As the experiments suggested, this setup can serve our method for the experiments considered, as although randomly initialized, the CNN structure is already a good prior for feature extractors at least in the visual input domain.\\nFor environments with the same textures everywhere, we suspect that training the features for SF would also not necessarily help, as with the same input the feature embedding would give one embedding no matter trained or fixed.\\n\\n\\n[1] Large-scale study of curiosity-driven learning, Burda et al., 2018a\"}",
"{\"title\": \"Response to Reviewer #2 (2/2)\", \"comment\": \"$\\\\bullet$ Evaluation on common benchmarks\", \"response\": \"We conducted a small experiment on my_way_home with different choices for the number of switches and added the results to the appendix (Appendix B). The results show that our method is not very sensitive about the exact number of switches, however switching just once per episode performed worse.\\n\\n[1] Beyer, Lucas, et al. \\\"MULEX: Disentangling Exploitation from Exploration in Deep RL.\\\" arXiv preprint arXiv:1907.00868 (2019).\\n[2] C\\u00e9dric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms. In Proceedings of the International \\n[3] Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A Efros.Large-scale study of curiosity-driven learning.arXiv preprint arXiv:1808.04355, 2018a.\\n[4] Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van deWiele, Volodymyr Mnih, Nicolas Heess, and Jost Tobias Springenberg. Learning by playing-solving sparse reward tasks from scratch.arXiv preprint arXiv:1802.10567, 2018\\n[5] Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random networkdistillation.arXiv preprint arXiv:1810.12894, 2018b.\\n[6] Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. InInternational Conference on Machine Learning (ICML), 2017.\"}",
"{\"title\": \"Response to Reviewer #4: Environment Choice\", \"comment\": \"$\\\\bullet$ Comment: Environments with distinctive appearance\", \"response\": \"In many continuous control tasks discretization is avoided because the search space blows up exponentially with the number of action dimensions (curse of dimensionality). In this specific task it is not the case (action dimension = 1) and it can be solved even with a discretized action space. We note that we used one of the simplest discretization procedures (linear spacing) and this same action space is not just used for our method but for all baselines, too.\\nWe test our method in the cartpole environment, because it is a sparse reward task but differs much from our previous tasks in the sense that it has no obvious bottleneck states and that observations are pretty similar across the whole state space, and the fact that it is third-person view instead of first-person. The only reason we discretized the state space was due to the use of APE-X DQN as our backbone, we did not want to prove properties of our method specific to continuous control as that is not the focus of our paper.\"}",
"{\"title\": \"Response to Reviewer #4: Justification of Method (SFC)\", \"comment\": \"$\\\\bullet$ Comment:\\n\\u201cIn the description of methods, it should be clearly noted that what is the underlying policy being learned for deriving successor features (i.e. in Eq.3 and 4) --- for example, is it a behavioral policy (which is a mixture of two policies) induced by scheduled drive?\\u201d\", \"response\": \"We agree that the formulation can be misleading and have revised it.\", \"to_give_an_intuitive_explanation\": \"The SF represent a state by a function of its successor states. So for states that lie within a well connected region in the state space (e.g. a room), their future most likely look quite similar (there is a high chance that their consecutive states lie in the same room). This means that the SFC reward for these states is low. On the other hand the future trajectory of state at doors/exits are sensitive to the choice of the next action, because it determines whether the agent ends up in a different room or not. Therefore these states corresponds to a high SFC reward.\\n\\nFig. 1 b) of the paper shows that this intuition is correct at least in an established bottleneck toy problem. It is true that the reward here depends on the policy the SF were learned from (random agent here). But as long as the SF are learned under a sufficiently stochastic policy is expected to have similar properties. This stochasticity is well satisfied under the SID framework, as whenever the extrinsic policy is scheduled, the agent acts pseudo randomly before receiving external reward.\\n\\nWe also note that the SFC reward was inspired by the definition of bottlenecks, but is not limited to environments with structural bottlenecks. By definition the SF captures the transition dynamics and the dynamics induced by the policy. This means that in addition to the environmental bottlenecks (induced by the transition dynamics of the environment), SFC is also capable to capture \\u201cperceived bottlenecks\\u201d (induced by the policy: ). This means, also as shown by our cartpole experiment, that SFC is still able to bring performance gain in environments without apparent environmental bottlenecks, since detecting \\u201cperceived bottlenecks\\u201d (where the agent also receives high SFC reward) induced by the behavior policy can help to avoid well explored regions of the state space.\\n\\n$\\\\bullet$ Comment:\\n\\u201cIn SID, a policy for extrinsic rewards and another policy for intrinsic one are learned. But in case the extrinsic reward was never received (in case of terminal-reward environments), the former policy would behave no different than random policies. Is this interpretation correct? \\u201c\"}",
"{\"title\": \"Overview of revisions in the updated draft\", \"comment\": \"We thank all the reviewers for their time and effort in reviewing our paper. We appreciate the positive feedback on our proposed method, we also appreciate the constructive feedback on how to further improve our paper with the detailed reviews. We uploaded a revised version incorporating the suggestions of reviewers, we list the main revisions below.\\n\\n$\\\\bullet$ As suggested by several reviewers, we incorporated contents from Appendix A, where ablation studies are presented, to the main paper.\\n$\\\\bullet$ We conducted an additional experiment to examine how the number of switches per episode in SID affects the performance and report the results in the Appendix.\\n$\\\\bullet$ In the previous version, we missed a reference to the Appendix F where we presented several high-level schedulers that we had investigated. We added the missing reference. Also we included more detailed discussion about the scheduler.\\n$\\\\bullet$ We fixed typos and clarified potentially misleading statements.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"## Summary\\n\\nThis paper proposes a novel intrinsic reward for exploration called SFC (successor feature control), to deal with sparse-reward and hard-exploration task. The main idea of SFC is to provide an agent with intrinsic reward defined to be the L2 distance between the successor features of two consecutive states (Equation 4). An underlying motivation of SFC exploration is that high SFC would encourage the agent to enter the \\\"bottleneck states\\\" and therefore helps to explore the entire state space. \\n\\nAnother line of contribution is SID (scheduled intrinsic drive), where a scheduler is used to determine which of the two separate policies (one for extrinsic reward and intrinsic reward) is chosen, with a fixed probability in their implementation, and executed for the next rollout of experience. It has an effect of longer-term exploration and prevents the agent from collapsing to a local-optimum behavior. \\n\\nEmpirically, the SFC+SID algorithm is evaluated on custom sparse-reward navigation-type environments such as VizDoom and DeepMind Lab (as well as a simple pixel-based continuous control), and outperforms other intrinsically motivated RL algorithms including RND and ICM.\\n\\n\\n\\n## Overall Assessment\\n\\nOverall, I like the idea of this paper and finds it very interesting and promising, but feel it would be on the borderline or slightly below the bar.\\n\\nThis paper studies a very interesting and novel approach of leveraging successor features for exploration. Successor features are a promising way of learning dynamics-related, task-agnostic representation for RL, which can provide a temporally extended exploration signal. The resulting method presents an improvement over existing intrinsic-reward exploration algorithms.\\n\\nHowever, I think there are some weaknesses of the paper that would put the paper slightly below the acceptance threshold: empirically the environments are not diverse enough, and they have an implicit structure assumed and favorable for the proposed method --- there remains a question whether the method is general and not task-specific. I also think there are some misleading overclaims. Please see the detailed comment below.\\n\\n\\n\\n## Detailed Comments\\n\\n**[Problem Motivation and Significance]**\\nThis paper address an important, long-standing problem in RL of efficient exploration under sparse-reward environments.\", \"a_minor_comment\": \"in the introduction, it is said that \\\"terminal reward RL settings\\\" are considered (the reward is given when the goal is achieved) --- which is an extreme case of sparse-reward RL problems --- but in the experiments non-terminal reward environments are studied, e.g. \\\"AppleDistractions\\\" where each of apples yields +0.05 reward. I think the overall claim could be a bit toned down to, for instance, dealing with sparse-reward environments.\\n\\n**[Clarity]**\\nOverall the paper is clearly written and easy-to-follow. Descriptions of implementation details are well provided. However, there are some parts that can be better clarified and improved more. 
Please see more detailed comments below.\\n\\n**[Justification of Method (SFC & SID)]** \\n\\nThe choice of successor features for driving a novelty-like intrinsic reward signal seems well-motivated. This is because learning of successor features is task-agnostic and only related to multi-step dynamics (though SF is with respect to \\\"a policy\\\"), which gives a good representation that captures topological characteristics of the environment. The way the intrinsic reward is derived, the squared distance of SFs of $S_t$ and $S_{t+1}$, basically encourages the agent to visit and go across the \\\"bottleneck\\\" state. \\n\\nIn the description of methods, it should be clearly noted what the underlying policy being learned for deriving successor features is (i.e. $\\\\pi$ in Eq.3 and 4) --- for example, is it a behavioral policy (which is a mixture of two policies) induced by scheduled drive? Another comment related to this is about \\\"SFC captures statistics over the full distribution of policies that have been followed, ...\\\" (section 2), which sounds a bit overclaiming to me. Please note that a SF is with respect to a specific policy (e.g. behavioral policy), from the expectation in the definition; I think the use of past experience for minimizing the TD error is basically for estimating the expectation term through approximation, so I am not sure that this claim is well-justified.\\n\\nIt is not very clear to me why the SFC reward agrees with bottleneck states. I don't think the explanation given in Section 3.2 is logically sufficient. Also, isn't it only true under a random exploration policy? How would you define the \\\"bottleneck states\\\" (e.g. Tomar et al. 2019) -- which could be helpful for making the main idea more understandable? Moreover, there was not enough explanation or reasoning about why SD (successor distance) is roughly the shortest path between the states.\\n\\nIn SID, a policy for extrinsic rewards and another policy for intrinsic one are learned. But in case the extrinsic reward was never received (in case of terminal-reward environments), the former policy would behave no differently than random policies. Is this interpretation correct?\\n\\nAlso, given the presented form of SID, it sounds a bit like overclaiming to say it is a hierarchical RL agent, since the scheduler just picks one of the policies with equal probability (rather than being learned) --- especially one policy would become a random exploration policy --- and there is no notion of abstraction or goal/options.\\n\\n\\n\\n**[Environment choice]**\\nI feel (1) that the environments being evaluated on are not diverse enough, and (2) that the environments in the experiments seem to exhibit specific properties that are favorable to the algorithm.\\n\\n(Bottleneck State) One implicit assumption is that the structure of navigation is chosen such that following bottleneck states would lead to an optimal trajectory. I agree that even on a maze like FlytrapEscape the navigation/exploration problem is not easy in the absence of rich reward signals, but this is exactly the sort of environment on which SFC can perform better, especially compared to RND/ICM which are not attracted by bottleneck states (Appendix A). It is good though, and could be beneficial in many cases with the presence of bottleneck states, but it seems general applicability is a little bit limited (not as much as claimed).\\n\\n(Distinctive appearance) Another assumption is about a choice of appearance.
One important thing to note about SF learning is that a feature for a state or transition (cumulant) is kept fixed after random initialization, rather than being learned as in (Machado et al. 2018; Kulkarni et al. 2016; Barreto et al. 2017). This is because this method does not need to do regression of the reward function. Then, the state-feature $\\\\phi(s)$ should be discriminative enough so that it can capture some topological and global characteristics of the state space. In general, this is not an easy problem (for first-person view POMDPs), but it seems on the environments (FlytrapEscape, AppleDistractions) it was possible because each room/sector has uniquely identifiable wall color and texture. I feel this is a somewhat strong assumption made to make SF work. Thus, \\\"We believe this is the first time that SF are shown to behave in a first-person view environment as one would expect from its definition\\\" would sound a bit overclaiming. Would this method work on more general environments that do not have this property --- specifically, what will happen if rooms are not distinguishable by color and texture (and the walls look similar)?\\n\\nControl from pixels (DM Cartpole) is an example of an environment that does not have these assumptions, but one downside is that the action space was simplified and discretized. Indeed, the improvement shown on Cartpole over ICM/RND is not substantial enough. To demonstrate that SFC+SID is \\\"generally useful\\\" as claimed in the paper, presenting benchmark results on standard discrete-action Atari environments, or on more diverse RL environments, would have greatly strengthened the paper and made it more convincing.\\n\\n\\n\\n**[Analysis of successor distance]**\\nFigure 11 (visualization of successor distance) is a great analysis, and I liked it. It clearly shows a smooth topology of the environment thanks to the temporally-extended representation that SF captures. I found that the difference heatmap is a little bit difficult to parse. Also, under which policy was the SF computed (I guess this is the behavior policy derived by SID; it should be clearly mentioned somewhere in the paper)? \\n\\n**[More minor comments about experiments]**\\n- Was the same K-step objective (e.g. K=5) used for all of SFC, ICM and RND? If so, what would the result look like when K=1?\\n- The ablation study (appendix 1) is interesting and very important. The \\\"Ours\\\" algorithm in the main text is actually a combination of SFC and SID, so the comparison shown in this ablation study could be a main result.\\n\\n\\n\\n## Feedback for Improvement\", \"more_related_work\": [\"Learning decomposed value functions for extrinsic and intrinsic rewards has been discussed in (Burda et al., 2018b), though in their work a single policy is being learned.\", \"[Comparison with Machado et al. 2018] It is discussed that (Machado et al. 2018: count-based exploration with SR) is very similar because of the use of SR/SF. The ways of deriving the intrinsic reward signal are indeed different, but it would be great to have a detailed discussion about how they are different or similar.\"], \"minor_comments\": [\"Citation needed on section 3.1 --- (Kulkarni et al. 2016 or Barreto et al. 2017)\", \"Please consider putting the environment name in the title of each learning curve.\", \"Typo: Therfore (right before section 3.2)\", \"Typo: temporarily -> temporally (introduction bullet point 2)\"]}",
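As an aside on the SF learning discussed in this review, a one-step TD sketch is shown below; the review mentions a K-step objective (e.g. K=5), and names such as `sf_net`, `target_sf_net`, and `phi` (the fixed, randomly initialized state embedding) are placeholders, not the paper's actual interface:

```python
import torch
import torch.nn.functional as F

def sf_td_loss(sf_net, target_sf_net, phi, obs, next_obs, gamma=0.99):
    # Successor features satisfy psi(s_t) = phi(s_t) + gamma * E[psi(s_{t+1})],
    # so psi is regressed toward a bootstrapped target built from replayed
    # transitions; the expectation is therefore estimated under the (mixed)
    # behavior policy that filled the buffer, as the review points out.
    psi = sf_net(obs)
    with torch.no_grad():
        target = phi(obs) + gamma * target_sf_net(next_obs)
    return F.mse_loss(psi, target)
```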
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: A very nice study on the benefits of successor feature control as an intrinsic drive for hard exploration problems. The work builds nicely on previous work on using SF for exploration and proposes using derived (reachability under $\\\\psi^{\\\\pi}$) distances as intrinsic motivation for an (purely) exploratory policy. This exploratory strategy will be used in conjunction with a policy trained on the extrinsic reward to gather (off-policy) data for both learning processes. The author proposed 'combining' these two policies via a simple scheduler, similar to (Riedmiller 2018).\\n\\nGood paper/addition to both the intrinsic motivation literature and SFs studies.\", \"positives\": \"1) I like the separation of concerns achieved by training separate policies train for the two reward signals (intrinsic and extrinsic).\\n2) SFC as intrinsic reward and the study comparing this with other intrinsic signals (ICM/RND). The more interesting study might in the appendix though (Appendix A). I would suggest moving that into the main paper, as it nice separate the influence of scheduling component and the 'quality' of the proposed intrinsic reward.\\n3) Carefully conducted study, with relevant ($\\\\epsilon$-SOTA) baselines and ablation studies.\", \"points_of_improvement_or_clarification\": \"1) The SFs and the derived reward were done based on random pseudo-rewards $\\\\phi$ (Pg 5, SF-Nets). It maybe worth exploring learning those to capture more interesting features of the task at hand, especially in situation were there is more signal in the extrinsic reward. Do the authors have a sense of how problematic changing this component throughout training would be? As this acts are a reward signal to the inference the intrinsic reward signal, which then trains the exploratory policy. Thus small changes in one, can have massive implications for the trained Q-net, $Q_{E}$.\\n2) It wasn't clear from the exposition which policy is used to train the SFs? The exploratory policy, the uniform random one or the behaviour policy (the combination between the two trained policies given by the Q-nets).\\n3) On the SID setup. Did you conduct any studies on M (the number of switches)? For instance, how does this compare with something like episode switching, which has been explored before?\\n4) There are a couple of observation/discussion claims that are not really substantiated (for instance, last paragraph in Sec. 3.2). The paper is fine content-wise, without them. I would strongly suggest either removing them, re-phasing them as hypothesis and/or back them by more evidence. \\n5) The link to the video (https://gofile.io/?c=HpEwTd.) doesn't work. Please update.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper tackles the problem of how to integrate intrinsic rewards most effectively, in the episodic sparse reward setting. It has two main technical contributions. The first is Scheduled Intrinsic Drive (SID), which trains two separate policies -- one for maximizing the extrinsic (i.e., task) reward and another for maximizing the intrinsic reward -- rather than a single policy that maximizes a weighted combination of both. This uses the same training setup as prior work, Scheduled Auxiliary Control (SAC), except here the extra policy is trained on intrinsic reward rather than an auxiliary task. The second contribution is Successor Feature Control (SFC), a novel approach for computing intrinsic rewards. For a given transition (s, a, s'), the intrinsic reward from SFC is the squared difference in successor features between states s and s'. Since successor features encompass a notion of which kinds of states the agent will encounter in the future after starting from the current state, this type of intrinsic reward is more far-sighted than most state-of-the-art approaches. Empirical analysis shows that SFC leads agents to explore bottleneck states, which is especially helpful for solving navigation tasks.\\n\\nThis paper is well-motivated and clearly written. The experimental evaluation of this paper is thorough, comparing SID to adding extrinsic and intrinsic reward together, and comparing SFC to two recent approaches for generating intrinsic rewards, ICM and RND. The appendix does a good job of providing implementation details for reproducibility, in particular regarding reward normalization and the variation of prioritized experience replay. I also greatly appreciate that design decisions are justified, for instance that the choice of using a random scheduler was made because it outperformed several versions of a learned scheduler.\\n\\nMy only concerns with the paper have to do with evaluation. SFC is compared to prior approaches for computing intrinsic rewards that only take into account transition-level information, whereas SFC takes into account trajectory-level information, and naturally performs better. But there are also recent approaches that do take into account trajectory-level information in different ways, e.g. Savinov et al. (2018). SFC should also be compared to approaches in this category.\\n\\nI would also like to see an analysis of the failure cases that SFC is vulnerable to. Currently the evaluation domains used, with the exception of cartpole, are all tasks involving first-person navigation. So I wonder whether SFC is most effective (compared to existing approaches) on primarily these tasks in this domain, that are partially observable. It would be nice to see a wider variety of evaluation domains, for instance Montezuma's Revenge, which is frequently used to evaluate algorithms for computing intrinsic rewards, as well as other methods for improving exploration of RL agents. It would be neat if agents trained using SFC are better able to navigate through the doors in this game, since that seems to be a clear example of bottlenecks.\\n\\nMinor questions / comments:\\n- In Figure 1b, why are the values on the four bottlenecks not all exactly the same? 
The maze is symmetric, so I would expect them to be equal.\\n- The plots in Figures 3 through 6 should show the standard deviation.\", \"typos\": [\"Page 2, \\\"inexplicitely\\\" --> \\\"implicitly\\\"\", \"Page 4, \\\"temporarily\\\" --> \\\"temporally\\\"\", \"Page 8, \\\"carpole\\\" \\u2014> \\\"cartpole\\\"\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n\\nThis paper proposes the use of a controller that selects whether to act according to a policy trained to maximize an intrinsic reward or a different policy trained to maximize the extrinsic reward of a task. The two policies are trained jointly and off-policy. However, the controller is not trained for their experiments and instead randomly (with equal probability) picks one of the two policies every N steps (with N fixed). They also introduce a new kind of . intrinsic reward based on successor features, that is supposed to capture trajectory statistics for a fixed policy. They name their method scheduled intrinsic drive (SID).\", \"main_comments\": \"While this paper proposes some interesting ideas, I am concerned about the soundness of the method, some of the precise implementation details and I believe the empirical evaluation could be greatly improved. \\n\\nOne of my main concerns is the soundness of using the SFC as intrinsic reward while training off-policy. SFs are defined for a fixed policy so they capture statistics of future states if that policy is being used for control. Can you provide more explanation for why the SFC should still be a useful signal for the agent in the case in which it will follow a very different policy (which seems likely given that the replay buffer not only contains a mix of the exploration and exploitation policies, but also policies at different points during training with potentially very different state visitation distributions). Are the SFs trained with data from both the exploration and the exploitation policy? How can we expect the SFC to have useful signal since it is trained using such a wide range of policies?\\n\\nIs there any guarantee that the arbitrary feature embeddings (which are not learned in your experiments if I understood correctly) and thus the successor features (SFs) will contain meaningful information about the kinds of states a policy will visit in the future? An ablation using hand-designed feature embeddings that contain relevant information about the state (i.e. in a gridworld) might be useful to understand how it compares to a randomly initialized network, which is what you used for the state embeddings as I understand it.\\n\\nI am also concerned by the novelty of this work and the fact that it is missing references and discussion to prior work that proposes very similar ideas. For example, [1] proposed the optimization of different losses at the same time: one for exploitation and one or more for exploration. Can you please discuss what is the difference between your method and theirs (other than the intrinsic reward used for the exploration policy)? Similarly, [2] attempts to decouple exploration and exploitation in RL. This reference (and perhaps others that I have missed) should be included and discussed in the paper.\\n\\nThe empirical validation is missing important statistics such as variance across runs. Experiments on AppleDistractions and Cartpole only have 3 random seeds which I do not think is enough for drawing conclusions confidently. Moreover, on the simpler and standard tasks, SID does not seem to be significantly better than other baselines. It is only on carefully designed tasks (e.g. 
FlytrapEscape or AppleDistractions), which are not regularly used as benchmarks, that the method seems to perform better. \\n\\nThe experiments section could be improved by including other (more powerful) baselines such as count/pseudocount exploration methods, which have been shown to be more effective than ICM / RND for certain benchmarks, the paper using an intrinsic reward based on successor representations [3], or even Go-Explore [4], which is specifically designed to deal with distractor objects for the AppleDistractions task. Additionally, evaluating SID on harder exploration tasks that are generally considered to be good benchmarks by the community (e.g. Montezuma's Revenge, Pitfall, sparser versions of DoomMyWayHome, etc.) would also strengthen the experimental section.\\n\\nOther Questions / Comments:\\n\\n1. There is no measure of the variance / standard deviation across the random seeds in any of the plots. I find it necessary for this to be included in the plots, along with the mean across runs. \\n\\n2. What is the reasoning behind using the number of updates (instead of e.g. number of frames / steps / episodes) in the plots? How exactly do you measure the number of updates that appears in the plots? Is that the total number of updates used for the control policy, the exploration policy, and the successor features, or is it only the number of updates used for the control policy?\\n\\n3. I find the use of the term \\\"hierarchical\\\" in the title and throughout the paper to be misleading since this term is usually used with a different meaning in the RL literature (i.e. to refer to options/subpolicies that a higher-level policy might choose to pursue at a given time). In your case, the control policy is one of the subpolicies and the other subpolicy is only used for exploration.\\n\\n4. The paper also contains claims which I find unsubstantiated by the results / analytical formulation such as: \\\" our proposed SFC reward implicitly captures statistics over the full distribution of policies that have been followed,\\nsince the successor features are learned using states sampled from all past experiences\\\" on page 2 or \\\"Another valuable property of SFC is that it adapts in very meaningful ways that lead to efficient\\nnon-stationary exploration policies, when the transitions gathered by a policy maximizing the SFC\\nreward is used to update the SF itself\\\" on page 5. Please provide more intuition or theoretical / empirical evidence to support such claims. \\n\\n5. What do you use for the fixed interval (N) at which the meta-controller is choosing which policy to follow? Have you tried training the meta-controller? It would be interesting to see how the results change as N varies. Is N = 1 better than N = length of episode or the other way around or does the choice of N not matter that much?\", \"references\": \"[1] Beyer, Lucas, et al. \\\"MULEX: Disentangling Exploitation from Exploration in Deep RL.\\\" arXiv preprint arXiv:1907.00868 (2019).\\n[2] C\\u00e9dric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. GEP-PG: Decoupling Exploration and\\nExploitation in Deep Reinforcement Learning Algorithms. In Proceedings of the International\\nConference on Machine Learning (ICML), 2018.\\n[3] Marlos C Machado, Clemens Rosenbaum, Xiaoxiao Guo, Miao Liu, Gerald Tesauro, and Murray\\nCampbell. Eigenoption discovery through the deep successor representation. arXiv preprint arXiv:1710.11089, 2017.\\n[4] Ecoffet, Adrien, et al.
\\\"Go-explore: a new approach for hard-exploration problems.\\\" arXiv preprint arXiv:1901.10995 (2019).\"}"
]
} |
BkxSmlBFvr | You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings | [
"Daniel Ruffinelli",
"Samuel Broscheit",
"Rainer Gemulla"
] | Knowledge graph embedding (KGE) models learn algebraic representations of the entities and relations in a knowledge graph. A vast number of KGE techniques for multi-relational link prediction have been proposed in the recent literature, often with state-of-the-art performance. These approaches differ along a number of dimensions, including different model architectures, different training strategies, and different approaches to hyperparameter optimization. In this paper, we take a step back and aim to summarize and quantify empirically the impact of each of these dimensions on model performance. We report on the results of an extensive experimental study with popular model architectures and training strategies across a wide range of hyperparameter settings. We found that when trained appropriately, the relative performance differences between various model architectures often shrink and sometimes even reverse when compared to prior results. For example, RESCAL~\citep{nickel2011three}, one of the first KGE models, showed strong performance when trained with state-of-the-art techniques; it was competitive with or outperformed more recent architectures. We also found that good (and often superior to prior studies) model configurations can be found by exploring relatively few random samples from a large hyperparameter space. Our results suggest that many of the more advanced architectures and techniques proposed in the literature should be revisited to reassess their individual benefits. To foster further reproducible research, we provide all our implementations and experimental results as part of the open source LibKGE framework. | [
"knowledge graph embeddings",
"hyperparameter optimization"
] | Accept (Poster) | https://openreview.net/pdf?id=BkxSmlBFvr | https://openreview.net/forum?id=BkxSmlBFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Xyls5Y1Vra",
"SyeyqnfijH",
"SygQK6kqoS",
"S1x__2kciB",
"S1g1xsJqoH",
"SkemmQtecS",
"Syljyis0Kr",
"H1gAye5RtB",
"r1xoUNL2YB",
"BkgToE_PtH",
"SJgHpTUPtr",
"H1edT58ztH",
"BklXjDgCur",
"HygxCrNiOB",
"r1x8cXwS_B",
"HkeWpqyWdr",
"BkgFtc1Z_H",
"HJxeXqyb_H",
"rylDuHjyuH",
"Hkg-AmjJdH",
"r1g5orsAPH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment"
],
"note_created": [
1576798743275,
1573756038866,
1573678458927,
1573678192238,
1573677798643,
1572012827279,
1571891939275,
1571885030274,
1571738707106,
1571419301415,
1571413436827,
1571084991587,
1570797466898,
1570616776180,
1570235278127,
1569942200725,
1569942145200,
1569942040414,
1569858926589,
1569858504602,
1569793441719
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2209/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2209/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2209/AnonReviewer3"
],
[
"~Jae_Hee_Lee2"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"~Jae_Hee_Lee2"
],
[
"~Apoorv_Umang_Saxena1"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"~Chen_Cai1"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2209/Authors"
],
[
"~Tim_Dettmers2"
],
[
"~Tim_Dettmers2"
],
[
"~Bahare_Fatemi1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors analyze knowledge graph embedding models for multi-relational link predictions. Three reviewers like the work and recommend acceptance. The paper further received several positive comments from the public. This is solid work and should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Regarding comment 2\", \"comment\": \"We have added Fig. 9 to our paper along the lines discussed above. The figure suggests that decent (but often not very good) configurations can be found by simply training for less than 400 epochs.\"}",
"{\"title\": \"Thank you for your feedback and support\", \"comment\": \"We thank you for your feedback and appreciate your support. In what follows, we briefly comment on the points raised in your review.\\n\\n1. \\\"For the comparison between the trained models and previously published results, the sample size might be sufficient to draw the conclusions. However, the intra-model comparison, e.g. in Figure 2, are now comparing subsets of the runs which only comprise approx. (200/6) runs.\\\"\", \"response\": \"Thanks, we added the reference to the \\\"reciprocal relations\\\" section.\"}",
"{\"title\": \"Thank you for your feedback and support\", \"comment\": \"We thank you for your feedback and appreciate your support. In what follows, we briefly comment on the points raised in your review.\\n\\n1. There are no such limitations in our experimental framework that we are aware of.\\n\\n2. Good point! Generally, it may indeed be possible to short-circuit hyperparameter search but that is beyond our current study. Our framework is extensible, however, so that there shouldn't be any principal limitations in adding other hyperparameter optimization methods (and we'd like to include more). What we can do for the present study is to include plots that show model performance (e.g., best validation MRR obtained over all hyperparameter configurations) as a function of the epochs each configuration has been trained. The new plots will give information about how fast models can find good configurations and compare different models along these lines. Would you consider this helpful?\\n\\n3. Our goal was to have a fair, balanced comparison, but not to find perfect hyperparameters. For example, the ComplEx result mentioned in the \\\"Limitations\\\" section uses a configuration which is indeed within our search space but was not found during our hyperparameter search. Of course, the more effort we spend on hyperparameter search, the better models we may find.\\n\\n4. We consciously did not report performance on an \\\"collectively good configuration\\\". A key point that we are trying to make is that there is no such configuration: any configuration will be good for some models but bad for others. The same argument extends to small search grids.\\n\\n5. Thanks for bringing this to our attention. We use (1) and will include a formal definition of the metrics in the appendix.\\n\\n6. Thanks, added.\\n\\n7. Thanks, fixed.\"}",
"{\"title\": \"Thank you for your feedback and support\", \"comment\": \"We thank you for your feedback and appreciate your support. We added a short explanation on quasi-random search to the main paper. We also plan to provide more details in the framework documentation (which also supports other methods for hyperparameter optimization).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors did an extensive experimental study over neural link prediction architectures that was never done before, in such a systematic way, by other works in this space. Their findings suggest that some hyperparameters, such as the loss being used, can provide substantial improvements to some models, and can be the reason of the significant improvements in neural link prediction accuracy the community observed in recent months.\\n\\nThis is a really interesting paper, and can really shine some light on what was going on in neural link prediction over recent years. It also provides a great overview of the field -- in terms of architectures, loss functions, regularizers, sampling strategies, data augmentation strategies etc. -- that is really needed right now in the field.\\n\\nOne concern I have is that the hyperparameter tuning strategy is not really described -- authors just say something along the lines of \\\"we use av.dev\\\", but for those unfamiliar with this specific hyperparameter optimiser this does not provide much information (e.g. what is a Sobol sequence? I had to look it up).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n========\\nThe paper conducts a thorough analysis of existing models for constructing knowledge graph embeddings. It focuses on attempting to remove confounding aspects of model features and training regime, in order to better assess the merits of KGE models. The paper describes the reimplementation of five different KGE models, re-trained with a common training framework which conducts hyperparameter exploration. The results show surprising insights, e.g., demonstrating that a system from 2011, despite being the earliest of the KGE models analyzed, demonstrates competitive results over a more recent (2017) published model.\\n\\nOverall Comments\\n===============\\nThe paper, and the described software release specifically, represent a solid contribution to the area of knowledge graph embeddings. I agree with the basic premise of this paper\\u2019s analysis: in order to accelerate research in a maturing field (like knowledge graphs), it is important to be able to properly compare with older systems, removing artifacts that are due to general improvements in training and optimization techniques, from modeling specific changes. The report of the strong results from the RESCAL system, along with others, drive the point through. Furthermore, the paper is well-written and easy to follow, and should become a good reference for future works on KGEs.\\n\\nDetailed comments\\n===============\\nBelow are some detailed comments about specific parts of the paper, in order of importance:\\n\\n1. The paper mentions disregarding \\u201cmonolithic\\u201d models in the current analysis, primarily due to the expensive training of these models. It may, however, be the case that the future state-of-the-art models will be larger and slower to train (and, perhaps, of the monolithic type). Are there any limitations to the proposed experimental framework that would prevent running monolithic/large models?\\n\\n2. Regarding the item above, if one were to look at the training curves for the exploration of the current 5 KGE models, is it possible that verify winning hyperparameter configurations earlier than the full training is complete. In my experience, it is often the case that with fewer than 1/10th steps of full training (well before convergence), it is possible to compare model configurations (relatively). For example, \\u201cPopulation-base training\\u201d (https://arxiv.org/abs/1711.09846, https://arxiv.org/abs/1902.01894) is one framework where fewer training steps are used to quickly learn good hyperparameter configurations. I\\u2019m wondering whether the KGE hyperparameter exploration training curves display similar early trends. Could a shortened training procedure produce sufficient information for learning good parameters, and potentially deal with larger/slower models? In addition: would adopting population-based training be applicable to the proposed framework?\\n\\n3. In Section 3.2, \\u201cLimitations\\u201d, there is a surprising comment that performance can be improved with further hyperparameter tuning. It is not clear how the authors found the configurations that produced the improved results. 
It would be helpful to clarify why the hyperparameter exploration proposed in the paper did not discover these improved configurations. Were the improved configurations outside of the range of considered values? Or would the exploration require more points to find the improved configuration?\\n\\n4. In Section 3.3 \\u201cBest configurations (quasi-random search)\\u201d, specifically Table 3, the paper presents an ablation of independent hyperparameters, over the best configuration for each of the 5 models. This is a very interesting section. One further suggestion, however, is whether the paper could include the performance of each of the models on the _average_ best configuration. Although the paper describes losses for switching individual parameters to their second best values, it is unlikely that the losses are cumulative. So, for example, one could take the average/majority best value for each parameter (embedding size = 512, batch size = 1024, training type = 1vsall, loss = CE, etc.) and collect results for that configuration. I think it would be interesting to know the difference between a model trained on a \\u201ccollectively known good\\u201d set of parameters vs. a model- and task-specifically tuned set of parameters.\\n\\n5. In Section 2, \\u201cEvaluation\\u201d, HITS@k is not formally defined. Unfortunately, I have encountered slight variants of this metric (e.g: (1) given a SINGLE correct label, HITS@k is the average rate of the label being present in the top k scored results, or (2) given ALL possible correct labels, HITS@k is the percentage of correct labels present within the top k scored results, etc.). It would be nice to precisely describe HITS@k in this work.\\n\\n6. The caption for Table 2 does not contain a description for the \\u201cRecent\\u201d super-column.\\n\\n7. In Section 3.3, \\u201cBest configuration (quasi-random search)\\u201d: space missing at \\u201c... Tables 6 and 7(in \\u2026\\u201d, between 7 and (.\"}",
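To make the ambiguity raised in point 5 concrete, both variants the reviewer describes are sketched below (the authors confirm in their response that they use variant (1)):

```python
import numpy as np

def hits_at_k_single(ranks, k):
    # Variant (1): one correct label per query; Hits@k is the fraction of
    # queries whose (filtered) rank of the true answer is within the top k.
    return float(np.mean(np.asarray(ranks) <= k))  # ranks are 1-based

def hits_at_k_all(ranked_lists, label_sets, k):
    # Variant (2): all correct labels are known; per query, the percentage
    # of correct labels appearing within the top-k scored results.
    per_query = [
        len(set(ranked[:k]) & labels) / len(labels)
        for ranked, labels in zip(ranked_lists, label_sets)
    ]
    return float(np.mean(per_query))
```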
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents an experimental study about some KGE methods. It argues that papers often propose changes in several different dimensions, such as model, loss, training, regularizer, etc., at once without providing a sufficient investigation about the individual components' contributions. The experimental study considers two datasets (FB15k-237 and WNRR) and five different models (RESCAL, TransE, DistMult, ComplEx, ConvE). The models were selected using a quasi-random hyperparameter search, followed by a short Bayesian optimization phase to fine-tune the parameters. The performance of the best models found during this hyperparameter search are compared to first published results for the same model, as well as to a small selection of recent papers. To analyse the influence of single hyperparameters, the best found configuration is compared to the best configuration which does not use this specific value for the given hyperparameter.\\n\\nOverall, the paper adresses an important problem, as papers about new KGE methods often lack a clear separation of the individual changes' contribution. The experimental results show that older, simpler can compete with recently proposed models when trained properly. The intra-model comparison lacks statistical rigorousity, yet hints a few directions to further explore.\\n\\nThe experiments are based on a quasi-random hyperparameter search. While it is necessary for efficient exploration of larger search spaces [1], and should be the standard methodology for hyperparameter search of a new method, the interpretability of the comparison of two runs suffers. For the comparison between the trained models and previously published results, the sample size might be sufficient to draw the conclusions. However, the intra-model comparison, e.g. in Figure 2, are now comparing subsets of the runs which only comprise approx. (200/6) runs. Furthermore, the influence of random initialization is not accounted for. Another place where this can be witnessed is Table 3. Here, for some ablations, e.g. TransE + Reciprocal, no reduction is given. If I understood it correctly, this is due to not having a configuration which uses TransE and reciprocal relations. Also for the other ablations, it is unclear how statistically significant the reduction is.\", \"further_comments\": \"1. Please add the best published results for a specific model-dataset combination to table 2.\\n2. Do the plots in Figure 1 include the runs which were stopped after 50 epochs due to insufficient MRR?\\n3. Could you elaborate on the combination of KvsAll and CE?\\n4. The combination of subject and object triple scores has for instance been used in SimplE [2].\\n\\n\\n[1] Bergstra, James, and Yoshua Bengio. \\\"Random search for hyper-parameter optimization.\\\" Journal of Machine Learning Research 13.Feb (2012): 281-305.\\n[2] Kazemi, Seyed Mehran, and David Poole. \\\"Simple embedding for link prediction in knowledge graphs.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"title\": \"Cross entropy with KvsAll\", \"comment\": \"Yes, the question has been answered. Thanks.\\n(It is still puzzling though why KvsAll & CE performs sometimes better than KvsAll & BCE, as they both are based on the local closed world assumption.)\"}",
"{\"comment\": \"Hi Jae, thanks for the support. As for your question, we are not using the CE definition of Kadlec et al., but directly the notion of cross entropy. Consider training point (i,k,j) and task (i,k,?) for which we compute the cross entropy between the model distribution, i.e. the softmax distribution of the scores s(i,k,?), and the data, i.e. the empirical distribution. For 1vsAll, the empirical distribution assigns probability 1 at position j, i.e., the true label, the rest is zero. This matches the notion used by Kadlec et al. For KvsAll, the empirical distribution for n true answers to (i,k,?) assigns probability 1/n to each of the true answers, zero to everything else. Does this clarify?\", \"title\": \"Cross entropy with KvsAll\"}",
"{\"comment\": \"Hi Apoorv, thanks for bringing this to our attention!\", \"title\": \"Thanks\"}",
"{\"comment\": \"Hi, first of all I want to say that I totally support this line of work. Researchers working on KGE (and also on other topics as well!) should deal the baselines fairly and pay as much attention to them as they do to their own models.\\n\\nOne thing that I found not so clear in the paper is how the cross entropy loss (CE) is combined with KvsAll. (Note that this combination is used for the most of the best performing models on WN18RR in Table 3 of the paper).\\nIt is clear to me that, based on the loss definition in [Kadlec et al., 2017], CE can be combined with 1vsAll. But it is not straight forward how CE can be combined with KvsAll, as claimed in line 7-8, page 4: \\\"CE ... has also been used in the multi-label setting (KvsAll)\\\". Please either add a reference to the claim or give a more detailed explanation.\", \"title\": \"Combining Cross Entropy with KvsAll\"}",
"{\"comment\": \"Much needed analysis!\\nI just want to add that in the Appendix of RotatE [1], they have done an ablation study on TransE where they have achieved 0.333 MRR for TransE on FB15k-237 dataset using adversarial negative sampling. I have been able to reproduce the same using the code they provided. I felt this might be relevant for your paper, since your paper reports the best of 0.303\\n\\nThanks\\n\\n[1] RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space https://arxiv.org/abs/1902.10197\", \"title\": \"Best reported results of TransE\"}",
"{\"comment\": \"Hi Chen, thanks for the comments. We plan on extending our study to include more models and datasets. In addition, we will release our framework as open source, and since adding new models is straightforward, we hope this will help to keep a growing list of comparable results for KGE models.\", \"title\": \"More models and datasets in the future\"}",
"{\"comment\": \"Hello,\\n\\nVery interesting and solid work. I would like to provide a different perspective for KGE that might be helpful. In paper [1], I give a group-theoretic treatment of KGE and connect different models as modeling relations in KG as elements in different groups. \\n\\nFrom this perspective, RESCAL is actually quite powerful since it corresponds to $GL(n, R)$. Other more recent methods correspond to other \\\"smaller\\\" groups. I would not be surprised that if optimization is done right, RESCAL can perform as well as other methods due to its very general form. \\n\\nTo me, the most interesting thing is to quantify the improvement of modeling non-communicative relations (son\\u2019s wife is not wife\\u2019s son) by going from the abelian group to the non-abelian group. RotatE can essentially model any finite abelian group (proved in [1]), and for non-abelian group, I saw three recent work [2][3][4]. It would be interesting to see under your evaluation platform, how much gain can we get by modeling non-abelian groups.\\n\\n[1] Group Representation Theory for Knowledge Graph Embedding https://arxiv.org/abs/1909.05100\\n[2] Quaternion Knowledge Graph Embedding https://arxiv.org/abs/1904.10281\\n[3] Relation Embedding with Dihedral Group in Knowledge Graph https://arxiv.org/abs/1906.00687\\n[4] A Group-Theoretic Framework for Knowledge Graph Embedding (ICLR this year) https://openreview.net/forum?id=r1e30AEKPr\", \"title\": \"Group perspective\"}",
"{\"comment\": \"Thanks Tim for clearing this up. Also thanks to both Tim and Bahare Fatemi for the hint regarding the SimplE paper, we will use those references.\", \"title\": \"Thanks for the clarification!\"}",
"{\"comment\": \"Thanks and absolutely! The paper will be accompanied with an open-source software framework (on GitHub, currently private), which implements the different training techniques, models, and hyperparameter search. We have briefly mentioned this in the \\\"Reproducibility\\\" section in the paper, but will say more on the project homepage once the paper is deanonymized. We may be able to provide a dump of the codebase upfront, but we are not sure if we can do so truly anonymously.\", \"title\": \"Our framework will be released as open source\"}",
"{\"comment\": \"Thank you for the comments. We will try to include data and a discussion about the different regularization techniques in the appendix. Also, thanks for the pointers to [1,2] with respect to introducing reciprocal relations; we'll add the corresponding references.\", \"title\": \"Regularization techniques and reciprocal relations\"}",
"{\"comment\": \"My original work (Dettmers et al., 2018) did not use reciprocal relations. However, when a bug was identified in my codebase [1], I made use of reciprocal relations to allow for the continued use of 1-K predictions. I updated my paper with new, corrected results, and I did not update my paper with the precise definitions of using reciprocal relations. As such, I would attribute the first defined use of reciprocal relations to Kazemi & Poole (2018) and Lacroix et. al (2018) as mentioned by Bahare Fatemi.\\n\\n[1] https://github.com/TimDettmers/ConvE/issues/18\", \"title\": \"Timeline of the use of reciprocal relations.\"}",
"{\"comment\": \"In my personal research, I found that there was always high variability between different code-bases and approaches. This is mostly due to (1) a variety of methods (batch size, loss, normalization, regularization etc.), and (2) some bugs in the evaluation procedure. I was not able to replicate some publications, for example, Kadlec et al., 2017 which is frustrating since such results can derail progress in the field. This work aims at a fair comparison of different knowledge graph completion by doing careful hyperparameter searches. As such this work can serve as a solid foundation and reference for future research. This is a very important contribution since the normalization of results across models allows for more precise calibration of promising research directions which allow for faster progress in this field of research.\\n\\nHowever, this work would be much more impactful if it would be coupled to a software framework in which new models can be developed. It would be of critical important that such a framework would be peer-reviewed to ensure that the evaluation procedure and sampling techniques are performed correctly. I am happy to peer review code if the authors are willing to provide such code.\\n\\nWhile the work would be strengthened significantly with the addition of a peer-reviewed codebase. I highly recommend this work to be accepted. Even without a peer-reviewed codebase, this work allows researchers to validate their personal codebases against results in this work. \\n\\n[1] Knowledge Base Completion: Baselines Strike Back: https://arxiv.org/abs/1705.10744\", \"title\": \"An important contribution. Source code needed.\"}",
"{\"comment\": \"Interesting work and interesting results. I have two questions/comments:\\n1- I was wondering if it is possible for the authors to include a figure representing the distribution of filtered MRR for different regularization techniques (or add some discussion on their relative performance)? \\n2- The authors attribute the use of reciprocal relations to Dettmers et al. 2018. I believe Dettmers et al. 2018 identified the leakage of the previous datasets due to the existence of inverse relations; the use of reciprocal relations for learning better embeddings was proposed in [1] and [2]. Also regarding \\u201cOn the downside, the use of reciprocal relations means that a model does not provide a single triple score s(i, k, j) anymore (generally, s_{sub}(i, k, j) \\\\neq s_{obj}(i, k, j); the discrepancy has not been studied yet).\\u201d, it has been proposed in [1] to considering the final score (s(i, k, j)) to be the average of the two scores (s_{sub}(i, k, j) and s_{obj}(i, k, j)) and it has been shown that this results in better performance compared to considering the final score to be either one of the scores (see SimplE vs SimplE-ignr in Table 1). \\n[1] https://papers.nips.cc/paper/7682-simple-embedding-for-link-prediction-in-knowledge-graphs \\n[2] http://proceedings.mlr.press/v80/lacroix18a.html\", \"title\": \"Two Questions/Comments\"}"
]
} |
Bkf4XgrKvS | Unsupervised Learning of Graph Hierarchical Abstractions with Differentiable Coarsening and Optimal Transport | [
"Tengfei Ma",
"Jie Chen"
] | Hierarchical abstractions are a methodology for solving large-scale graph problems in various disciplines. Coarsening is one such approach: it generates a pyramid of graphs whereby the one in the next level is a structural summary of the prior one. With a long history in scientific computing, many coarsening strategies were developed based on mathematically driven heuristics. Recently, there has been resurgent interest in deep learning to design hierarchical methods learnable through differentiable parameterization. These approaches are paired with downstream tasks for supervised learning. In this work, we propose an unsupervised approach, coined \textsc{OTCoarsening}, with the use of optimal transport. Both the coarsening matrix and the transport cost matrix are parameterized, so that an optimal coarsening strategy can be learned and tailored for a given set of graphs. We demonstrate that the proposed approach produces meaningful coarse graphs and yields competitive performance compared with supervised methods for graph classification. | [
"Unsupervised learning",
"hierarchical representation learning",
"graph neural networks"
] | Reject | https://openreview.net/pdf?id=Bkf4XgrKvS | https://openreview.net/forum?id=Bkf4XgrKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"UO8v0jRsvU",
"Sye4VJGKsH",
"S1gJM1MYiS",
"B1xmARZKsS",
"Bylkq0ZYjH",
"S1lgrCbKiS",
"SyeMI_gOjS",
"SkxoB6Fbsr",
"HygY5iIScS",
"B1xsVn_aKH",
"ryldwtXVKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743246,
1573621547532,
1573621510946,
1573621451004,
1573621382866,
1573621304243,
1573550154364,
1573129539398,
1572330385400,
1571814451038,
1571203423716
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2208/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2208/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2208/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2208/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2208/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2208/AnonReviewer1"
],
[
"~Mingxing_Xu3"
],
[
"ICLR.cc/2020/Conference/Paper2208/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2208/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2208/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a differentiable coarsening approach for graph neural network. It provides the empirical demonstration that the proposed approach is competitive to existing pooling approaches. However, although the paper shows an interesting observation, there are remaining novelty as well as clarity concerns. In particular, the contribution of the proposed work over the graph kernels based on other forms of coarsening such as the early work of Shervashidze et al. as well as higher-order WL (pointed out by Reviewer1) remains unclear. We believe the paper currently lacks comparisons and discussions, and will benefit from additional rounds of future revisions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of updates in the paper\", \"comment\": [\"We inserted a sentence in the abstract to justify unsupervised learning.\", \"We drew the connection to WL test and WL kernels in the first paragraph of the introduction section.\", \"We added a summary of the novelty and contributions at the end of the introduction section.\", \"We patched the meaning of X and dimensions of matrices in section 3.1.4.\", \"We included one additional data set, IMDB-MULTI, for graph classification.\", \"We included an unsupervised baseline for graph classification. See Sections 4.1 and 4.2 and Table 1.\", \"We expanded the experiments with multi-task learning. See Section 4.4.\"]}",
"{\"title\": \"RE: Optimal transport\", \"comment\": \"Thank you very much for raising the concerns. They are good questions. Let us respond from two angles.\\n\\nWe choose to use optimal transport in a large part because it gives a convenient measure of the difference of two graphs. Recent work cited by the paper, such as Vayer et al. (2019), Xu et al., (2019a), and Garg & Jaakkola (2019), all devotes to the development of this measure. Natural alternatives may be to leverage graph embedding vectors and use the vector Euclidean distance as the measure, but we find that leveraging the node embedding vectors and treating them as a distribution in the optimal transport way is a more powerful machinery. Of course, one may summarize node embedding vectors into a graph embedding vector, but the summarization may lose information.\\n\\nThe optimal transport plan P is obtained as a byproduct of the computation of the optimal transport distance. Conceptually, the plan may be interpreted as how much portion of what nodes are transported to a coarse node. On the other hand, the coarsening matrix S plays more a role of determining the edge weights of the coarse graph (because A_c = S\\u2019AS). The matrices P and S are not necessarily the same, not even the sparsity structure. We have thought of using P to replace S, but this is a chicken-and-egg problem, because parameterization cannot be done. Another challenge is that when one looks at P and S, one implicitly assumes that the coarse nodes are known. However, the selection of coarse nodes comes from the coarsening step but not the transportation step. One must resolve these technical challenges before being able to equate P with S.\"}",
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thank you very much for the informative comments. In what follows we respond to them. We also have updated the paper accordingly.\", \"re\": \"The point of unsupervised learning.\\n\\nThe major reason for conducting unsupervised learning is that in practice, labels are scarce and expensive to obtain. We are fortunate to have quite a few annotated data sets as benchmarks that facilitate evaluation. In real-life applications, however, often what limits the choice of methods is not the data but the labels (a folklore wisdom is that data is abundant in this \\u201cbig data\\u201d era but labels are expensive to obtain).\\n\\nAdditionally, we demonstrate the possibility of learning graph hierarchical structures without using labels.\\n\\nWe concur that multitask learning is an excellent example demonstrating the use of a single representation for different downstream tasks. We have included an experiment and updated the paper; see Section 4.4.\\n\\nWe also inserted the justification of unsupervised learning in the abstract.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Thank you very much for the considerate comments. The critique is largely concerned with WL. We respond to it from two angles and have updated the paper to incorporate these discussions.\", \"re\": \"Novelty.\\n\\nWe would like to clarify that the coarsening approach we are proposing has a very weak connection with WL. The connection may be argued from the fact that we use GCN as one of the components in the parameterization of the coarsening matrix. However, the parameterization is only one piece of the method. The novelty and contribution of the work lie in the hierarchical treatment and the use of graph distance for parameter learning. The WL tests do not generate coarse graphs. The WL kernels also generally do not have a hierarchical flavor. Moreover, the counterpart of \\u201cgraph distance\\u201d in the WL setting is the reproducing kernel Hilbert space, which is in contrast to optimal transport in our case. Hence, we debate that the proposed differentiable pooling does not \\u201cuse WL kind of ideas\\u201d in a large part.\\n\\nTo clarify the novelty and contribution, we have added a summary from the perspectives of unsupervised learning, coarsening strategy, and empirical results, at the end of the introduction section.\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": [\"Thank you very much for the detailed questions. In what follows we answer them one by one and hope the replies help you reassess the value of our work. Please do not hesitate to ask more questions should they arise.\", \"A_c indeed is generally not binary. We treat both A and A_c as *weighted* adjacency matrices. By the design of S, the nonzero elements of A_c are all positive and they qualify as edge weights.\", \"Coarse nodes are not selected beforehand. Rather, they are selected through a parameterized ranking process (see eqn (2)). The parameters are obtained by optimizing a loss function.\", \"a and b are empirical measures in the optimal transport setting. As we explain in the text below eqn (4), \\u201ceach of a and b has constant elements that sum to unity, respectively.\\u201d\", \"The optimal transport distance is the loss function that we minimize. This is not to be confused with the fact that the distance itself is the result of a separate minimization problem. The k-step distance is like a closed-form formula solution of this separate minimization problem.\", \"The use of k-step is to make it a deterministic differentiable function. One may set k however large one wants, but it must be a specific number. In practice, because of the chain rule of differentiation, making k very large incurs a computational burden.\", \"Graph classification is performed through training a separate predictive model by taking the graph representation as input. As stated in the paper, \\u201cFor each node embedding matrix, we perform a global pooling (e.g., a concatenation of max pooling and mean pooling) across the nodes and obtain a summary vector. We then concatenate the summary vectors for all coarsening levels to form the feature vector of the graph. A multilayer perceptron is then built to predict the graph label.\\u201d\", \"As indicated by the above item, we use the information of the original graph and all subsequent coarse graphs when performing graph classification. That is, we do not separate these two pieces of information.\", \"As responded in an earlier item, the selection of the coarse nodes is learned rather than hand-picked. The purpose of Section 4.4 is to show that the learned result is meaningful qualitatively.\", \"X in eqn (2) is the node feature matrix. Thank you for the note and we have patched the notation explanation.\", \"Similarly, we have included the domains of the matrices.\"]}",
"{\"title\": \"A point in favour of the paper\", \"comment\": \"Yes, I somehow agree that the technical part of the paper is not the main contribution. For me, it is the empirical demonstration that a differentiable pooling using WL kind of ideas is competitive to existing pooling approach. This is an interesting take-away message for me. But indeed the novelty is somewhat unclear. This has to be clarified before publication.\"}",
"{\"title\": \"Concerns about the insights in adopting optimal transport distance for unsupervised graph representation learning.\", \"comment\": \"It is a interesting work. This paper proposed a hierarchical unsupervised pooling operation. In each pooling, nodes are selected by keeping the top-k nodes after transforming the node feature to importance score with one-layer GCN to consider both node features and graph structures. Then coarsening matrix $S$ are obtained by sampling and reweighing the normalized adjacent matrix. All the above strategy is existed. The main contribution of this paper is to adopt the optimal transport distance as unsupervised loss. In this paper, the optimal transportation distance are defined as $W_\\\\gamma(G,G_c)=min_{P\\\\in U(a,b)}<P,M>+\\\\lambdaE(P)$. In this function, $M$ is transport matrix and obtained by calculating the distance between the transformed node features of original graph and coarsening graph, $P$ is joint probability measure related to $M$ and the optimal joint probability measure can be obtained as $P_\\\\lambda=diag(u)exp(-M/\\\\lambda)diag(v)$. Thus the optimal transport distance is finally obtained and used to guide the training.\\nI have some concerns about the motivation. First, why optimal transport distance is a good loss function to guide the selection of node and coarsening of graph, the insights behind are not fully discussed. Second, in final, we could obtain the transport cost matrix as well as the optimal joint probability measure. Does it mean that we can obtain the optimal transport plan to transport original graph to coarsening graph, what is the relationship between the optimal plan with the coarsening matrix? this is not mentioned in this paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an unsupervised hierarchical approach for learning graph representations. The proposed architecture is constructed by unrolling k-steps of a parametrized algebraic multigrid approach for minimizing the Wasserstein metric between the graph and its representation. The node distance (transport cost) used in the Wasserstein metric is also learned as an L2 distance between the embeddings of some graph embedding function. The approach is compared against 6 other state of the art approaches on 5 graph classification tasks, showing significant improvements 4 of them.\\n\\nThe paper is reasonably well written, however, I think some of the explanations can be tightened further. Especially a lot on the background of AMG is not really that relevant, since the authors are not transferring technical results from AMG. Also, it seems like a better flow for presenting this argument might be to switch the order of sections 3.2.1 and 3.1.2. It looks like the main point is that this architecture is trying to emulate iterative coarsened residual optimization of the Wasserstein metric between a graph and its representation. How the coarsening matrix is derived is more of a technical point (it looks like the results would be much more sensitive to a switch of metric than to a switch of parametrization for S). \\n\\nThe empirical results are quite intriguing. There are, however, natural and important questions left unanswered. First and foremost, how does the amount of downsampling (compression) compare between methods. How many parameters do different methods require? It would also be good to see what the baseline performance would have been without any input compression as to understand how close these approaches are to the upper bound.\\n\\nFinally, I think the main issue of this paper, is left unresolved, namely, what is the point of not having supervision from the downstream task. As a user of graph representations trying to solve some problem, the only thing I would want from my representation is to capture some notion of sufficient statistics that are small enough to be efficient and allow me to solve my problem. I would not necessarily care about how well the learned representation resembles the original graph unless I believed that my downstream task was hard to evaluate and that it was very smooth in the Wasserstein metric. I read the paper multiple times, trying to find any discussion on this, but it seems that the fact that an unsupervised representation is a good thing is taken for granted. A point could at least be made using the same representation for different tasks experimentally. Or, perhaps, literally doing an AMG-type unpacking of the downstream task itself as a comparison. This would shed light on the question of whether the iterated residuals or the choice of distance is what's driving the observed results.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a differentiable coarsening approach for graph neural network (GNNs).\\nTo this end, it is motivated by algebraic multigrid and optimal transport methods. \\n\\nGNNs is indeed an interesting line of research. And introducing coarsening into them, it a highly relevant step. However, there are some major downsides. First, some of the statements are a little but too strong. The paper starts with claiming that GNNs are competitive to graph kernels. But then for instance\\n\\nChristopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe:\", \"weisfeiler_and_leman_go_neural\": \"Higher-Order Graph Neural Networks. AAAI 2019: 4602-4609\\n\\nshow that many (if not all) GNNs are equivalently expressive as the Weisefeiler-Lehman (WL) graph kernel. Hence, the competitiveness has to be qualified. Moreover, since you also employ graph convolutional networks for coarsening, you are also in the regime of this paper. Consequently, one should actually compare to WL, at least one should mention this connection. Actually, given that the datasets are not that large, one should run some statistical significance test. Moreover, if you check the paper above, they report much better results for PatchySan on MUTAG, better results on Protein for graph kernels, better results on IMDB-B using a hierarchical GNN approach, based on ideas of higher-order WL. \\n\\nNevertheless, indeed, the present paper shows that a differentiable pooling using WL kind of ideas is competitive to existing pooling approach. This is nice, but in the light of the work above, the novelty is unclear. This has to be clarified before publication.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a method to summarize a given graph based on the algebraic multigrid and optimal transport, which can be further used for the downstream ML tasks such as graph classification.\\nAlthough the problem of graph summarization is a relevant task, there are a number of unclear points in this paper listed below:\\n\\n- In Section 3.1, the coarsening method has been proposed, which is said to be achieved by finding S such that A_C = S^T A S. \\n However, A_C is usually not binary for S \\\\in R^{n x m}, hence how to get the coarse graph G_C from A_C is not clear. Please carefully explain this point.\\n- In the proposed method, coarse nodes should be selected beforehand. Is there any guideline of how to choose them?\\n- In Section 3.2, optimal transport is introduced and the distance between G and G_C is measured via entropic optimal transport in Equation (4) or (7).\\n However, in Equation (4), a and b should come from the input G and G_C , and it is not clearly explained how to obtain them from the input.\\n Moreover, how to use the distance between G and G_C in the proposed coarsening method is also not clear. It seems that it is not used in Algorithm 1.\\n- I do not understand why the k-step optimal transport distance is needed. Since it converges to the global optimum as k becomes large, it is usually enough to set k to be large enough.\\n- In experiments, how is the proposed method used for graph classification?\\n Since the proposed method is for generating coarse graphs in an unsupervised manner, graph classification cannot be directly performed by itself.\\n- In addition to the above issue, to assess the effectiveness of the the proposed method, the following experiment is recommended:\\n Fix some classifier and compare performance of graph classification for the original graphs and for the coarse graphs.\\n- In the qualitative study in Section 4.4, while the authors discuss coarse nodes, they are just an input from the user and results are arbitrary. Hence such discussion is not informative.\", \"minor_comments\": [\"What is \\\"X\\\" in Equation (2)?\", \"I recommend to write domain for matrices when they used at the first time.\"]}"
]
} |
r1gEXgBYDH | Defensive Tensorization: Randomized Tensor Parametrization for Robust Neural Networks | [
"Adrian Bulat",
"Jean Kossaifi",
"Sourav Bhattacharya",
"Yannis Panagakis",
"Georgios Tzimiropoulos",
"Nicholas D. Lane",
"Maja Pantic"
] | As deep neural networks become widely adopted for solving most problems in computer vision and audio understanding, there are rising concerns about their potential vulnerability. In particular, they are very sensitive to adversarial attacks, which manipulate the input to alter models' predictions. Despite large bodies of work to address this issue, the problem remains open. In this paper, we propose defensive tensorization, a novel adversarial defense technique that leverages a latent high-order factorization of the network. Randomization is applied in the latent subspace, therefore resulting in dense reconstructed weights, without the sparsity or perturbations typically induced by the randomization.
Our approach can be easily integrated with any arbitrary neural architecture and combined with techniques like adversarial training. We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks. We further validate the generalizability of our approach across domains and low-precision architectures by considering an audio classification task and binary networks. In all cases, we demonstrate superior performance compared to prior works in the target scenario. | [
"tensor decomposition",
"tensor factorization",
"randomization",
"robustness"
] | Reject | https://openreview.net/pdf?id=r1gEXgBYDH | https://openreview.net/forum?id=r1gEXgBYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ciz94Hh2f",
"HJluYfNooB",
"ByeaXG4ssB",
"rJxdCW4ojr",
"BJlqtyEosB",
"Bkgn8aQooH",
"r1l0cn0k9H",
"Hyl2dMt0YB",
"BylqtktpFS",
"r1gR0Ei4ur",
"BJxKtFdnDH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798743216,
1573761663839,
1573761572769,
1573761488407,
1573760897724,
1573760339636,
1571970197565,
1571881588296,
1571815298152,
1570186453826,
1569651072637
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2207/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2207/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2207/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2207/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2207/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2207/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2207/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2207/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2207/Authors"
],
[
"~Anthony_Wittmer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Three reviewers have assessed this submission and were moderately positive about it . However, the reviewers have also raised a number of concerns. Initially, they complained about substandard experimentation which has been resolved to some degree after rebuttal (rev. believe more can be done in terms of unifying them, investigating backbones, attack methods, and experimental settings in light of recent papers).\", \"a_somewhat_bigger_criticism_concerns_the_theoretical_part\": \"1. Rev. remained unclear why using tensor decomposition techniques is a sound approach for designing robust network.\\n2. AC and rev. also noted during discussions that using low rank constraints (and other mechanisms) and i.e. encouraging smoothness (one important mechanism among many in robustness to attacks) have been extensively investigated in the literature, yet, the proposed idea makes scarce if any theoretical connection to such important theoretical tools.\\n\\nSome references (not exhaustive) that may help authors further study the above aspects are:\\nCertified Adversarial Robustness via Randomized Smoothing, Cohen et al.\", \"local_gradients_smoothing\": \"Defense against localized adversarial attacks, Naseer et al.\\nLimitations of the Lipschitz constant as adefense against adversarial examples, Huster et al.\\nLearning Low-Rank Representations, Huster et al.\\n\\nOn balance, AC feels that despite the enthusiasm, this paper is not ready yet for the publication in ICLR as the key theory behind the proposed idea is missing. Thus, this submission falls marginally short of acceptance in ICLR 2020. However, the authors are encouraged to build up a compelling theory and resubmit to another venue (currently the paper feels like a solid workshop idea that needs to be investigated further).\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General response to all reviewers\", \"comment\": \"We are glad to see that all reviewers recognised the novelty of our approach and that there is a consensus for acceptance. We are grateful to all reviewers\\u2019 comments, which we believe will help in greatly improving the quality of the paper. In this rebuttal, we carefully addressed all the comments and ran additional experiments as recommended by the reviewers within the limited time.\\n\\n$\\\\textbf{A summary of additional experiments conducted during rebuttal}$:\\nFollowing the reviewers\\u2019 suggestions, we ran the following additional experiments:\\ni) Experiment on CIFAR-100\\nii) Experiments with a different, deeper architecture (ResNet-101)\\niii) Experiments with BPDA [1] (test against obfuscated gradients)\\niv) Comparison with matrix-based decomposition\\nv) Comparison with Wang et al, 2018.\\nvi) Comparison with defensive dropout \\nvii) Experiments with black box attack \\n\\nWe address individual reviewer\\u2019s comments in their respective threads.\"}",
"{\"title\": \"Response to Reviewer 2 [part 2/2]\", \"comment\": \"$\\\\bullet \\\\textit{About experiments against black box attacks}$\\n\\nThank you for your comment. In addition to the additional experiments mentioned and reported above, we ran a black box attack. We followed the standard setting, as in the paper you recommended (Moustafa et al, ICCV 2019). The results are as follows:\\n\\n$\\\\begin{array} {|r|r|}\\\\hline Method & Clean & FGSM(2/8/16) & BIM(2/8/16) & PGD(2/8/16) \\\\\\\\ \\\\hline Baseline & 95.4 & 94.2/87.8/79.1 & 93.0/84.7/77.5 & 94.0/82.3/59.9 \\\\\\\\ \\\\hline Ours & 88.5 & 87.4/87.1/83.3 & 87.4/84.5/83.8 & 87.6/85.2/83.4 \\\\\\\\ \\\\hline \\\\end{array}$\\n\\nAs expected, our method is more robust in all cases. One thing to notice is the relative difference in performance: our method starts with a slightly lower performance on the clean set but is much less affected by the adversarial attacks. \\n\\n$\\\\bullet \\\\textit{It is unclear how much the architecture of a backbone can impact the fairness of comparison.}$\\nWe agree with the reviewer that a fair comparison is paramount. As such, we compared only methods in the same context. In particular, all our comparisons are done on the same dataset, using the same exact architecture and the same experimental setting. This is also why it is challenging to compare with all methods, and we selected the most recent state-of-the-art at the time of writing to compare with. In case this is not sufficiently clear in the current version, we will further clarify this in the updated manuscript.\\n\\n$\\\\bullet \\\\textit{On typos and unifying the subscripts/superscripts for all \\\\lambda.}$\\nThank you for pointing these out, we will fix them in the updated manuscript.\"}",
"{\"title\": \"Response to Reviewer 2 [part 1/2]\", \"comment\": \"$\\\\bullet \\\\textit{- Insufficient and badly conducted comparative study with recent SOTAs.}$\\nIn the experimental evaluation section of the paper we followed the same setup as previous landmarks and state-of-the-art papers in the field, such as (Lin et al, 2018, Xu et al, 2017, Madry et al, 2019). We also reported results with an additional architecture for audio data, and experimented with binary neural networks. \\nFurthermore, based on your suggestion, we also ran additional experiments which we will add to the final version of the paper. Specifically, we ran experiments on CIFAR-100, compared with the method suggested by reviewer 1 (Wang et al, 2018). We also tried an additional network architecture, ResNet-101. Finally, we experimented with Black box attacks, following the standard setting, as in the paper you recommended (Moustafa et al, ICCV 2019). In all cases, we demonstrate superior performance and increased robustness to adversarial attacks. \\n\\n$\\\\bullet \\\\textit{- Insufficient experiment with larger datasets (such as CIFAR-100) or enough variety of datasets (such as SVHN).}$\\n\\nThanks for your suggestion. We accordingly ran additional experiments with a ResNet-18 on CIFAR-100 dataset and in the table below we report the new results (classification accuracy, in percent). Note, that our approach produces consistently more robust models.\\n\\nOne important observation is that our method is consistently robust to adversarial attacks, and in most cases generates robust models than employing adversarial training, while achieving higher accuracy on the clean data. As can be observed in the last three rows, our method can also be easily combined with adversarial training, in which case robustness is greatly improved. \\n\\n$\\\\begin{array} {|r|r|}\\\\hline Method & clean & FGSM(2/8/16) & BIM(2/8/16) & PGD(2/8/16) \\\\\\\\ \\\\hline baseline & 76.2 & 21.8/8.7/4.5 & 11.0/0/0 & 11.6/0/0 \\\\\\\\ \\\\hline \\\\theta=0.95 & 68.9 & 65.8/57.6/47.9 & 56.6/40.6/37.6 & 57.5/36.7/26.6 \\\\\\\\ \\\\hline \\\\theta=0.9 & 67.2 & 64.5/56.5/48.9 & 55.7/44.1/41.8 & 58.3/38.6/29.0 \\\\\\\\ \\\\hline \\\\theta=0.8 & 62.6 & 58.8/54.2/48.3 & 54.4/44.7/43.0 & 55.3/41.8/32.0 \\\\\\\\ \\\\hline adv. train & 61.7 & 47.4/23.1/11.0 & 47.2/17.4/4.0 & 48.0/20.4/5.5 \\\\\\\\ \\\\hline \\\\theta=0.95 + adv & 60.4 & 58.5/57.1/53.2 & 56.9/53.4/50.2 & 56.8/54.5/50.0 \\\\\\\\ \\\\hline \\\\theta=0.9 + adv & 58.6 & 58.5/56.2/54.2 & 58.0/53.6/52.6 & 57.4/53.8/49.8 \\\\\\\\ \\\\hline \\\\theta=0.8 + adv & 56.7 & 54.8/52.6/51.3 & 53.9/51.2/50.6 & 55.2/51/8/49.5 \\\\\\\\ \\\\hline \\\\end{array}$\\n\\n$\\\\bullet \\\\textit{ - No direct experiment verification that supports the advantage of randomization in a subspace }$\\nTo further showcase the advantages of our approach against ones that don\\u2019t make the randomization in the subspace we compare against the method suggested by Reviewer 1 (Wang et al 2018), in which the authors proposed to apply dropout directly to the activation of the first fully connected layer at test time. Since their method requires the presence of multiple fully connected layers, we apply our randomized tensorization directly on their architecture. The results of the comparison can be seen below for the same epsilon as in Wang et al 2018. Notice that our approach consistently outperforms the defensive dropout. 
\\n\\n$\\\\begin{array} {|r|r|}\\\\hline Method & Clean & FGSM & BIM & PGD \\\\\\\\ \\\\hline Ours & 85.9 & 60.0 & 42.6 & 43.8 \\\\\\\\ \\\\hline Defensive~dropout~[Wang~et~al]\\n & 83.4 & 41.3 & 32.2 & 35.2 \\\\\\\\ \\\\hline \\\\end{array}$\\n\\n$\\\\bullet \\\\textit{- No discussions on the training complexities and the extendability to large-scale datasets/networks, such as ImageNet/ResNet-101. }$\\n\\nOur proposed method is architecture-agnostic and can be incorporated in any arbitrary network architecture. To validate this, we follow your suggestion to train a ResNet-101 on the CIFAR10 dataset. The results are in line with those obtained using a ResNet-18 and we report results from this additional experiment below:\\n\\n$\\\\begin{array} {|r|r|}\\\\hline Method & Clean & FGSM(2/8/16) & BIM(2/8/16) & PGD(2/8/16) \\\\\\\\ \\\\hline baseline & 95.4 & 65.8/48.7/28.6 & 43.3/0/0 & 44.6/0/0 \\\\\\\\ \\\\hline \\\\theta=0.95 & 92.1 & 84.8/72.8/58.5 & 72.9/40.0/39.9 & 75.6/42.5/37.0 \\\\\\\\ \\\\hline adv. train & 87.5 & 76.7/54.1/37.0 & 74.5/43.5/26.0 & 76.2/48.1/28.4 \\\\\\\\ \\\\hline \\\\theta=0.95+adv & 86.6 & 84.3/79.6/73.0 & 83.3/73.8/62.5 & 84.1/76.0/69.2 \\\\\\\\ \\\\hline \\\\end{array}$\\n\\nOur method is significantly more robust to adversarial attacks than the baseline, and even outperforms adversarial training. Robustness can be further improved by combining the two.\\n\\n$\\\\bullet \\\\textit{- Missing citation and comparison to the following two SOTAs:}$\\n$\\\\textit{1. Xie et al., Feature Denoising for Improving Adversarial Robustness, CVPR19 }$\\n$\\\\textit{2. Mustafa et al., Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks, ICCV19}$\\n\\nThank you for pointing these out, we will cite them and add them to the discussion in the updated manuscript. The last one in particular was not published at the time of writing the initial version of our paper.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"$\\\\bullet \\\\textit{Tucker decomposition allows one to have randomized but still dense weights. Is it the only reason to use tensor decomposition?}$\\n$\\\\textit{Why not do the same with a simple low rank matrix for example?}$\\n \\nThank you for your thoughtful feedback, these points will be clarified in the paper.\\ni) Tensor methods have the ability to leverage the multi-linear structure in the data, weight and activations. This is a property that is desirable when building robust neural networks. It can be noted that the matrix case is a special case of our approach. Specifically, in Equation 3, this can be obtained by setting $M_I = M_H = M_W = U_l^{I} = U_l^{H} = U_l^{W} = \\\\mathbf{I} $. The equality can then be rewritten in term of the mode-1 unfolding to obtain the matrix case. We will add this and a discussion in the final manuscript. In addition, to validate the above hypothesis, we ran an additional experiment on CIFAR-10, using a ResNet-18 architecture. We run both the matrix and tensor version of our method, for the same value theta,. Notice that the tensor decompositions offers consistent gains over the matrix ones. The results are reported in the table below:\\n\\n$\\\\begin{array} {|r|r|}\\\\hline Method & Clean & FGSM(2/8/16) & BIM(2/8/16) & PGD(2/8/16) \\\\\\\\ \\\\hline Ours & 94.5 & 84.9/65.4/54.0 & 60.2/26.6/27.0 & 64.4/27.0/22.4 \\\\\\\\ \\\\hline matrix~decomp. & 93.7 & 76.9/52.8/40.9 & 44.6/17.5/18.2 & 50.0/16.9/15.0 \\\\\\\\ \\\\hline \\\\end{array}$\\n\\nii) The regularization during training happens is in the latent subspace, and the network is learnt end-to-end with this regularization, and thus preserving the distributions of the reconstructed weights, despite the randomness. In particular, no sparsity is induced on the weights, as opposed to existing methods such as Stochastic Activation Pruning or Dropout based approaches. \\n\\niii) Lastly, when training deep convolutional neural networks, there is an evidence that over-parameterization is crucial (Du & Lee, 2018; Soltanolkotabiet al., 2018) Our latent parameterization allows for both over-parameterization in the reconstruction space and preserving performance while still having randomization in the latent space, which can be controlled with a large values of $\\\\theta$.\\n\\n* Simon S Du and Jason D Lee. On the power of over-parameterization in neural networks with quadratic activation. In ICML, 2018.\\n* Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. IEEE Transactions on Information Theory, 2018.\\n\\n$\\\\bullet \\\\textit{The experimental section is developed but I find the experimental setting not clearly described (e.g., what is the metric? is it the accuracy over adversarial examples?).}$\\n\\nAll the results reported throughout the paper are in terms of Top-1 accuracy computed across the entire test set and reflect the changes in accuracy due to the adversarial attacks as we increase the magnitude of the attacks. We will further clarify this in the paper.\\n\\n$\\\\bullet \\\\textit{The core tensor G in the decomposition as the same size of W, so at this stage W is not parameterized as a low rank tensor (W is actually over-parmeterized).}$\\n$\\\\textit{When the stochastic vectors \\\\lambda are introduced the Tucker rank of W is implicitly reduced. This could be clarified. 
}$\\n\\nYes, we use a full-rank decomposition, since, as pointed out by the reviewer, the focus is the randomization, not the low-rank structure. We will make this clear in the paper.\\n\\n$\\\\bullet \\\\textit{- Is stochasticity preserved at test time (unlike when using dropout but like in Wang, et al. (2018))?}$\\n\\nYes, we preserve the stochasticity at test time (indicated by the value of theta in each table). We will make this clear in the paper.\\n\\n $\\\\bullet \\\\textit{- What is the metric used in Table 1 to compare the models? }$\\nThe metric used in Table 1 and through this paper is the Top-1 accuracy. Thanks for pointing it out, we will make sure to mention this in the caption and body of the manuscript.\\n\\n$\\\\bullet \\\\textit{- Would it make sense to explore other tensor decomposition models? Are there any particular reasoning motivating the choice of Tucker? }$\\n\\nIt would be interesting to explore other decompositions. The randomization is general and does not depend on a specific decomposition. In this paper, we selected a Tucker structure as it is well suited to our use-case as it induces a latent subspace (represented by the core), with a multi-linear mapping to and from that subspace defined by the factors of the decomposition. The stochastic regularization is applied in the latent subspace whilfrom which the actual weights are then reconstructed using the multi-linear mapping. \\n\\nCP is a special case of Tucker where the core is super-diagonal so we can assume the performance would be similar. Other decompositions such as Tensor-Train would be an interesting experiment which we leave for future work due to time-constraint.\"}",
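To illustrate the mechanism discussed in this exchange, here is a hedged numpy sketch (not the authors' implementation) of randomizing a full-rank Tucker-factorized weight in the latent subspace: Bernoulli(theta) diagonal masks scale the factors before reconstruction, so the rebuilt weight tensor stays dense rather than sparse. The three-mode layout and all dimensions are illustrative assumptions.

```python
# Hedged numpy sketch of randomizing a Tucker-factorized weight in the latent
# subspace: Bernoulli(theta) masks lambda_n scale the factors before the core
# is reconstructed, so the rebuilt weight stays dense, with no induced sparsity.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.9
F, C, K = 8, 4, 3                       # toy output/input/kernel dimensions
G = rng.normal(size=(F, C, K))          # full-rank core, as discussed above
U_F = rng.normal(size=(F, F))           # square factor matrices (full rank)
U_C = rng.normal(size=(C, C))
U_K = rng.normal(size=(K, K))

def mode_dot(T, U, mode):
    # n-mode product: contract U's second index with the tensor's `mode` axis.
    return np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)

# Sample diagonal Bernoulli masks in the latent subspace (one per mode).
masks = [np.diag(rng.binomial(1, theta, size=d).astype(float)) for d in (F, C, K)]

W = G
for mode, (U, M) in enumerate(zip((U_F, U_C, U_K), masks)):
    W = mode_dot(W, U @ M, mode)        # randomized factor = U diag(lambda)

print(W.shape, (W == 0).mean())         # dense reconstructed weight, ~no zeros
```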
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"$\\\\bullet \\\\textit{ 1. I don't understand why using randomization in the latent space of the weights can retain the classification accuracy on clean data. }$\\n\\nUsing randomization in the latent space has several advantages. First, the distributions of the reconstructed weights is preserved, despite the randomness, since the regularization during training happens is in the latent subspace, not directly on the weights. In particular, no sparsity is induced on the weights. On the other hand, existing randomizations methods that *do* induce sparsity, such as Stochastic Activation Pruning or Dropout based approaches require correcting the magnitudes after pruning. In addition, these approaches are significantly less robust than our approach to adversarial attacks. In addition, when training deep convolutional neural networks, there is an evidence that over-parameterization is crucial (Du & Lee, 2018; Soltanolkotabiet al., 2018). Our latent parameterization allows for both over-parameterization in the reconstruction space and preserving performance while still having randomization in the latent space, which can be controlled with a large values of $\\\\theta$. \\n\\n$\\\\bullet \\\\textit{ Besides, I think an accuracy of 90.1 on CIFAR10 (Tab.2) is not high. }$\\n\\nRegarding performance (90.1), there is a tradeoff between accuracy on the clean set and robustness to adversarial attacks. Please note that adversarial training alone (the go-to method) trained in the same exact setting, has a performance of only 86.6 on the clean set, which is significantly worse than ours. In addition, our approach is more robust to adversarial attacks in all cases. \\n\\n$\\\\bullet \\\\textit{ 2. As in the review written by Anthony Wittmer, the author should include experiments to check the obfuscated gradient issue.}$\\n\\nThank you for your comment, we have ran this additional experiment. As mentioned to Anthony Wittmer, in Section 5, \\\"Defending against omniscient attacker\\\", we hoped to address that very point. In that scenario, the attacker has access to the full (unrandomized) weights and uses these to perform the attack. The idea is that these unrandomized weights could be obtained by accumulating forward passes as suggested in BPDA[1].\\nPlease note that an important point of our approach is that we do not randomize the weights directly. Instead, we apply randomization in the latent subspace spanned by the low-rank structure imposed. You can think of it as a stochastic regularization applied to the *rank* of the tensor factorization.\\n\\nOne could argue that applying [1] would result in different results (e.g. [1] acts as an ensemble of models). 
To verify, in addition to the above scenario and following your comments, we ran the following additional experiment (BPDA [1]):\\nat each iteration of gradient descent, for each convolutional layer, instead of taking a step in the direction of $\\\\nabla_x f(x)$ we move in the direction of $\\\\sum_{i=1}^{k}\\\\nabla_x f(x)$ where each pass has the weights randomized in the latent space using our approach.\\n\\nWe report here the accuracy (on CIFAR 10), obtained using our best model, for various values of $\\\\epsilon=2,8,16$:\\n\\n$\\\\begin{array} {|r|r|}\\\\hline Method & 2 & 8 & 16 \\\\\\\\ \\\\hline BPDA & 83.3 & 54.9 & 43.8 \\\\\\\\ \\\\hline \\\\end{array}$\\n\\nWhile in [1] the authors use $k=10$ we try with up to $k=20$ but without noticing any significant increase in the success rate of the attack. The PGD attack itself was run for 500 iterations as in [1]. These results are in line with the results we reported in the paper, see Table 2 in the manuscript.\"}",
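The gradient-accumulation attack described in this reply is essentially an expectation-over-transformation variant of PGD. Below is a hedged PyTorch sketch of that loop; `model` stands for a hypothetical stochastic network whose forward pass re-samples the latent randomization, and the toy usage at the end is purely illustrative.

```python
# Hedged PyTorch sketch of an EOT/BPDA-style PGD attack: at each step, input
# gradients are accumulated over k stochastic forward passes of a randomized
# model before taking a signed ascent step and projecting into the eps-ball.
import torch
import torch.nn.functional as F

def eot_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=500, k=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for _ in range(k):  # each pass re-samples the latent randomization
            loss = F.cross_entropy(model(x_adv), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                       # ascent step
            x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0.0, 1.0)
    return x_adv.detach()

# Toy usage with a stochastic linear model standing in for the randomized net
# (weights re-sampled on every call); purely illustrative, not the paper's setup.
torch.manual_seed(0)
model = lambda z: z.flatten(1) @ torch.randn(12, 10)
x, y = torch.rand(2, 3, 2, 2), torch.tensor([1, 7])
print(eot_pgd(model, x, y, steps=5, k=3).shape)
```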
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose a randomization-based tensorization framework towards robust network learning. The high-level idea of this work is to reparameterize the network parameters W of each layer with low-rank tensors, where the factor matrices are injected with randomization through randomly sampled sketching matrices. Since the randomization is is done within a subspace than directly on the weight matrix itself, the authors claim that this brings certain advantages such as less sparsity.\", \"strengths\": [\"Well-written paper with good clarity and technical correctness.\", \"Interesting idea with novelty.\", \"Good ablation study with clear performance improvement from the proposed framework.\", \"Good applications with binarized networks and audio classification.\"], \"weaknesses\": [\"Insufficient and badly conducted comparative study with recent SOTAs.\", \"Insufficient experiment with larger datasets (such as CIFAR-100) or enough variety of datasets (such as SVHN).\", \"No direct experiment verification that supports the advantage of randomization in a subspace\", \"No discussions on the training complexities and the extendability to large-scale datasets/networks, such as ImageNet/ResNet-101.\", \"Missing citation and comparison to the following two SOTAs:\", \"1. Xie et al., Feature Denoising for Improving Adversarial Robustness, CVPR19\", \"2. Mustafa et al., Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks, ICCV19\"], \"comments\": \"I consider the idea of this paper novel and interesting. Considering tensor factorization with randomization for network robustness makes a lot of sense but overall the experiments of this paper are not well-conducted towards comparative studies with other SOTAs, although ablation study shows the considerably improved robustness from the proposed method. The main concerns of this paper lie in several aspects:\\n1. It seems that the authors did not report their comparison to recent SOTAs (such as Lin et al, 2019) comprehensively enough, nor were the benchmark measures (missing several other attacks, especially black box ones), datasets and backbones fully aligned. It is unclear how much the architecture of a backbone can impact the fairness of comparison. There is also no apples to apples comparison to directly verify the advantage of this work over non-subspace-based randomization method.\\n2. The authors failed to cite and compare to recent two SOTAs (listed above) which conduct large-scale experiments with bigger models. And there is no discussion about the extendability/generalizability of the proposed method to these data and models. Therefore, the contributions of this work somehow become less convincing.\", \"minor_typos\": \"In page 4 \\\"Randomizing in the latent subspace\\\": \\n\\\\lambda^F \\\\in R^O --> \\\\lambda_F \\\\in R^F\\nM_O = diag(\\\\lambda_F) --> M_F = diag(\\\\lambda_F)\\nplease unify subscripts/superscripts for all \\\\lambda.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper tackles the problem of designing neural network architectures that are robust to adversarial attacks. Several defense techniques against adversarial attacks have been proposed, mainly adversarial training (train on perturbed inputs) and introducing random perturbation to the weights or activations of the network. The paper claims that one limitation of the second approach is that it introduces artifacts (e.g. sparsity). The authors propose a simple but original idea to address this issue: parameterize the network's weight matrices as low rank tensors (in the Tucker format) and randomize the weights by sketching the core tensor of the Tucker decomposition (in effect, the sketching amounts to randomly setting fibers of the core tensor to 0).\", \"I think this paper can be relevant to the community but I am not confident that this is an important contribution. The idea is interesting and addresses the problem of sparsity artifacts in randomized defense strategies, but it does not appear clearly why using tensor decomposition techniques is a sound approach for designing robust networks (besides overcoming sparsity artifacts). I believe there may be more fundamental (theoretical, principled) arguments to motivate the approach, but this is not explored in the paper: the idea is interesting but not supported by much theoretical insight. Yes, using Tucker decomposition allows one to have randomized but still dense weights. Is it the only reason to use tensor decomposition? Why not do the same with a simple low rank matrix for example?\", \"The experimental section is developed but I find the experimental setting not clearly described (e.g., what is the metric? is it the accuracy over adversarial examples?). Maybe this is because I am not familiar with the adversarial defense literature.\", \"In conclusion, I am a bit on the fence for this paper. The idea is interesting and definitely worth exploring but to me a more thorough discussion and analysis of why tensor decomposition techniques are relevant is missing. Still, the approach is original and this paper may spark future work further exploring these questions, so I recommend acceptance.\", \"Comments / Questions *\", \"Paragraph \\\"Latent high-order parametrization of the network\\\". If I understand correctly, the core tensor G in the decomposition as the same size of W, so at this stage W is not parameterized as a low rank tensor (W is actually over-parmeterized). This is only when the stochastic vectors \\\\lambda are introduced that the Tucker rank of W is implicitly reduced. This could be clarified.\", \"Is stochasticity preserved at test time (unlike when using dropout but like in Wang, et al. (2018))?\", \"What is the metric used in Table 1 to compare the models?\", \"Would it make sense to explore other tensor decomposition models (e.g. CP, tensor train, tensor ring, ...)? Are there any particular reasoning motivating the choice of Tucker?\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose to use randomized tensor factorization in the weight space as a defense to adversarial attack, which builds upon the existing works on using randomization on the weights or activation as a defense methods.\", \"pros\": \"1. The idea of using randomized tensor factorization for dense is novel\\n2. It seems that this defense is robust to large perturbation (epsilon), and the accuracy on clean data is high when combined with PGD adv.training.\", \"cons\": \"1. I don't understand why using randomization in the latent space of the weights can retain the classification accuracy on clean data. The authors say that this is because both the weights and the activations are not sparse. But I don't understand the relation between sparsity and accuracy. Can the author can provide some evidence on this (probably from the previous acceleration literatures). Besides, I think an accuracy of 90.1 on CIFAR10 (Tab.2) is not high.\\n2. As in the review written by Anthony Wittmer, the author should include experiments to check the obfuscated gradient issue.\"}",
"{\"comment\": \"Hi Anthony,\\n\\nThank you for your interest in our paper. \\n\\nIn Section 5, \\\"Defending against omniscient attacker\\\", we hoped to address that very point. In that scenario, the attacker has access to the full (unrandomized) weights and uses these to perform the attack. The idea is that these unrandomized weights could be obtained by accumulating forward passes as suggested in BPDA[1].\\nPlease note that an important point of our approach is that we do not randomize the weights directly. Instead, we apply randomization in the latent subspace spanned by the low-rank structure imposed. You can think of it as a stochastic regularization applied to the *rank* of the tensor factorization.\\n\\nOne could argue that applying [1] would result in different results (e.g. [1] acts as an ensemble of models). To verify, in addition to the above scenario and following your comments, we ran the following additional experiment (BPDA [1]):\\nat each iteration of gradient descent, for each convolutional layer, instead of taking a step in the direction of $\\\\nabla_x f(x)$ we move in the direction of $\\\\sum_{i=1}^{k}\\\\nabla_x f(x)$ where each pass has the weights randomized in the latent space using our approach. \\n\\nWe report here the accuracy (on CIFAR 10), obtained using our best model, for various values of $\\\\epsilon$:\\n+---------------+----------------------------+\\n| | Epsilon |\\n| Attack +-------+---------+---------+\\n| | 2 | 8 | 16 |\\n+---------------+-------+--------+----------+\\n| BPDA [1] | 83.3 | 54.9 | 43.8 |\\n+---------------+-------+--------+----------+\\n\\n\\n\\nWhile in [1] the authors use $k=10$ we try with up to $k=20$ but without noticing any significant increase in the success rate of the attack. The PGD attack itself was run for 500 iterations as in [1]. These results are in line with the results we reported in the paper, see Table 2 in the manuscript.\\n\\nThanks,\\nThe authors.\", \"title\": \"Additional comparison wtih BPDA [1]\"}",
"{\"comment\": \"Hi,\\n\\nI find the evaluation on the black-box attacks is missing in this paper, which is important, because that if a model causes obfuscated gradients, black-box attacks perform better than white-box attacks[1]. \\n\\nSome defend methods based on the randomization techniques have been broken by BPDA[1] or Nattack[2]. \\nSince the proposed method adopts some randomization techniques, I have a little doubt whether the proposed model causes obfuscated gradients to give a false sense of security.\\n\\nIn order to check whether obfuscated gradients has happened, BPDA[1] or Nattack[2] is a better choice to evaluate the models.\\n\\n[1] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML 2018\\n[2] NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks. ICML 2019\", \"title\": \"Evaluation questions and obfuscated gradients\"}"
]
} |
BJl7mxBYvB | Robust Reinforcement Learning via Adversarial Training with Langevin Dynamics | [
"Huang Yu-Ting",
"Parameswaran Kamalaruban",
"Paul Rolland",
"Ya-Ping Hsieh",
"Volkan Cevher"
] | We re-think Two-Player Reinforcement Learning (RL) as an instance of a distribution sampling problem in infinite dimensions. Using the powerful Stochastic Gradient Langevin Dynamics, we propose a new two-player RL algorithm, which is a sampling variant of the two-player policy gradient method. Our new algorithm consistently outperforms existing baselines, in terms of generalization across differing training and testing conditions, on several MuJoCo environments.
"deep reinforcement learning",
"robust reinforcement learning",
"min-max problem"
] | Reject | https://openreview.net/pdf?id=BJl7mxBYvB | https://openreview.net/forum?id=BJl7mxBYvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ccZjzpHCsU",
"SkxOFmx2iB",
"Hkxwl7e3jS",
"B1e-fXEoiS",
"BkgcrZNojH",
"SJxLet99sB",
"r1lU1m95oS",
"HkxECxAUiB",
"SyxZyNppYB",
"B1eUcyvaFH",
"B1xt4BU9Fr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743185,
1573811072409,
1573810927300,
1573761800966,
1573761346013,
1573722349790,
1573720798181,
1573474508050,
1571832793405,
1571807117766,
1571607856734
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2205/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2205/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2205/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2205/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2205/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2205/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2205/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2205/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2205/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2205/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors address the problem of robust reinforcement learning. They propose an adversarial perspective on robustness. Improving the robustness can now be seen as two agent playing a competitive game, which means that in many cases the first agent needs to play a mixed strategy. The authors propose an algorithm for optimizing such mixed strategies.\\n\\nAlthough the reviewers are convinced of the relevance of the work (as a first approach of Bayesian learning to reach mixed Nash equilibria, which is useful not only for robustness but for any problem that can be formulated as zero-sum game requiring a mixed strategy), they are not completely convinced by the work in current state. Three of the reviewers commented on the experiments not being rigorous and convincing enough in current form, and thus not (yet!) being able to recommend acceptance to ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"additional remark\", \"comment\": \"We will clearly acknowledge that the PR-MDP setting in [2] has convergence guarantees in the text when we update to add the openAI hide-and-seek environment benchmark.\\n\\nNote however that our work considers the problematic NR-MDP setting.\"}",
"{\"title\": \"answer to the question\", \"comment\": \"Any general min-max (mixed-NE) problem given in Eq.(3) can be solved using the infinite-dimensional mirror descent algorithm (Algorithm 2 in the appendix), with provable convergence guarantee [1]. In section 2.2, we briefly reviewed the existing theory from [1]. In section 3, we translated the two-player RL problem in the mixed-NE form, c.f., Eq.(5). Then the theory from [1] naturally follows. For experiments, we used Algorithm 1, adapted to DDPG.\\n\\nWe can further clarify the remark in the paper. We can also tone this down, if reviewers find it too ambitious. \\n\\n[1] Finding mixed nash equilibria of generative adversarial networks - Hsieh et al. 2019\"}",
"{\"title\": \"Additional remark\", \"comment\": \"I am almost certain that in [1] (the baseline) the authors provide two forms of robustness, named PR and NR. While the NR has problems, as the authors have stated in the paper, the PR does have convergence guarantees in RL (in the tabular case).\"}",
"{\"title\": \"repeated question\", \"comment\": \"Questions to authors\\nYou write \\\"Our paper precisely bridges this gap between theory and practice in previous works, by proposing the \\ufb01rst theoretically convergent algorithm for robust RL\\\". What is the exact mathematical statement here? Does this refer to the algorithm that is used in the numerical experiments?\"}",
"{\"title\": \"numerical evidence is strong and our *technical contribution* is also sufficient.\", \"comment\": \"Dear Reviewers,\\n\\nThe reviewers criticize that the theoretical contribution may not be sufficient and that the numerical evidence should be stronger. \\n\\nWe included additional, stronger numerical evidence in the new revision, including capturing a failure case in the baseline. Please see further our previous comment. \\n\\nWe now would like to argue that the technical contribution is also sufficient. \\n\\nFirst of all, thanks for acknowledging that our approach as the first approach of using SGLD to reach the mixed Nash equilibrium in reinforcement learning (RL). The mixed Nash perspective should be investigated further as it also sheds light into popular problems, such as self-play. As a result, bringing this to the attention of the literature is important. \\n\\nWhile the two player algorithm via the Langevin dynamics is the same algorithm from [1], there is no need to improve on it since it is already theoretically grounded. We do not need to provide new infinite dimensional convergence theorems to show that it is useful in RL. We thought that the mixed setting itself is novel and that we do not need to obfuscate the contributions, and hence, we relied on a state-of-the-art algorithm [1]. \\n\\nHowever, we observe in our paper that the algorithm in [1] is computationally demanding even though it uses a mean approximation in its inner loop (which already saves significant resources). \\n\\nAs a result, we introduced a new approximation by setting delta=0 and K_t=1. You can see this as just applying entropic mirror descent with a short inner loop for a sampling problem that has a dirac delta solution (i.e., there is no mixed solution but just a pure solution). \\n\\nEven in this pure case, the new algorithm performs superior to the baseline (i.e., non-robust DDPG algorithm) with similar per iteration computational complexity to DDPG itself; cf., Figure 3. This simultaneous improvement should also not be understated since it has implications in all the applications of DDPG. \\n\\nIn the light of these observations, we respectfully ask the reviewers to acknowledge that our novelty is sufficient for publication. \\n\\nbest,\\nAuthors \\n\\n\\n[1] Finding mixed nash equilibria of generative adversarial networks - Hsieh et al. 2019\"}",
"{\"title\": \"on the strength of the numerical evidence -- inverted pendulum works perfectly\", \"comment\": \"Dear Reviewers,\", \"we_now_include_new_numerical_evidence_to_support_our_case\": \"1. We obtain superior performance on the inverted pendulum, which is the failure case for [1]; cf., Figures 1 and 3.\\n\\nRegarding these two figures, we would like to emphasize that the baseline in Figure 1 is the action robust-DDPG algorithm proposed in [1], and the baseline in Figure 3 is the standard non-robust DDPG algorithm, which is the baseline for [1].\\n\\n2. We include additional results with MuJoCo environments: Swimmer, Reacher, and Humanoid. SGLD-DDPG performs clearly better than the baseline [1] in these examples. \\n\\nRegarding the additional as well as the previous results, note that we have done all the standard/common set of experiments reported in robust deep-RL papers. In particular, please compare our numerical evidence and that of [1]: Both have 8 MuJoCo environments. Finally, similar to [1], we have also followed the best practices prescribed in [2]. \\n\\n3. We are currently trying to setup the hide and seek environment from openAI, which creates a stylized two-player case. Since the setting is simple, we expect both our algorithm and the baseline to perform similarly; however, it is nonetheless a great warm up exercise to show the contrast that follows with superior performance on the MuJoCo environment as well as the inverted pendulum. We hope to include this result before the rebuttal deadline; we can for sure include it in the camera ready, if accepted. \\n\\nIn summary, we believe that our numerical evidence is more than sufficient. In the light of this, we respectfully ask the reviewers to reconsider our score.\\n\\nbest, \\nAuthors\\n\\n\\n\\n[1] Action Robust Reinforcement Learning and Applications in Continuous Control - Tessler et al. 2019\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Overall this seems like the first approach of using Bayesian learning in order to reach mixed Nash equilibrium in RL. This is super-important, since many problems can be formulated as zero-sum games for which the solution is not necessarily a pure strategy.\\nAs [2] have already shown the ability of this concept to find mixed equilibria in GANs, the main novelty is the introduction to RL.\\n\\nAs this work does not introduce new theory or a dramatic new concept, I feel that the acceptance of this work lies mainly on the empirical side. In my opinion, the experiments need to be more convincing - for instance, include additional domains (such as the Inverted Pendulum which was a failure case in [1]) and additional forms of robustness (e.g., the probabilistic-robust variant from [1] which is based on deterministic strategies and was shown to work better, and RARL [3] which test robustness to external disturbances).\\nAn addition option is to build a low-dimensional toy problem in which the exact solution is known. This will enable you to show that while the naive solution in [1] either does not converge or converges to sub-optimal solutions, the SGLD approach is capable of finding superior solutions (hopefully the global optimum).\\n\\nThe idea is in the right direction. However, in my opinion, it is not there yet and is thus not ready for ICLR.\\n\\n--- Post Rebuttal ---\\n\\nI stand by my original assessment. I feel that such work needs to be more convincing. I for one would feel more confident had the authors provided simpler experiments in which they show that their approach indeed converges to the mixed Nash equilibria while the NR-DDPG approach from [1] does not.\\nI did overall like this direction and I believe Robustness in RL is very important.\\n\\n\\n[1] Action Robust Reinforcement Learning and Applications in Continuous Control - Tessler et al. 2019\\n[2] Finding mixed nash equilibria of generative adversarial networks - Hsieh et al. 2019\\n[3] Robust Adversarial Reinforcement Learning - Pinto et al. 2017\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a new algorithm for solving Noisy Robust MDPs (NR-MDPs) based on computing an approximate gradient using stochastic gradient langevin dynamics (SGLD). THE NR-MDPs can be thought of as an agent learning in an adversarial environment.\\nThe paper is well written and the references pointed to give sufficient background to understand the problem undertaken.\\nMost of the theory comes from the \\\"Finding mixed nash equilibria of generative adversarial networks.\\\" (ICML, 2019), and the papers main contribution lies in applying the theory to reinforcement learning.\\nIt is not altogether unexpected though that Langevin Dynamics give better results than the more standard gradient-based approach considered for saddle-point problems, that's what they are supposed to do. But, this seems to be the first time SGLD has been applied to such a problem.\\nThe authors compare on the MuJoCo benchmark with common but not identical instances to \\\"Action robust reinforcement learning and applications in continuous control\\\", which also provides the baseline algorithm used for comparison. Mentioned in this paper is the difficulty of solving the \\\"inverted pendulum\\\" instance by the algorithm proposed therein. It is infact mentioned as a failure case. Maybe, the authors can show results on the same. \\nAdditionally, maybe it would make sense to have an ablation study of the hyper-parameters from Table 1.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper approaches two-player zero-sum Markov game by using mixed Nash equilibrium by sampling from randomized policies. The authors propose that the optimization is done using Stochastic Gradient Langevin Dynamics iterations.\\nIn my opinion, the use of SGLD to this problem is potentially useful - as partially proved by this work. However, this application is obvious and does not have enough novelty merits to be accepted to this ICLR. The experiment make senses but is not rigorous enough to assure the practitioners on the improvement over baseline. \\nI recommend the authors to further develop this idea in both theoretical improvement and experiment settings to further explore the direction.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Update: I have read the author responses. As mentioned before I don't have the background to carefully assess the experiments and will have to rely on my fellow reviewers here. However I stand by my opinion that the experimental results would need to be very strong to warrant an acceptance, since the conceptual contribution is relatively limited.\\nI was also disappointed by the authors unwillingness to back up their claims of \\\"theoretically convergent\\\" with a proof, or at least a theorem. Therefore I still tend towards rejecting the paper.\\n-----------------------------------------------\\nSummary\\nThe present work proposes to use the recently developed Stochastic Gradient Langevin Dynamics (SGLD) to compute mixed approximate equilibria in two-player reinforcement learning, following the methodology proposed by hsieh et al for generative adversarial networks. The authors report practical improvements compared to pure strategies computed as proposed by Tessler et al.\\n\\nDecision\\nThe idea of using randomized strategies for two-player reinforcement learning is interesting and natural. However, as the authors note, mixed strategies are classical in game theory. Furthermore, the adaption of the methodology of hsieh et al is straightforward, limiting the strength of the theoretical contribution. \\nUnfortunately, the theoretical contribution is not stated consistently, since in the introduction, the paper states \\\"Our paper precisely bridges this gap between theory and practice in previous works, by proposing the \\ufb01rst theoretically convergent algorithm for robust RL\\\", but this claim is missing in the abstract or conclusion, and there is no theorem that justifies this rather strong claim. \\nIn my opinion, this paper should only be accepted if it provides very convincing numerical experiments, which I am not qualified to assess. I happy to increase my score if the experimental results are deemed strong by the reviewers with more expertise in practical reinforcement learning.\\n\\nQuestions to authors\\nYou write \\\"Our paper precisely bridges this gap between theory and practice in previous works, by proposing the \\ufb01rst theoretically convergent algorithm for robust RL\\\". What is the exact mathematical statement here? Does this refer to the algorithm that is used in the numerical experiments?\"}"
]
} |
r1lQQeHYPr | Embodied Multimodal Multitask Learning | [
"Devendra Singh Chaplot",
"Lisa Lee",
"Ruslan Salakhutdinov",
"Devi Parikh",
"Dhruv Batra"
] | Visually-grounded embodied language learning models have recently shown to be effective at learning multiple multimodal tasks such as following navigational instructions and answering questions. In this paper, we address two key limitations of these models, (a) the inability to transfer the grounded knowledge across different tasks and (b) the inability to transfer to new words and concepts not seen during training using only a few examples. We propose a multitask model which facilitates knowledge transfer across tasks by disentangling the knowledge of words and visual attributes in the intermediate representations. We create scenarios and datasets to quantify cross-task knowledge transfer and show that the proposed model outperforms a range of baselines in simulated 3D environments. We also show that this disentanglement of representations makes our model modular and interpretable which allows for transfer to instructions containing new concepts. | [
"Visual Grounding",
"Semantic Goal Navigation",
"Embodied Question Answering"
] | Reject | https://openreview.net/pdf?id=r1lQQeHYPr | https://openreview.net/forum?id=r1lQQeHYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"hl-r2zcfr",
"B1g45ZScor",
"BJehkWrcjS",
"Hkl2Ber9jS",
"HJxnPOdRYr",
"HyxBKMA6KB",
"HJgz_vSntH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743156,
1573699980375,
1573699811875,
1573699652193,
1571879011994,
1571836541263,
1571735402233
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2204/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2204/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2204/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2204/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2204/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2204/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper offers a new approach to cross-modal embodied learning that aims to overcome limited vocabulary and other issues. Reviews are mixed. I concur with the two reviewers who say the work is interesting but the contribution is not sufficiently clear for acceptance at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks for the review and helpful feedback. We address your concerns below:\\n\\n> First, the paper uses a new environment to evaluate the SGN and EQA task instead of the benchmark environments for these two tasks, making it difficult to compare performance to previous work\\n\\nWe agree with you that reproducibility and benchmarking is important. And this is the reason we did not use the House3D EQA dataset as it requires the SUNCG dataset which is no longer available. For SGN, we use the same dataset as used by Chaplot et. al. 2018 [1]. \\n\\n\\n> Also, the paper only compares to relatively out-of-date approaches on EQA and SGN, instead of the state-of-the-art approaches on them.\\n\\nPlease note that our baselines aren't weak -- as shown by the multi-task training performance of our baselines in Table 2, they achieve nearly 100% performance on both SGN and EQA during training. Testing a newer method will not improve this performance. The problem is not that these baselines are ineffective at SGN or EQA but that these models are designed for a single task and hence do not perform well when tested for cross-task knowledge transfer. And this issue remains with the state-of-the-art single-task approaches for EQA and SGN. In fact, no prior work has proposed a model for cross-task knowledge transfer for embodied multimodal learning, so we cannot easily compare to prior work (we do construct reasonable baselines and ablations, as described in section 5.1 and 5.2). Having said that, if there are any specific recommendations for a baseline to add that we may have missed, we will be happy to add it in the revised version.\\n\\n\\n> In addition, the paper should also discuss its connections to other multi-task learning approaches in the related work section.\\n\\nWe did not discuss multitask learning in the related work as we are not aware of any multitask learning approaches specific to embodied multimodal learning. We will add a discussion about multitask learning in non-embodied multimodal settings in the revised version.\\n\\n[1] Gated-Attention Architectures for Task-Oriented Language Grounding\\nDevendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, Ruslan Salakhutdinov\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thanks for the helpful review and feedback. We address your concerns below:\\n\\n> However, the generality of the proposed method, i.e., dual attention, is still ambiguous \\u2026\\n\\nWe argue that the proposed dual-attention is generally applicable to any multimodal task which requires grounding words in visual concepts. It provides a general way of aligning textual and visual representations in any multimodal task such that they can be reused for other tasks. Although it is evaluated on two tasks in a specific environment, the design of the dual-attention unit itself is not specific to the environment or the task.\\n\\n\\n> Even though the title has the phrase \\\"multitask learning,\\\" what the system copes with is just two specific tasks. If the system is designed to solve the two specific tasks simultaneously, it's better to change the title. The title seems to be misleading.\\n\\nNote that we called semantic goal navigation as a single task in the paper for easier understanding, whereas prior work [1, 2, 3] has called each instruction as a different task and handling multiple instructions as multi-task learning. In our work, we not only handle multiple instructions but also multiple questions. Thus, under the notation of past work, we are solving multiple tasks. Having said that, we see your concern, and we are happy to change the title to \\u201cEmbodied Multimodal Learning and Knowledge Transfer between Semantic Goal Navigation and Embodied Question Answering\\u201d or just \\u201cEmbodied Multimodal Learning and Knowledge Transfer\\u201d if the reviewers and the Area Chair find this more suitable. We are also open to other suggestions from you.\\n\\n\\n> Some of the main contributions, e.g., \\\"modularity and interpretability\\\" and \\\"transfer to new concepts,\\\" are not evaluated quantitatively.\\n\\nWe believe these contributions are evaluated quantitatively. In Section 5.4: \\\"Transfer to new concepts\\\"', we evaluate the model's capability of transferring to new concepts with quantitative results in Table 4. The results show that our model achieves a success rate of 0.97 on average over different types of instructions involving new object types and attributes. These results also demonstrate not only our model\\u2019s ability to handle new concepts but also to combine the knowledge of existing concepts with a new concept without any additional policy training. The above results are possible because of the modularity and interpretability of the model, as it allows us to add the output of external object detectors as intermediate representation in our model. \\n\\nFurthermore, in Section 5.3 \\u201cHandling relational tasks\\u201d we show that the modularity and interpretability of our model also allow us to use trainable neural modules to handle relational tasks involving negation and spatial relationships and also tackle relational instructions involving new concepts. \\n\\nThanks for pointing out the typo. 
We will correct it in the revised version.\\n\\n[1] Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning\\nJunhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli\\n\\n[2] Grounded Language Learning in a Simulated 3D World\\nKarl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, Phil Blunsom\\n\\n[3] Gated-Attention Architectures for Task-Oriented Language Grounding\\nDevendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, Ruslan Salakhutdinov\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for the review and helpful feedback. We address your concerns and answer your questions below:\", \"regarding_the_use_of_synthetic_language\": \"The focus of this submission is not tackling natural language but transferring the knowledge of grounded concepts (words grounded to their visual properties) across different embodied multimodal tasks and handling new concepts never seen during training. In our opinion, handling natural language is important but a separate problem in itself. For example, a parallel submission in ICLR (https://openreview.net/forum?id=rklraTNFwB) studies this problem specifically and shows that even embodied agents trained only with synthetic language can be transferred to natural language by using word representations learned by language models trained on large text corpora. Another work [1] shows that embodied instruction-following models can be transferred to unseen natural language synonymous words using GLoVe [2] word embeddings. These approaches could be used to transfer our model to natural language as well.\\n\\nIn order to provide evidence for the above, we conducted additional experiments. We constructed a new test set using synonymous words given by [1] such that each question and instruction in this new test set contains at least one unseen word never seen in any task during training. In order to handle this new test set containing unseen natural language words, we map each unseen word to the closest seen word in our synthetic data using the GLoVe [2] word vector space similar to [1]. The Dual-Attention model achieved a performance of 0.81/0.48 SGN/EQA as compared to the best performance of 0.27/0.19 SGN/EQA (GA) among the baselines. Clearly, we understand that natural language has much more complexity than previously unseen synonyms, but these results and the papers referenced above indicate that models trained with synthetic language can be used with natural language as well.\\n\\n\\n> it might not be aware of the spatial relationships and thus be limited to simple questions. For example, if the question is \\\"What is the object on top of the apple?\\\". To my understanding, the current module would not explicitly handle this one-hop spatial relationship.\\n\\nThe current model can handle relational questions including one-hop spatial relationships as shown in Section 5.3: \\\"Handling Relational Tasks\\\". In this section, we show how a simple extension of the model can address questions and instructions containing \\\"left of\\\", \\\"right of\\\" and \\\"not\\\". We also show visualizations of the learnt representations in Figure 5. We do not tackle \\u2018top of\\u2019 specifically, but \\u2018left of\\u2019 and \\u2018right of\\u2019 are analogous to `top of\\u2019 and tackle one-hop spatial relationship as the review mentions.\\n\\n\\n> I am not sure why the visual attention map x_S could be used as the state of the module.\\n\\nThe visual attention map is passed to the navigation policy because the information in the visual attention map is sufficient for successful navigation. For example, for the instruction, `Go to the red torch\\u2019 if the visual attention map identifies the location of red and torch things, that information is sufficient for navigating to the red torch.\\n\\n\\n> After Eqn. 3, the paper says that \\\"ReLU activations ... make all elements positive, ensuring ...\\\". I am confused about the intuition behind this argument because of the softmax activation. 
Softmax will projects 0 to 1. So the sum of the all-zero vector would still be non-zero after softmax. \\n\\nThe purpose of ReLU activations is not to zero-out the prediction after softmax. In fact, ReLU activations were chosen independently of the subsequent softmax operation. The purpose of ReLU activations is to have only positive activations during summation. Positive activations ensure that they aggregate during summation. If there were negative activations, they could potentially cancel out positive activations during summation. \\n\\nThanks for pointing out the typos. We will correct them in the revised version.\\n\\n[1] ACTRCE: Augmenting Experience via Teacher's Advice For Multi-Goal Reinforcement Learning\\nHarris Chan, Yuhuai Wu, Jamie Kiros, Sanja Fidler, Jimmy Ba\\n\\n[2] Glove: Global vectors for word representation. \\nJeffrey Pennington, Richard Socher, and Christopher Manning\"}",
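The cancellation point made in this response is easy to check numerically; the tiny example below is ours, not from the paper:

```python
import numpy as np

# Signed attention activations can cancel in a sum; ReLU'd ones aggregate.
raw = np.array([0.9, -0.8, 0.7, -0.6])
print(raw.sum())                 # 0.2  -- opposite signs nearly cancel
print(np.maximum(raw, 0).sum())  # 1.6  -- positive evidence accumulates
```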
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"I thank the authors for their detailed response and appreciate their hard work in bringing us this paper.\\n\\nI think that my main point is that this work relies too much on the extra information/constraints in the synthetic env. E.g., 1. since the vocab size is small, thus the feature map could be designed 'equal to the vocabulary size' 2. The bag-of-words representation is effective but it is not the case for natural language. Although the authors kindly point me to some recent works on sim2real, I am still not convinced whether this proposed method could be transferred to real setups based on the referenced papers.\\n\\nHowever, it is a personal research taste that I always take real setup into considerations, because I have worked on both synthetic and real setup (on both lang and visn sides) for years and observed a large gap. My opinion is that methods of synthetic setups are not naturally convertible to the real ones. If AC/meta-reviewer considers the ability of vision-and-language interactions could be effectively studied through this setup with synthetic language and simulated-unrealistic images, I am OK with acceptance. I have downgraded my confidence scores (but kept my overall score) for this purpose.\\n\\n\\n-----------------------------------------------------------------------------------\", \"pros\": \"(1) The proposed model makes sense to me, which tries to have two attention layers to extract the information related to the questions. It seems to have the ability to deal with \\\"and\\\"/\\\"or\\\" logical relationships as well. \\n\\n(2) Fig. 4 is impressive. It is clear and well-designed. \\n\\n(3) The results in Table 2 are convincing. They show that both the proposed dual-attention method and multi-task learning would contribute to the performance.\", \"cons\": \"(1) It seems that the two main contributions are related to the language. Thus the synthetic language might not be proper to study. For example, in Eqn. 2, the first GA multiplies the BOW vector with the vision feature map, which could filter out unrelated instruction. This method could not be directly transferred to a real setup where natural language and natural images are involved.\\n\\n(2) The designed attention modules is lack of generalizability. It implements a two-step attention module, while the first step selects the related visual regions w.r.t the words and the second step gathers the information regarding these attended regions. However, it might not be aware of the spatial relationships and thus be limited to simple questions. For example, if the question is \\\"What is the object on top of the apple?\\\". To my understanding, the current module would not explicitly handle this one-hop spatial relationship.\", \"comments\": \"(1) According to Sec. 3, 70 instructions and 29 questions are involved in this task. Using GRU to encoder these questions seems to be redundant. A simple one-hot embedding for these instructions might already be enough to encode the information.\\n\\n(2) I am not sure why the visual attention map x_S could be used as the state of the module.\\n\\n(3) After Eqn. 3, the paper says that \\\"ReLU activations ... make all elements positive, ensuring ...\\\". 
I am confused about the intuition behind this argument because of the softmax activation. Softmax will projects 0 to 1. So the sum of the all-zero vector would still be non-zero after softmax.\", \"typo\": [\"In Sec. 4, X_{BoW} \\\\in \\\\{0, 1\\\\}^V.\", \"In Sec. 4.1, \\\"this matrix is multiplied ...\\\" --> this tensor.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"*Summary\\n\\nThe paper describes a Dual-Attention model using Gated- and Spatial-Attention for disentanglement of attributes in feature representations for visually-grounded multitask learning. It has been shown that these models are capable of learning navigational instructions and answering questions. However, they addressed two limitations of previous works about visually-grounded embodied language learning models. The first is the inability to transfer grounded knowledge across different\\ntasks, and the other is the inability to transfer to new words and concepts not seen during the training phase. To overcome the problem, a multitask model is introduced. The model can transfer knowledge across tasks via learning disentanglement of the knowledge of words and visual attributes. The paper shows that the proposed model outperforms a range of baselines in simulated 3D environments. \\n\\n\\n*Decision and supporting arguments\\n\\nI think the paper is on the borderline. The reason is as follows. \\nThe motivation of the study is described appropriately, and the performance is quantitatively evaluated, as shown while Table 2. \\nHowever, the generality of the proposed method, i.e., dual attention, is still ambiguous. Though the devised module performs effectively in this specific simulation environment and specific two tasks, an explanation of the theoretical basis and generality of dual attention seem to be missing.\\nEven though the title has the phrase \\\"multitask learning,\\\" what the system copes with is just two specific tasks. If the system is designed to solve the two specific tasks simultaneously, it's better to change the title. The title seems to be misleading.\\nSome of the main contributions, e.g., \\\"modularity and interpretability\\\" and \\\"transfer to new concepts,\\\" are not evaluated quantitatively.\\n\\n\\n*Additional feedback\\nIn conclusion, \\\"interpretablew\\\" -> \\\"interpretable\\\"\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper explores multi-task learning in embodied environments and proposes a Dual-Attention Model that disentangles the knowledge of words and visual attributes in the intermediate representations. It addresses two tasks, namely Semantic Goal Navigation (SGN) and Embodied Question Answering (EQA), using a simple synthetic environment. The paper compares against a few simple baselines and baselines adapted from models in each task.\\n\\nI would recommend for acceptance, as the experimental results show that the proposed approach successfully transfers knowledge across tasks.\\n\\nHowever, I would also like to note that the paper has a few drawbacks.\\n\\nFirst, the paper uses a new environment to evaluate the SGN and EQA task instead of the benchmark environments for these two tasks, making it difficult to compare performance to previous work. The environment in the paper is small (compared to e.g., House3D for EQA) and has a limited variety. Also, the paper only compares to relatively out-of-date approaches on EQA and SGN, instead of the state-of-the-art approaches on them.\\n\\nIn addition, the paper should also discuss its connections to other multi-task learning approaches in the related work section.\"}"
]
} |
r1gfQgSFDr | High Fidelity Speech Synthesis with Adversarial Networks | [
"Mikołaj Bińkowski",
"Jeff Donahue",
"Sander Dieleman",
"Aidan Clark",
"Erich Elsen",
"Norman Casagrande",
"Luis C. Cobo",
"Karen Simonyan"
] | Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images. However, their application in the audio domain has received limited attention,
and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech. To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech.
Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyse the audio both in terms of general realism, as well as how well the audio corresponds to the utterance that should be pronounced. To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean Opinion Score), as well as novel quantitative metrics (Fréchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS. We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator. Listen to GAN-TTS reading this abstract at https://storage.googleapis.com/deepmind-media/research/abstract.wav | [
"texttospeech",
"speechsynthesis",
"audiosynthesis",
"gans",
"generativeadversarialnetworks",
"implicitgenerativemodels"
] | Accept (Talk) | https://openreview.net/pdf?id=r1gfQgSFDr | https://openreview.net/forum?id=r1gfQgSFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vmFSpeIS8p",
"rkg0V5P9iB",
"r1gpl85tjH",
"SJglWB9FoS",
"HylKa4cYjH",
"S1gKvNctjS",
"Hyg3Zr60FS",
"BJlJ-YsaKS",
"rklAPQJOKS",
"Hygn03xGFB",
"rkxBtBvCuB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798743128,
1573710389657,
1573656053500,
1573655799581,
1573655745435,
1573655648911,
1571898627686,
1571825910939,
1571447654183,
1571060948364,
1570825596753
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2203/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper2203/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2203/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2203/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2203/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2203/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2203/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2203/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2203/Authors"
],
[
"~Rithesh_Kumar1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"The authors design a GAN-based text-to-speech synthesis model that performs competitively with state-of-the-art synthesizers. The reviewers and I agree that this appears to be the first really successful effort at GAN-based synthesis. Additional positives are that the model is designed to be highly parallelisable, and that the authors also propose several automatic measures of performance in addition to reporting human mean opinion scores. The automatic measures correlate well (though far from perfectly) with human judgments, and in any case are a nice contribution to the area of evaluation of generative models. It would be even more convincing if the authors presented human A/B forced-choice test results (in addition to the mean opinion scores), which are often included in speech synthesis evaluation, but this is a minor quibble.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviewers, any comments on the author responses?\", \"comment\": \"Dear Reviewers, thanks for your thoughtful input on this submission! \\u00a0The authors have now responded to your comments. \\u00a0Please be sure to go through their replies and revisions. \\u00a0If you have additional feedback or questions, it would be great to get them this week while the authors still have the opportunity to respond/revise further. \\u00a0Thanks!\"}",
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thank you for the detailed comments.\\n\\n1. We did not do experiments with such generator architecture. Although we have considered other architectural choices for generator and ways of conditioning, our early experiments showed that our residual-upsampling scheme is more efficient than parallel wavenet\\u2019s full-resolution scheme. The correspondence between temporal dimensions of the conditioning and the waveform also seemed important and hence we decided to keep the proposed generator architecture throughout.\\n\\n2. Indeed we believe that the use of the ensemble of random window discriminators was the main factor behind the performance we obtained. This, however, breaks down to three steps: \\n(a) switching from full discriminator to random-window discriminator(s),\\n(b) including unconditional random window discriminator(s),\\n(c) including several different window sizes in the ensemble.\\nAs can be seen in Table 1., (a) already brings a huge improvement (from ~1.9 to ~3.4 MOS). (b) and (c) also seem to be important; we have considered fixing the window size or using only conditional RWDs, but all of such trials turned out considerably worse. Only models combining all of (a) - (c) made it past MOS of 4.1.\\n\\n3. Indeed D^c_k and D^u_k should have been clearly defined there; we clarified this notation in the updated version of the submission.\\n\\n4. For the training stability, please see our joint response. As for the role of the batch size, we fixed it throughout all experiments, but we will include analysis of model stability with smaller batch sizes in the final version of the paper.\\n\\n5. Thank you for pointing out this related work. We refer to it in the updated version of the submission.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Thank you for your comments. We have added a pseudo-code description of TTS-GAN training algorithm to the updated submission. We believe that, together with other architectural details present in the paper, it makes our work reproducible.\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you for your comments. Please refer to the joint response in regards to training stability and mode collapse.\"}",
"{\"title\": \"Joint response to all Reviewers\", \"comment\": \"We would like to thank all reviewers for their effort and their useful comments.\\n\\nWe have updated our submission, adding several references to related work and pseudocode for training GAN-TTS in Appendix D.\\n\\n*Stability and Mode Collapse*\", \"there_are_two_phenomena_in_gan_training\": \"(i) mode collapse and (ii) model collapse. The first manifests itself in the lack of sample diversity, the second is essentially training instability.\\nIn the paper, we didn't claim that our model doesn't have the former (mode collapse). In fact, for conditional generative models like text-to-speech, mode collapse is not necessarily a problem. Having said that, based on our subjective assessment, feeding different noise z samples leads to slightly different speech samples, so the model does capture some sample diversity given the conditioning.\\nWhat we did claim in Section 5.2 is that we didn't observe the second phenomenon (model collapse), i.e. training is stable. We attribute this to data augmentation, both explicit - due to training on random crops, and implicit - through discriminating random windows. The only setting in which we observed model collapse was the one with full-window discriminator; settings with even single random window discriminator, on the other hand, led to stable training.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper puts forth adversarial architectures for TTS. Currently, there aren't many examples (e.g. Donahue et al, Engel et al. referenced in paper) of GANs being used successfully in TTS, so this papers in this area are significant.\\n\\nThe architectures proposed are convolutional (in the manner of Yu and Koltun), with increasing receptive field sizes taking into account the long term dependency structure inherent in speech signals. The input to the generator are linguistic and pitch signals - extracted externally, and noise. In that sense, we are working with a conditional GAN. \\n\\nI found the discriminator design very interesting. As the comment below notes, it is a sort of patch GAN discriminator (See pix2pix, and this comment from Philip Isola - https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/39) and that is could be quite significant in that it classifies at different scales. In the image world, having a single discriminator for the whole model would not take into account local structure of the images. Likewise, perhaps we can imagine something similar in the case of audio at varying scales - in fact, audio dependencies are even more long range. That might be one reason why the variable window sizes work here. \\n\\nThe paper also presents to image analogues for metrics based on FID and the KID, with the features being taken from DeepSpeech2. \\n\\nI found the speech sample presented very convincing. In general, the architectures are also presented quite clearly, so it seems that we might be able to reproduce these experiments in our own practice. It is also promising that producing good speech could be achieved by a non-autoregressive or attention based architecture.\\n\\nThe authors mention that they hardly encounter any issues with training stability and mode collapse. Is that because of the design of the multiple discriminator architecture?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes to enable GAN based TTS in the time domain with the careful designs of the (non-autoregressive) generator and discriminator. There have been various trials of GAN-TTS but not so many success and I'm glad to hear that the proposed method seems to enable GAN-TTS with fast inference thanks to the non-autoregressive property. The method also proposes new objective measures inspired by the image recognition network based on the high-level features generated by end-to-end ASR, which is also another important contribution of this paper.\\n\\nMy concern for this paper is reproducibility. Although I really appreciate the authors' efforts on providing implementational details in the appendix, the code and data do not seem to be publicly available, and I'm expecting that the implementation of this technique is relatively hard due to their complex designs of the generator and discriminator. Apart from that, the paper is well written overall by well describing the trend of GAN studies in the image processing and the application of such image processing oriented GAN techniques to TTS.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"I want thank the authors for solving this long-standing GAN challenge in raw waveform synthesis. With all due respect, previous GAN trials for audio synthesis are inspiring, but their audio qualities are far away from the state-of-the-art results. Although the speech fidelity of GAN-TTS is still worse than WaveNet and Parallel WaveNet from the posted sample, it has begun to close the significant performance gap that has existed between autoregressive models and GANs for raw audios. Overall, this is a very good paper with significant contributions to the filed.\", \"detailed_comment\": \"1, In WaveNet, the conditional features (linguistic / mel-spectrogram) are added as bias terms in the convolutional layers. Did the authors tried this alternative architecture for the generator, which uses the white noisy z as network input (similar as flow-based models, e.g., Parallel WaveNet) and the conditional features as bias term in the convolutional layers? \\n\\n2, Could the authors comment the importance of serval architecture choices in this work? From Table 1, it seems to me that the ensemble of random window discriminators is the most important (perhaps the only important) contributing factor for the success. For example, the MOS score was boosted from 1.889 to 4.213 by replacing a single full discriminator to the ensemble of RWDs.\\n\\n3, The notations in Eq. (1) and (2) are messy. Although I can figure their meaning from the context, one may clarify certain notations if they appear at the first time. \\n\\n4, The stable training (NO model collapses) is pretty impressive. Could the authors shed some light on the potential reason? Does the ensemble of RWD regularizes the training? What's your experience for training FullD (does not have random window ) and cRWD_1 (only has one random window discriminator)? Are they still very stable? Also, could the authors comment on the importance of large batch size -- 1024 for stable training of GAN-TTS? \\n\\n5, Although there is a notable difference, one may properly mention previous work Yamamoto et al. (2019), which uses GAN as an auxiliary loss within ClariNet and obtains high-fidelity speech ( https://r9y9.github.io/demos/projects/interspeech2019/ ). \\n\\nYamamoto et al. Probability Density Distillation with Generative Adversarial Networks for High-Quality Parallel Waveform Generation. 2019.\\n\\n\\n=== update === \\n\\nThank you for the detailed response. \\n2, Thanks for the elaboration. \\n4, It would be very interesting to see an analysis of model stability with smaller batch sizes.\"}",
"{\"comment\": \"Thank you for sharing your related work. As it was made public after our submission and will be published only in the near future, it cannot be considered prior work. We are looking forward to reading the camera-ready version of your paper, and will include a discussion of similarities and differences in a future version of our paper.\", \"title\": \"Thanks for reference to parallel work\"}",
"{\"comment\": \"We would like to point out that our research paper - MelGAN: Conditional Generative Adversarial Networks for Conditional Waveform Synthesis (accepted as poster presentation at NeurIPS 2019) also performs raw audio generation using generative adversarial networks. Our paper primarily targets the problem of mel spectrogram inversion using Conditional GANs and also show that alternate representations such as VQ-VAE latents, Universal Music Translator encodings can be utilized to generate corresponding raw waveform.\\n\\nMelGAN and the current paper under review (GAN-TTS) have many similarities in their approach. Specifically, both papers use highly similar Generator architectures (residual blocks, dilated convolutions, pattern of upsampling the conditioning information) and Discriminator architectures (multi-scale discriminator and multiple discriminators, patch-discriminator vs random window samping). The difference occurs in the task, where the GAN-TTS model uses text features instead of mel-spectrograms to perform raw audio generation.\\n\\nWe acknowledge that the authors couldn\\u2019t have known this paper since it wasn\\u2019t public. It would be nice if the authors could summarize and discuss the additional insights provided by this paper in the light of the existence of this prior work.\\n\\nWe temporarily share the paper using a google drive link, pending arxiv submission. (https://drive.google.com/file/d/1a_CnqAMkFYEC7pfAkBKvjMeaKiREKPkl/view?usp=sharing)\\n\\nThe final camera ready version of the paper will be available later this month (October 30).\", \"title\": \"Prior work for raw audio generation using Conditional GANs\"}"
]
} |
BkgM7xHYwH | Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory | [
"Antonio Carta",
"Alessandro Sperduti",
"Davide Bacciu"
] | Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective to solve tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memorization using linear autoencoders for sequences. We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks. We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution. The initialization schema can be easily adapted to any recurrent architecture.
We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training. The empirical analysis shows that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT. | [
"recurrent neural networks",
"autoencoders",
"orthogonal RNNs"
] | Reject | https://openreview.net/pdf?id=BkgM7xHYwH | https://openreview.net/forum?id=BkgM7xHYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SNYq9Iegob",
"rJxEiykssr",
"HJgBe0A5sH",
"HJlKoTAcir",
"BJxS8t0k9r",
"B1gi6tditH",
"H1llbY2sdH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743099,
1573740443895,
1573740013427,
1573739937073,
1571969356683,
1571682754852,
1570650359609
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2202/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2202/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2202/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2202/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2202/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2202/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper explores an initialization scheme for the recently introduced linear memory network (LMN) (Bacciu et al., 2019) that is better than random initialization and the approach is tested on various MNIST and TIMIT data sets with positive results.\\n\\nReviewer 3 raised concerns about the breadth of experiments and novelty. Reviewer 2 recognized that the model performs well on its MNIST baselines and had concerns about applicability to larger settings. Reviewer 1 acknowledges a very well written paper, but again raises concerns about the thoroughness of the experiments. The authors responded to all three reviewers, responding that the tasks were chosen to match existing work and that the approach is complementary to LSTMs to solve different tasks. Overall the reviewers did not re-adjust their ratings.\\n\\nThere remains questions on scalability and generality, which makes the paper not yet ready for acceptance. We hope that the reviews support the authors further research.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \">>> 1. The authors claimed the proposed method could help with exploding gradient in training the linear memories. It would be helpful to include some experiments indicating that this was the case (for the baseline) and that this method does indeed help with this problem.\\n>>> 4. In general, it seems that the experiments could be more carefully designed to reflect the contributions of the proposed method. Some suggestions for future edits are, more analysis on gradients, maybe more experiments on the stability of training such as gradients could help.\\n\\nWe agree that this are interesting experiments. We believe that it is especially useful to study the effect on the gradient and training stability when combined with the truncated backpropagation (e.g. as done in the LSTM paper). Unfortunately, we still do not have the final results on these experiments.\\n\\n\\n>>> 2. The experiments on the copy task only showed results for length upto 500, which almost all baseline models are able to solve. I am not too sure how the proposed initialization helps in this case. \\n \\nWe used the experiments on the copy tasks to show that the LMN architecture learns the copy task with a saturating nonlinearity (tanh). As far as we know, this is the only architecture that can do it, while most of the other models use variations of ReLUs.\\n\\n\\n>>> 1. There are some confusions, on P2 \\\"we can construct a simple linear recurrent model which uses the autoencoder to encode the input sequences within a single vector\\\", I think the authors meant encode the input sequences into a sequence of vectors? Equation 1 and 2 suggest that there is a vector m^t per timestep (as oppose to having 1 for the entire sequence). \\n \\nThe state vector of the LAES m^t can be used to reconstruct the entire input sequence x^1, \\u2026 x^t. Therefore, each vector m^t encodes the entire subsequence x^1, \\u2026, x^t. We will update the paper to make this point clearer.\\n\\n\\n>>> 2. Although the copy task was used in ((Arjovsky et al., 2015), I believe the original task was proposed in the following paper and hence this citation should properly be the correct one to cite here\\n \\nThank you for noticing this, we will add the reference.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \">>> Weakness: What do you mean by the \\\"optimal autoencoder\\\"? \\n \\nWe use a linear autoencoder because we can find the optimal solution (in the sense that it optimizes the mean squared error) with a closed-form solution. We approximate this solution by taking a fixed number of components.\\n\\n\\n>>> The performance on TIMIT is worse than the baseline methods. The scale of the experiments is too small. Do you have any experiment results on any large dataset? e.g. Penn Treebank. \\n\\nWe did not perform experiments on PTB, but our expectation is that models based on autoencoding are not a good choice for language modeling tasks. This can be seen by looking at the performance of orthogonal models on language modeling tasks [1,2], which are always inferior to gated models. Our guess is that language modeling does not require the memorization of long sequences, and it is probably sufficient to remember a small amount of information.\\n \\n[1] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning\\nrecurrent networks with long term dependencies. In ICML, pp. 3570\\u20133578, 1 2017. URL \\n[2] Cijo Jose, Moustpaha Cisse, and Francois Fleuret. Kronecker Recurrent Units. In ICML, pp.\\n2385\\u20132394, 5 2018. URL http://arxiv.org/abs/1705.10142.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"You can, of course, obtain the LMN from the LSTM equations by eliminating all the gates and making the CEC update through a generic linear function (in place of the sum). But these are not minor changes and radically change the inductive bias of the models. The LSTM uses the forget gate to reset the cell state activations, while the LMN uses a more efficient encoding and does not forget past activations. The result of this choice is that the LMN is better on tasks that require the memorization of long sequences (e.g. copy task, MNIST). On different tasks, like language modeling, other architectures (like the LSTM) are probably a better choice since the problem does not require explicit memorization.\\n\\n\\nYou are correct that the initialization scheme come from a previous contribution. However, what we show here is that such initialization is more coherent with the assumption and nature of the LMN, rather than for a RNN with non-linear memory. We also show how that the autoencoder initialization can help in some tasks (improving both the results of orthogonal and gated models) and hinder the performance in others. These results show that the proposed approach is complementary to LSTMs and other gated architectures, and that the two approaches can be used to solve different tasks of different nature (in terms of long and short-term memorization abilities). We are not aware of other researches investigating the differences between encoding/orthogonal-based approaches and gated models and the capabilities of the different memorization schemes (which, to the extent of our understanding, appears a relevant topic for ICLR).\\n \\n\\n>>> For the experiment part, the first two tasks are a bit toyish in 2019\\nWe agree that some of the benchmarks are simple tasks. However, the chosen datasets are used to compare against classic benchmarks in the literature of orthogonal RNNs. Most of the papers in the literature use pixel-MNIST.\\n[1] copy task, pixel-MNIST, PTB\\n[2] copy task, TIMIT, pixel-MNIST\\n[3] pixel-MNIST, pixel-CIFAR10\\n[4] copy, MNIST, TIMIT, PTB, MIDI\\n\\nIf the reviewer is aware of different benchmarks allowing to compare with the related literature we will gladly consider it.\\n\\n\\n>>> Even for the TIMIT dataset, the results are a bit far from state-of-the-art which makes the paper's claim less convincing. \\nThank you for noticing this. The results may seem poor because we do not use bidirectional models. This is done to compare against [4]. We will update the paper to highlight this fundamental architectural difference with the literature.\\n \\n \\n[1] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning\\nrecurrent networks with long term dependencies. In ICML, pp. 3570\\u20133578, 1 2017. URL http://arxiv.org/abs/1702.00071.\\n[2] Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, and Les Atlas. Full-Capacity\\nUnitary Recurrent Neural Networks. In NIPS, pp. 4880\\u20134888, 10 2016. URL http://arxiv.org/abs/1611.00035.\\n[3] Bo Chang, Minmin Chen, Eldad Haber, and Ed H. Chi. AntisymmetricRNN: A Dynamical System\\nView on Recurrent Neural Networks. 2 2019. URL http://arxiv.org/abs/1902.09689\\n[4] Cijo Jose, Moustpaha Cisse, and Francois Fleuret. Kronecker Recurrent Units. In ICML, pp.\\n2385\\u20132394, 5 2018. URL http://arxiv.org/abs/1705.10142.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an initialization scheme for the recently introduced linear memory network (LMN) (Bacciu et al., 2019) and the authors claim that this initialization scheme can help improving the model performance of long-term sequential learning problems.\\n\\nMy concerns lie with the novelty of the proposed model and the insufficiency of the experiments. First, the LMN seems to be a simpler version of LSTM and it has no significant advantages compared with other recurrent structures introduced in the past several years.\\nSecond, the autoencoder-based init scheme (Pasa&Sperduti, 2014) is not new while the only technical contribution of this paper is a minor change of this scheme so that it works for the LMN. In my opinion, combining these two (LMN and init scheme) can hardly be considered as a solid novelty contribution.\\nFor the experiment part, the first two tasks are a bit toyish in 2019 and I have not seen any significant improvement gained from the proposed model. Even for the TIMIT dataset, the results are a bit far from state-of-the-art which makes the paper's claim less convincing.\\n\\nOverall I think the novelty contribution is marginal and I suggest the authors to test their models on larger-scale real problems.\\n\\nThe writing is clear and easy to follow.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper proposes a new initialization method for recurrent neural networks. They first obtain the weight from a linear optimal autoencoder. And then they use the weight to initialize the Lieanr Memory Networks(LMN). Basically, this paper is a combination of [1] and [2].\", \"strength\": \"The method of initializing LMN using a linear RNN is natural and simple. (section 3.2)\\nThe proposed initialization outperforms the baselines on the MNIST dataset.\", \"weakness\": \"What do you mean by the \\\"optimal autoencoder\\\"?\\nThe performance on TIMIT is worse than the baseline methods.\\nThe scale of the experiments is too small. Do you have any experiment results on any large dataset? e.g. Penn Treebank.\", \"reference\": \"[1] Pre-training of Recurrent Neural Networks via Linear Autoencoders\\n[2] Linear Memory Networks\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThe paper proposes an autoencoder-based initialization for RNNs with linear memory. The proposed initialization is aimed at helping to maintain longer-term memory and instability during training such as exploding gradients (due to linearity).\", \"pros\": \"1. The paper is well written, the motivation and methods are clearly described.\\n\\nCons.\\n\\n1. The authors claimed the proposed method could help with exploding gradient in training the linear memories. It would be helpful to include some experiments indicating that this was the case (for the baseline) and that this method does indeed help with this problem.\\n\\n2. The experiments on the copy task only showed results for length upto 500, which almost all baseline models are able to solve. I am not too sure how the proposed initialization helps in this case.\\n\\n3. TIMNIT is a relatively small speech recognition dataset. The task/ dataset does not require long-term memorization. It is nice to see that the initialization helps in this case. However, it is still a little how this experiment corresponds to the messsage that the authors are attempting to deliver at the end of the introduction.\\n\\n4. In general, it seems that the experiments could be more carefully designed to reflect the contributions of the proposed method. Some suggestions for future edits are, more analysis on gradients, maybe more experiments on the stability of training such as gradients could help.\", \"minor\": \"1. There are some confusions, on P2 \\\"we can construct a simple linear recurrent model which uses the autoencoder to encode the input sequences within a single vector\\\", I think the authors meant encode the input sequences into a sequence of vectors? Equation 1 and 2 suggest that there is a vector m^t per timestep (as oppose to having 1 for the entire sequence).\\n\\n2. Although the copy task was used in ((Arjovsky et al., 2015), I believe the original task was proposed in the following paper and hence this citation should properly be the correct one to cite here,\\n\\nHochreiter, Sepp and Schmidhuber, J\\u00fcrgen. Long short-term memory. Neural computation, 9(8):\\n1735\\u20131780, 1997.\"}"
]
} |
HyezmlBKwr | Test-Time Training for Out-of-Distribution Generalization | [
"Yu Sun",
"Xiaolong Wang",
"Zhuang Liu",
"John Miller",
"Alexei A. Efros",
"Moritz Hardt"
] | We introduce a general approach, called test-time training, for improving the performance of predictive models when test and training data come from different distributions. Test-time training turns a single unlabeled test instance into a self-supervised learning problem, on which we update the model parameters before making a prediction on the test sample. We show that this simple idea leads to surprising improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts. Theoretical investigations on a convex model reveal helpful intuitions for when we can expect our approach to help. | [
"out-of-distribution",
"distribution shifts"
] | Reject | https://openreview.net/pdf?id=HyezmlBKwr | https://openreview.net/forum?id=HyezmlBKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BYpQE6-ZpG",
"HkltDr4oiS",
"S1ldGH4ioS",
"SJgihEVsjr",
"B1gWRWfMcH",
"SkxK93PFtr",
"SkgD9svKFS",
"BklX60fKtH",
"rJxhI4ktFB",
"Bkx_qyUOtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798743070,
1573762401152,
1573762320359,
1573762227366,
1572114889034,
1571548304725,
1571548046707,
1571528378833,
1571513427938,
1571475344016
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2201/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2201/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2201/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2201/AnonReviewer3"
],
[
"~Jiao_YU_Shen1"
],
[
"ICLR.cc/2020/Conference/Paper2201/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2201/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2201/Authors"
],
[
"~Jiao_YU_Shen1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper is on a new approach approach to transductive learning. Reviewers were a bit on the fence. Their most important objection is that the performance improvements that the authors report almost entirely come from the \\\"online\\\" version, which basically gets to see the test distribution. That contribution is nevertheless, in itself, potentially interesting, but I was surprised not to see comparison with simple transductive learning from semi-supervised learning, learning with cache, or domain adaptation, e.g., using knowledge of the target distribution to reweigh the training sample, or [0], on using an adversary to select a distribution consistent with sample statistics. I encourage the authors to add more baselines, analyze differences with existing approaches, and, if their approach is superior to existing approaches, resubmit elsewhere.\\n\\n[0] http://papers.nips.cc/paper/5458-robust-classification-under-sample-selection-bias.pdf\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you and answers to your questions\", \"comment\": \"Thank you for your feedback. It appears our use of the term \\u201cout-of-distribution\\u201d caused some confusion. Our algorithm works on what you call \\u201cdomain shifts\\u201d and does not deal with out-of-distribution detection. This terminology difference is secondary to the main contribution of the paper. In the revision, we use the term distribution shifts instead of out-of-distribution.\\n\\nForgetting has minimal impact on the performance of our method. This has been shown empirically in the paper, as you recognize in your review. Our model is jointly trained on both tasks, so our corrections during test-time training are tiny. This is in contrast to continual learning, where forgetting arises because the tasks have never been jointly trained and are learned one-by-one from scratch.\\n\\nWe also conducted the following experiment. The widely adopted oracle in continual learning is to jointly train on all the tasks. For test-time training, we experiment with the analogous oracle by mixing in training of the main task (on the training set) with the self-supervised task (on the test instance). This modification of our method takes a very long time to run, but should exhibit as little forgetting as possible. On all the benchmarks in CIFAR-10-C level 5, the standard and online versions of our method have the same performance both with and without this modification (up to random fluctuation), demonstrating that the impact of forgetting is minimal.\\n\\nRegarding experiments following [Volpi et al 2018]: Digit datasets (e.g. MNIST), especially those of small image dimensions, are generally not good fits for self-supervised learning. Rotation in particular can be poorly defined for digits. We experiment with real distribution shifts of natural scenes in the paper.\"}",
"{\"title\": \"Thank you and answers to your questions\", \"comment\": \"Thank you for your positive feedback. We agree our method \\u201chelps adjust for corruptions and modest dataset shifts\\u201d. We understand the term out-of-distribution to broadly include distribution shifts of all kinds, including the small and modest ones in our experiments. To avoid this confusion, however, we now use the term distribution shifts instead of out-of-distribution in the revision.\\n\\nYou are worried that \\u201cthe more fine-tuned labels get (like the density of a tumor), the harder it gets to create auxiliary tasks.\\u201d Even if the main task is highly specialized (e.g. the tumor density example), the auxiliary task can be fairly general (e.g. rotation). The self-supervised task only needs to share *features* with the main task without actually solving it, and features in computer vision can be as general as edges and shades. In ImageNet for example, improvements are aggregated across very specific problems such as distinguishing between 280 kinds of birds and 62 kinds of lizards, but rotation suffices for self-supervision. \\nIn addition, the theoretically sufficient condition of our method -- gradient correlation -- is agnostic to the size of the label space; even if the label space is large (ImageNet with 1000 classes) or infinite (regression), using rotation still performs well.\\n\\nYou also asked for \\u201c...a reasonable categorization of tasks where this method is expected to be applicable.\\u201d We can provide reasonable rules of thumb for both the standard and the online version of our method. Standard: The self-supervised task, e.g. rotation, is both well defined and non-trivial on the new domain (in the sense discussed in Section 3.2). Online: All the test samples in the sequence are from the same (new) test distribution. Both conditions are easy to check in practice. Empirically, our method was shown to be effective in all of the experiments where these rules are met.\"}",
"{\"title\": \"Thank you and answers to your questions\", \"comment\": \"Thank you for your thoughtful comments. Here we answer your questions:\\n\\n1. We do not need information about the new distribution to choose the hyper-parameters. There are three hyper-parameters for test-time training: the splitting point, the learning rate and the number of steps. We select our splitting point to maximize performance of the joint-training baseline on the original distribution. The learning rate for test-time training is the same as during the last epoch of regular training; intuitively, this lets the model keep learning at the rate it has been accustomed to. Both these hyper-parameters can be selected without any knowledge of the test distribution, as done in our experiments. Empirically, we observe that performance is rather insensitive to the number of test-time training iterations once the self-supervised loss (on the test instance) converges. This also makes intuitive sense because once convergence is reached, the gradient from the self-supervised loss is small anyways. For the standard version, 10 steps is more than enough to reach convergence; we have in fact experimented with taking more steps and observe no difference in performance beyond random variations. For the online version, 1 step is in fact enough to reach convergence because the algorithm has already seen many previous samples from the same distribution. Practitioners can easily observe convergence of the self-supervised loss with the information revealed during testing.\\n\\n2. Thank you for acknowledging that we do not need to see the new distribution. In the case where some unlabeled samples are indeed available (as in the setting of unsupervised domain adaptation), yes they should be taken advantage of through methods for that setting. Test-time training can be applied on top of the model that has used these samples.\", \"you_minor_comments\": \"we really appreciate your careful reading and have incorporated them in our revision.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The motivation is to increase accuracy of CNNs with unseen (unknown) distribution shifts. To this end, instead of the recent approach of training-time self-supervision, they adopted it for test-time [limited novelty]. More precisely, for a classification task, they considered two headed neural network with one head for main classification task and another for an auxiliary task (e.g. predicting rotation degree of rotated images). The feature extractor (up to k layers) is shared between these tasks thus updated using the two tasks, while two heads are updated according to each task. In test-time, the samples (drawn from shifted data distributions) are being used for updating the shared feature extractor through the auxiliary task. They have investigated their test-time approach in a series of experiments in online and offline settings on synthetic shift in data distribution of image classification tasks. The distributional shifts are synthesized adding some non-adversarial perturbation to clean images. The authors proof that their test-time approach can lead to lower error rate on given test-time samples when the underlying learning model is a linear regression model.\\nThe authors incorrectly interchange OOD with domain shifts in their manuscript. The common usage of OOD set is to capture novel samples that are not properly following the training set distribution, e.g. from different concepts than the set of classes given in the training set. With an OOD set, a robust model dealing with it should be able to detect whether an instance is in-distribution or OOD, making a special decision for the latter case (e.g., rejecting the instance).\\nIn domain shift, we look at the scenario where test objects are the same as used for training the model (i.e., correspond to one of the classes the model is processing), but might be perturbed or coming having a different distribution (e.g. SVHN for MNIST or vice versa). The approaches for domain shift concern to improve robustness of CNNs to such shifts in data distribution. Accordingly, the title of the paper inaccurately reflects of the claim of the paper and is misleading, this paper is not on learning with out-of-distribution instances.\\nThe other important point is about catastrophic forgetting phenomena in online setting of their approach, which was not addressed thoroughly in the paper. How not to forget what the model has previously learnt a test-time training? I see this somewhat has been empirically shown in Fig 2 with accuracy on the original data, but what is the mechanism not to forget what have being learned so far?\\nBesides generalization enhancement, the advantage of test-time self-supervision over training-time joint self-supervised is not clear for the readers, particularly considering the problem of the pitfall of catastrophic forgetting phenomena in test-time training. This pitfall does not exist for training-time joint self-supervised approach. What are the advantages of this approach?\\nThe claims (about synthetic shift in distribution shift) are well supported in a series of experiments, where the distribution shifts are synthesized using adding some non-adversarial perturbation to clean images. 
However, the other common experiments on real shifts in distribution (e.g. SVHN for MNIST and vice versa, and those performed in Volpi et al 2018) are missing, which can help the paper to being better supported and justified for its practicality.\\n[Volpi et al 2018]: Generalizing to Unseen Domains via Adversarial Data Augmentation, NIPS 2018\\nI found the paper rather difficult to follow and not very coherent in its organization. The main idea is fundamentally simple, but it is still difficult to get it from the text. It needed me 2-3 readings before really getting the point of the paper.\\n** Update ** I read other reviews and comments. The answer of the authors to my comments are somehow satisfactory, especially the point of changing from \\\"out-of-distribution\\\" to \\\"domain shift\\\", which avoid some confusion. I upgraded my rating to a \\\"weak accept\\\".\"}",
"{\"comment\": \"Thanks very much. I will study the paper again based on your explanation to me.\", \"title\": \"Some questions\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes test time training, a method that uses an auxiliary task to provide a kind of loose supervision during test time. I say loose supervision because the theory suggests this is only useful when the gradients of the main and auxiliary losses are positively correlated.\\n\\nI'm not sure I totally believe that this is a method of out-of-distribution generalization, but rather it helps adjust for corruptions and modest dataset shifts which is an important problem itself. I suspect test-time training is fundamentally better suited for the latter because in general the assumption of correlation between the loss gradients cannot hold in the test if we allow for large shifts (like the airplane class in the video experiment). I note that the authors are aware of this limitation (page 5, last paragraph).\\n\\nMy main problem with this paper is that the more fine-tuned labels get (like the density of a tumor), the harder it gets to create auxiliary tasks. This will be a significant problem when the samples at test-time only share the highest level common characteristics with the true dataset; (like rotations do not impact or density of tumor, like face detection and similar fine-feature-based tasks).\\n\\nThat said, I do appreciate the experimental results which show promise; especially the CIFAR-10.1 results. So I'm inclined toward accepting this paper. I would be more so inclined if the authors could provide a reasonable categorization of tasks where this method is expected to be applicable. I ask this because, for practitioners, it is near impossible to verify the positive correlation of gradients assumption. If there are high-level targets that the distribution and/or task must satisfy that can act as indicators for applying this method, I believe that would be a valuable addition to the paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a method for adapting model parameters by doing self-supervised training on each individual test example. They show striking improvements in out-of-domain performance across a variety of image classification tasks while preserving in-domain performance; the latter is a marked difference from other robustness procedures which tend to sacrifice in-domain performance for out-of-domain (or adversarial) performance. These results are exciting, and I believe that this proposed test-time training method will spur a significant amount of further research into similar approaches.\\n\\nThe paper is well-written and the experiments are thorough, so I have no major concerns. Some remaining questions about the proposed approach are:\\n\\n1) How sensitive is test-time training to hyperparameters like splitting the parameters at the right location (i.e., the particular partitioning of $\\\\theta$ into $\\\\theta_e$, $\\\\theta_s$, and $\\\\theta_m$), or to the learning rate? Is there a good way to pick these hyperparameters, given that evaluation is on an out-of-domain distribution that we assume we do not have access to at training time? The paper proposes a particular split of parameters and a particular learning rate and number of steps (which differs for standard vs. online training). How were those chosen?\\n\\n2) How does test-time training compare to methods that assume access to the test distribution? I understand that a big benefit is that test-time training does not need to see the entire distribution (unlike standard domain adaptation approaches). But in cases where we do get to see parts of the test distribution -- say some unlabeled examples from it, or even some labeled examples -- how does test-time training compare? For example, should we see test-time training as providing the benefits of domain adaptation even when we're unable to access the unlabeled test distribution; or should we see it as doing something beyond what standard domain adaptation methods do, even when we have access to the unlabeled test distribution?\\n\\nMinor comments, no need to respond:\\na) There are several minor typos in the paper, e.g.: p1, \\\"prediciton\\\"; eqn 8, v; p8, \\\"address\\\"; p9, \\\"orcale\\\"; appendix A, missing ref.\\nb) The discussion in Appendix A seems a bit speculative and opinionated. For example, it is not obvious that one has to fall back on the space of all possible models in order to represent test-time training with a single gradient step as a fixed model. The discussion is useful but in my subjective opinion could be toned down; the experiments and discussion in the main paper are strong and less speculative.\\n\\n===\", \"edit\": \"Thank you for the response. The discussion about hyper-parameters makes sense. My rating edit comes from the realization that the performance improvements obtained are almost entirely from the \\\"online\\\" version, which gets to see the test distribution. So I think the baselines are in lacking in that sense: as a straw man baseline for example, one could simply run the normal model on half of the test set, and then use those observed test examples to do some other sort of domain adaptation training.\"}",
"{\"comment\": \"Thank you for your kind words and we are glad to be able to inspire your research. Here we answer your questions:\\n1. We cite and discuss this CVPR'19 paper in the related work section. Their method roughly corresponds to our joint training baseline, which we always compare with empirically. At a high level, the difference is simply that they do not perform test-time training i.e. modify the model parameters at test-time, which is exactly what we claim to contribute.\\n2. We assume you meant to say \\\"corruption types\\\" instead of \\\"augmentation types\\\", and \\\"testing labels\\\" instead of \\\"testing data\\\". Then the case you described, of corruption types unknown in advance, is exactly the case we are trying to solve.\\n3. The motivation is to make the analysis tractable, since we currently do not have sufficient tools to theoretically reason about realistic networks.\\n4. We are not sure we understand your question, which asks us to compare the \\\"generalization problem\\\" with MMD-based methods; the comparison between a problem and a class of methods seems undefined. If you are looking for a comparison between the \\\"generalization problem\\\" and the problem of unsupervised domain adaptation, which often uses MMD-based methods and knows the target distribution in advance in the form of many unlabeled samples, we discuss this in the related work section. If you are looking for a comparison between our method and MMD-based methods, the most immediate difference is that the mean discrepancy is degenerate for a single sample thus cannot be used for test-time training.\", \"title\": \"Answers\"}",
"{\"comment\": \"First of all, I'd like to thank the author for the work, which inspire me to the generalization capability of real-world perturbation. I have few questions regarding the work.\\n\\n1. Based on my understanding, the formulation is very similar to the jigsaw puzzle published in CVPR2019 which also tackled the out of distribution generalization task. My impression is that this work adopted a image rotation for auxiliary task, while jigsaw puzzle adopted image patch shuffling as auxiliary task. May I know whether there are any other differences in high level? \\n\\n2. When talking about out of distribution generalization, may I know whether the proposed problem can tackle the case where the augmentation types are unknown in advance (especially for testing samples). I guess this might not be a problem of jigsaw puzzle as it did not require testing data for parameter updating. \\n\\n3. I have difficulty understanding the theory 1 part. Based on the coarse of deep learning, the network is usually not convex. May I know whether there are any motivation by assuming x, y, l are all convex in \\\\theta?\\n\\n4. My research focus currently is on distribution based for generalization problem. May I know how this problem differentiate with the MMD based domain generalization method? such as the paper \\\"Exploiting Low-rank Structure from Latent Domains for Domain Generalization\\\" published in ECCV14 and other related works. \\n\\nFinally, I'd like to thank the author again for this inspiring work. I am looking forward to the explanation which I think will definitely help me with my future research.\", \"title\": \"Some questions\"}"
]
} |
HJgb7lSFwS | Distance-based Composable Representations with Neural Networks | [
"Graham Spinks",
"Marie-Francine Moens"
] | We introduce a new deep learning technique that builds individual and class representations based on distance estimates to randomly generated contextual dimensions for different modalities. Recent works have demonstrated advantages to creating representations from probability distributions over their contexts rather than single points in a low-dimensional Euclidean vector space. These methods, however, rely on pre-existing features and are limited to textual information. In this work, we obtain generic template representations that are vectors containing the average distance of a class to randomly generated contextual information. These representations have the benefit of being both interpretable and composable. They are initially learned by estimating the Wasserstein distance for different data subsets with deep neural networks. Individual samples or instances can then be compared to the generic class representations, which we call templates, to determine their similarity and thus class membership. We show that this technique, which we call WDVec, delivers good results for multi-label image classification. Additionally, we illustrate the benefit of templates and their composability by performing retrieval with complex queries where we modify the information content in the representations. Our method can be used in conjunction with any existing neural network and create theoretically infinitely large feature maps. | [
"Representation learning",
"Wasserstein distance",
"Composability",
"Templates"
] | Reject | https://openreview.net/pdf?id=HJgb7lSFwS | https://openreview.net/forum?id=HJgb7lSFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"b7BMFDzfOF",
"rkxVztJjsB",
"rJg7R9qcsS",
"Hkejtt59jB",
"rJxIZt9qsS",
"rkxEKlP0tr",
"HygNqe2ntB",
"Skx0fuGctH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798743041,
1573742860186,
1573722827154,
1573722498600,
1573722366168,
1571872891698,
1571762316009,
1571592214192
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2200/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper2200/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2200/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2200/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2200/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2200/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2200/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes an approach for learning class-level and individual-level (token-level) representations based on Wasserstein distances between data subsets. The idea is appealing and seems to have applicability to multiple tasks. The reviewers voiced significant concerns with the unclear writing of the paper and with the limited experiments. The authors have improved the paper, but to my mind it still needs a good amount of work on both of these aspects. The choice of wording in many places is imprecise. The tasks are non-standard ones so they don't have existing published numbers to compare against; in such a situation I would expect to see more baselines, such as alternative class/instance representations that would show the benefit specifically of the Wasserstein distance-based approach. I cannot tell from the paper in its current form whether or when I would want to use the proposed approach. In short, despite a very interesting initial idea, I believe the paper is too preliminary for publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviewers, any comments on author response?\", \"comment\": \"Dear Reviewers, thanks for your thoughtful input on this submission! The authors have now responded to your comments. Please be sure to go through their replies and revisions. If you have additional feedback or questions, it would be great to get them this week while the authors still have the opportunity to respond/revise further. Thanks!\"}",
"{\"title\": \"Reply to reviewer #2\", \"comment\": \"\\u201cThe general idea of measuring the distribution divergence for a set of classes is interesting and seems to be novel.\\u201d\\nThank you for your review and the effort you put into it. \\n\\n\\u201c[...]\\n- A set of divergences doesn't contain any pixel-level information, only divergences to some predefined classes\\n- As a consequence, this representation will not be able to discover information that is not covered by the labels\\nBecause of these limitations, it seems that this particular representation may be less useful for some applications than others.\\u201d\\nWe would argue that, to the contrary, using labeled information is useful and many applications require such representations. Our approach offers an easy method to integrate continuous (images) and discrete (labels) information into one representation which, for instance, offers many possibilities given the current state of object recognizers in computer vision. In particular, using labeled data to explore both global and local similarities over different classes can be useful in many specialized or critical applications where wrong classification results are highly undesirable (medical diagnosis, robotic automation,...) \\n\\n\\u201cI don't follow why the paper proposes to use 'environments' -- random combinations of classes. It seems that a square matrix (n_c x n_c) with all classes should do the same job.\\u201d\\nNote that the experiments that are detailed in section 4.2 with plot in appendix A.1 experimentally confirm the validity of using random combinations. Results for $R$=1 are indicative of the outcome when using a square matrix, yet the results improve significantly for larger $R$ values. We also explicitly tried using a square matrix which did not work as well. We have clarified why we use environments in the revised submission (sections 2 and 4.2). In short, we follow the findings of the work of [1] that apply the theory of Random Feature Approximations [2] which has shown to lead to beneficial shift-invariant properties. \\n\\n\\u201cThe experimentation is very weak and does very little to support the claims. The paper considers only one substantial task to test the representation. This task is image retrieval by image query. The paper doesn't provide any comparison to existing methods or simple baselines.\\u201d\\nWith all due respect, this is not correct on two accounts: first, we perform not only retrieval experiments (section 4.3), but also classification experiments on the obtained representation (4.2). Additionally, for both experiments we provide a comparison to baselines. For the classification task we provide baselines with three state-of-the-art classification models that employ binary cross-entropy (see table 1, section 4.2), for the retrieval based on modified representations we provide a baseline on the basis of CNN features (see table 2, section 4.3).\\n\\n\\u201cThe second contribution that the representations are interpretable and composable is not addressed. I seems that it should be hard to interpret a large vector of distances to randomly chosen subsets of classes. There is no experiment demonstrating interpretability of the proposed approach. The compositionality is not addressed either. [...]\\u201d \\nWe respectfully disagree on this account as well. The retrieval experiment in section 4.3 is specifically designed to illustrate the interpretability and composability following the method described in section 3.5. 
In that experiment we compose new representations from existing representations by exploiting the structure of the representations that are interpretable over rows and columns. We subsequently retrieve images that are similar to the modified representations, which we quantitatively have evaluated in table 2 and qualitatively in figure A2. Additionally, the \\u2018SIM\\u2019 retrieval method for this experiment relies on the interpretability of representations over different classes. \\n\\n\\u201cThe paper is generally well written and it is easy to follow. The literature review can be improved by [...]\\u201d\\nThank you, we have improved this section in the revised version by adding several relevant works.\\n\\n\\u201c- The proposed technique can be used with any task, but the paper is clearly limited to the retrieval task\\n- The environments are too vaguely described and can be misinterpreted in the introduction\\u201d\\nAgain we note that we implemented both a classification task and retrieval task. We have rephrased and detailed certain parts of the paper.\\n\\n\\u201cI recommend using term divergence instead of distance when it is not symmetrical.\\u201d\\nCould you specifically refer to a relevant instance? All distance estimates employed in the paper are intended as approximations or estimates of the Wasserstein distance which is a distance between distributions. The IPM formulation is found to be symmetric and satisfies the triangular inequality when Lipschitz continuous [3].\\n\\n[1] https://arxiv.org/abs/1811.01713\\n[2] http://papers.nips.cc/paper/3182-random-features-for-large-scale-kernel-machines.pdf\\n[3] https://arxiv.org/abs/1701.07875\"}",
"{\"title\": \"Reply to reviewer #3\", \"comment\": \"We thank reviewer #3 for the elaborate and very constructive review. We appreciate that you find it an excellent idea, yet acknowledge that the exposition had some issues. We have tried to address your concerns and have thoroughly improved the formulation in the revised version (especially section 3).\\n\\n\\u201cHowever, the exposition could be greatly improved by using the standard language of probability theory. The discussion in 3.1 was particularly painful to read. What is the difference between \\\"existing in an environment\\\" and \\\"conditioning on a measurable event\\\"? Phrases like \\\"belonging to any random subset of the dataset\\\" suggest a non-deterministic method of selecting an element of the power set of the training data, but it is unclear what to do if more training data arrives in this case.\\u201d\\nIndeed, these phrases lacked clarity and we have modified these sentences throughout the whole text in the revised version and especially in section 3.1. \\n\\n\\u201cThroughout the entire paper the word \\\"random\\\" is apparently used in the colloquial sense of \\\"arbitrary\\\". *Correct every instance of this.* If you actually are referring to generating samples from a distribution, be explicit about the generative process.\\u201d\\nWe have corrected our use of the word \\u201crandom\\u201d in the paper. Our intention is to convey the following: Environments are randomly generated in the following manner: the size is uniformly selected from the range given by [1,$R$] and the attributes that make up the environments are uniformly selected without replacement from the set of all attributes.\\n\\n\\u201cSection 3.5 was more confusing than enlightening. In general I understand that environments can be leveraged for intelligibility and admit manipulation for information retrieval. The exact strategy remains somewhat opaque. If you are under space constraints refer to an appendix with more explicit details.\\u201d\\nWe have rephrased and detailed section 3.5 to improve clarity. \\n\\n\\u201cIn the experiments section phrases like \\\"environments consist of random combinations of classes\\\" is also not helpful. Do you mean something like \\\"uniformly selected from the set of all class pairs?\\\" Or something like \\\"uniformly selected from the power set of all classes?\\\" \\nWe improved this formulation (as well as other phrases) as you have suggested. These improvements can be found throughout the text and especially in the introduction and section 3.\\n\\n\\u201cHow volatile are the experimental results with respect to the non-deterministic choice of environments?\\u201d\\nAs can be seen from the standard deviations in table 1, the F1 scores in the classification task are not impacted much for different non-deterministic choices. Intuitively, the sensitivity depends on the values of the hyperparameters $n_e$ and $R$ , the amount and maximum size of the environments respectively. We thus added additional sensitivity analyses in appendix A1 (tables A1 and A2) of the revised version, illustrating that the sensitivity remains low for all values of $R$ or $n_e$.\\n\\n\\u201cThe technique bears some resemblance to Wasserstein Discriminant Analysis.[1] [...] 
That is ok since the representation is designed to be used for a variety of tasks (modulo section 4.2), but it does leave open the question \\\"what if the matrix of estimated Wasserstein distances isn't informative, e.g., due to poor choice of environments?\\\" There is no attempt to assess the representation except via utility in downstream tasks.\\u201d \\nThank you for this reference, we have added it in section 2 (background \\u2013 recent work). As mentioned above, we added figures that show a small variance for classification outcomes for different values of $R$ or $n_e$, which suggests the representations are quite robust with respect to the choice of environments. \\n\\n\\u201cThe common representation was justified computationally, but I suspect is beneficial statistically. [...]\\u201d\\nThese are interesting points. From our composition experiments we found that, for the given values of $R$ and $n_e$ , the spectrum was very non-flat which suggests indeed that the representation could be further compressed to a large degree. We believe that your suggestion of a diagnostic to guard against insufficient capacity is very interesting and could be part of further future work, for example by evaluating the evolution of the spectrum of the representations as training progresses.\\n\\n\\u201cI am curious what the results in appendix A.1. look like relative to the spectral norm or the smallest eigenvalue of the estimated WD matrix (smallest eigenvalue assuming number of environments < number of classes, otherwise the k-th eigenvalue where k = number of classes).\\u201d\\nWe have added tables A.3 and A.4 in the appendix that show average values of the spectral norm of 100 representations for different values of $R$ and $n_e$.\"}",
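The environment-generation procedure stated in this reply (size drawn uniformly from [1, R], attributes drawn uniformly without replacement from the set of all attributes) can be written out in a few lines. The function name and interface below are illustrative assumptions, not the authors' code:

```python
import random

def sample_environments(attributes, n_e, R, seed=None):
    # Each environment's size is drawn uniformly from {1, ..., R}; its
    # attributes are drawn uniformly without replacement, per the reply above.
    rng = random.Random(seed)
    return [rng.sample(attributes, rng.randint(1, R)) for _ in range(n_e)]

# Example: 5 environments over 10 attribute labels with R = 3.
print(sample_environments(list(range(10)), n_e=5, R=3, seed=0))
```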
"{\"title\": \"Reply to reviewer #1\", \"comment\": \"Thank you for your constructive review. Below we address your concerns.\\n\\n\\u201c- Since the environments are taken randomly in the experiments, it is not investigated how sensitive the method is with respect to the choices of environments. \\u201d\\nWe partially addressed this concern in the results of table 1 (page 8) that shows the average F1 scores for the classification task over several runs, where each run has a different randomly selected choice of environments. From this table it is clear that the standard deviation in F1 scores is low and in line with the standard deviations of the baselines. This suggests that for sufficiently large $n_e$ and $R$ (the parameters that determine the amount of environments and the maximum amount of attributes per environment) the method is not sensitive with respect to the choices of environments. We also added this sensitivity for different choices of $n_e$ and $R$ to the appendix of the revised version (see appendix A1, tables A1 and A2), where it becomes clear that even for small values, the sensitivity is low. \\n\\n\\u201cAlso, does it make any sense to design environments to include related (and not random) classes?\\u201d\\nThe rationale for not designing (handmade) environments is that they require knowledge about what would lead to distinguishing features. Note that the randomly selected features will lead to many environments that are related to any class, thus ensuring a good choice over any set of features. The idea is inspired by the Random Features approximation as developed in the work of [1] and [2], we have clarified this in the revision (\\u2018Recent work\\u2019 in section 2).\\n\\n\\u201c- It seems necessary to include some experiments to assess sensitivity of the interpretation with regard to the small perturbations that are not changing the class label.\\u201d\\nOur interpretation of your question is as follows, please correct us if necessary: \\u201cWhat amount of perturbation is needed before the interpretation of the class label of the representation changes?\\u201d. As part of the retrieval method we evaluated the size of the factor $q$ (the factor that determines how much the representation is modified for the retrieval experiment in section 4.3) and its influence on the change in class. This gives an estimate of the sensitivity as it shows that small perturbations to representations don\\u2019t easily modify class membership. We have added the results in section 4.3.\\n\\n[1] https://arxiv.org/abs/1811.01713\\n[2] http://papers.nips.cc/paper/3182-random-features-for-large-scale-kernel-machines.pdf\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors proposed a template-based interpretable representation that works based on the earth mover's distance of each class to a number of \\\"environments\\\", which could be taken as union of a few random classes. To achieve this, they train several critics based on Fisher GAN. The method is evaluated based on classification and retrieval tasks.\\nThe representation, by construction, is aimed towards interpretation and is specially useful in multi-class classification tasks.\", \"here_are_my_concerns\": [\"Since the environments are taken randomly in the experiments, it is not investigated how sensitive the method is with respect to the choices of environments. Also, does it make any sense to design environments to include related (and not random) classes?\", \"It seems necessary to include some experiments to assess sensitivity of the interpretation with regard to the small perturbations that are not changing the class label.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper defines a representation learning strategy based upon estimation\\nof a matrix of Wasserstein distances.\\n\\nThe idea is excellent. The ability to \\\"solve\\\" IPMs reliably is a recent\\ndevelopment in deep learning whose ramifications are still being explored.\\nIntuitively this line of research could plausibly result in general\\nmethods which are theoretically intelligible and broadly applicable.\\nIndexing at least one side of the matrix of estimated WDs with events\\n(rather than classes) has interpretability properties useful for\\ninformation retrieval and also conveys benefits reminiscent of learning\\nwith privileged information.\\n\\nHowever, the exposition could be greatly improved by using the\\nstandard language of probability theory. The discussion in 3.1\\nwas particularly painful to read. What is the difference between\\n\\\"existing in an environment\\\" and \\\"conditioning on a measurable event\\\"?\\nPhrases like \\\"belonging to any random subset of the dataset\\\" suggest\\na non-deterministic method of selecting an element of the power set of\\nthe training data, but it is unclear what to do if more training data\\narrives in this case. \\n\\nThroughout the entire paper the word \\\"random\\\" is apparently used in the \\ncolloquial sense of \\\"arbitrary\\\". *Correct every instance of this.*\\nIf you actually are referring to generating samples from a distribution,\\nbe explicit about the generative process.\\n\\nSection 3.5 was more confusing than enlightening. In general I understand \\nthat environments can be leveraged for intelligibility and admit manipulation \\nfor information retrieval. The exact strategy remains somewhat opaque. If \\nyou are under space constraints refer to an appendix with more explicit details.\\n\\nIn the experiments section phrases like \\\"environments consist of random \\ncombinations of classes\\\" is also not helpful. Do you mean something like \\n\\\"uniformly selected from the set of all class pairs?\\\" Or something like \\n\\\"uniformly selected from the power set of all classes?\\\" How volatile\\nare the experimental results with respect to the non-deterministic choice \\nof environments? \\n\\nI want to accept this paper if the exposition is improved, which I think\\nis possible during the response period.\\n\\nMy other comments are not blocking issues, but would either improve the\\ncurrent paper or inform future directions of research.\\n\\nThe technique bears some resemblance to Wasserstein Discriminant\\nAnalysis.[1] That paper seeks a projection that maximizes the ratio of \\nWasserstein distance between classes vs. within classes. Here, \\nalthough the common representation is a nonlinear mapping \\nanalogous to a projection, we merely try to estimate all the\\nWasserstein distances rather than maximize them, so it is not trained\\nto be discriminative per se. 
That is ok since the representation is\\ndesigned to be used for a variety of tasks (modulo section 4.2), but it \\ndoes leave open the question \\\"what if the matrix of estimated \\nWasserstein distances isn't informative, e.g., due to poor choice of \\nenvironments?\\\" There is no attempt to assess the representation \\nexcept via utility in downstream tasks.\\n\\nThe common representation was justified computationally, but I suspect\\nis beneficial statistically. It might facilitate safely including a\\nlarge number of environments and then spectrally compressing (i.e., SVD)\\nthe resulting matrix without overfitting the data. However clearly if\\nthe capacity of this layer is too small, then all estimated WDs will\\nbe close to zero. If we posit a low Bayes error classifier for the\\nmulti-class problem associated with the dataset, that might imply there\\nis some conditioning of the input under which the matrix of (actual) WDs\\nhas rank equal to the number of classes, which would in turn provide a\\nuseful diagnostic to guard against an insufficiently discriminative choice\\nof environments or insufficient capacity in the common representation. If\\nthe matrix is full rank with a flat spectrum, however, that might indicate\\nthe choice of environments is too granular and overfitting has occurred,\\nit's not immediately obvious to me how to guard against this. \\n\\nI am curious what the results in appendix A.1. look like relative to the spectral \\nnorm or the smallest eigenvalue of the estimated WD matrix (smallest \\neigenvalue assuming number of environments < number of classes,\\notherwise the k-th eigenvalue where k = number of classes).\\n\\n[1] https://arxiv.org/abs/1608.08063\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Paper contributions\\n=================\\n- This paper proposes a method for constructing representations using a matrix of Wasserstein distances. These distances measure the discrepancy between each class and each environment, that is a random combination of some classes.\\n- The paper evaluates this approach on a task of image retrieval.\\n\\nGeneral notes\\n============\\nThe general idea of measuring the distribution divergence for a set of classes is interesting and seems to be novel. But I argue that this representation can be limiting:\\n- A set of divergences doesn't contain any pixel-level information, only divergences to some predefined classes\\n- As a consequence, this representation will not be able to discover information that is not covered by the labels\\nBecause of these limitations, it seems that this particular representation may be less useful for some applications than others.\\n\\nI don't follow why the paper proposes to use 'environments' -- random combinations of classes. It seems that a square matrix (n_c x n_c) with all classes should do the same job.\\n\\nThe experimentation is very weak and does very little to support the claims. The paper considers only one substantial task to test the representation. This task is image retrieval by image query. The paper doesn't provide any comparison to existing methods or simple baselines.\\n\\nThe second contribution that the representations are interpretable and composable is not addressed. I seems that it should be hard to interpret a large vector of distances to randomly chosen subsets of classes. There is no experiment demonstrating interpretability of the proposed approach. The compositionality is not addressed either. The samples provided in the appendix are not convincing.\\n\\nThe paper is generally well written and it is easy to follow. The literature review can be improved by providing prior work where \\\"approaches use hidden state vector of LSTM\\\" and \\\"features extracted from CNNs\\\" instead of generic references.\", \"some_of_the_claims_are_vague_and_excessively_broad\": \"- The proposed technique can be used with any task, but the paper is clearly limited to the retrieval task\\n- The environments are too vaguely described and can be misinterpreted in the introduction\\n\\nConclusion\\n=========\\n\\nI recommend to reject on the basis that \\n- the approach is more limited than the paper advocates\\n- the experimentation is weak\\n- some claims are not addressed\\n\\nOther notes\\n==========\\nI recommend using term divergence instead of distance when it is not symmetrical.\"}"
]
} |
Bkeb7lHtvH | At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks? | [
"Niv Giladi",
"Mor Shpigel Nacson",
"Elad Hoffer",
"Daniel Soudry"
] | Background: Recent developments have made it possible to accelerate neural networks training significantly using large batch sizes and data parallelism. Training in an asynchronous fashion, where delay occurs, can make training even more scalable. However, asynchronous training has its pitfalls, mainly a degradation in generalization, even after convergence of the algorithm. This gap remains not well understood, as theoretical analysis so far mainly focused on the convergence rate of asynchronous methods.
Contributions: We examine asynchronous training from the perspective of dynamical stability. We find that the degree of delay interacts with the learning rate, to change the set of minima accessible by an asynchronous stochastic gradient descent algorithm. We derive closed-form rules on how the learning rate could be changed, while keeping the accessible set the same. Specifically, for high delay values, we find that the learning rate should be kept inversely proportional to the delay. We then extend this analysis to include momentum. We find momentum should be either turned off, or modified to improve training stability. We provide empirical experiments to validate our theoretical findings. | [
"implicit bias",
"stability",
"neural networks",
"generalization gap",
"asynchronous SGD"
] | Accept (Spotlight) | https://openreview.net/pdf?id=Bkeb7lHtvH | https://openreview.net/forum?id=Bkeb7lHtvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Z9Ja6s6vb",
"BygmdOItoB",
"SkgYLuIwjr",
"BygeIZlDoB",
"HJlV0elDir",
"BJxe-k_0Kr",
"SyeTfjp3YH"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798743013,
1573640298781,
1573509201487,
1573482824046,
1573482699833,
1571876599811,
1571769108570
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2199/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2199/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2199/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2199/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2199/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2199/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper considers the problem of training neural networks asynchronously, and the gap in generalization due to different local minima being accessible with different delays. The authors derive a theoretical model for the delayed gradients, which provide prescriptions for setting the learning rate and momentum.\\n\\nAll reviewers agreed that this a nice paper with valuable theoretical and empirical contributions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer #4\", \"comment\": \"We thank the reviewer for the positive and helpful feedback. Below we address the questions the reviewer raised.\\n \\n(1) \\\"In Fig. 3, we can clearly see a threshold of $\\\\eta$. I notice that when $\\\\tau=16$ the fluctuation is more significant than other three cases. Can you explain why this appears?\\\"\\n \\nIndeed, the standard deviation is somewhat larger when $\\\\tau=16$. We suspect it is a finite-sample effect, and will run more repetitions to verify (it will take more than a few days to check thoroughly, but these results will be ready for the camera ready version).\\n \\n(2) \\\"In Sec. 3.1, do you consider any kind of learning rate scheduling to change learning rate over epochs, like you did in Sec. 3.2?\\\"\\n \\nThe theoretical analysis in Section 2 focused on a fixed learning rate for simplicity. Therefore, in Section 3.1 which goal was to support our theoretical findings with empirical evidence, we chose to also focus on a fixed learning rate regime. Particularly, we investigated the interaction between the delay, a fixed learning rate, and momentum and how this interaction affects stability and generalization.\\n \\nIt is interesting to explore the relation between the scheduling of the learning rate and the generalization from the perspective of dynamical stability. To do this, we need to consider different methods of practical learning rate scheduling regimes and analyze each scheduling method separately. \\n \\n(3) \\\"It would be great to evaluate on more tasks, as it has been shown that some may be more robust than others (Dai et al., 2019).\\\"\\n \\nOur findings of the relation between the momentum and asynchrony align with (Dai et al., 2019). In (Dai et al., 2019), the authors demonstrate that momentum based algorithms, e.g. Adam and RMSProp, are more sensitive to staleness. It is intriguing to analyze such algorithms in the asynchronous setting from the perspective of dynamical stability. We will aim to examine the behaviour of other tasks until the camera ready version (again, it will take us more than a few days to check it thoroughly). Thank you for the suggestion.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies how asynchrony affects model training by investigating dynamic stability of minimum points that A-SGD can access. They point out that not all local minimum points are accessible, and asynchrony can affect which minimum points can be accessed, and thus helps to explain why models trained by A-SGD have higher generalization gap. The authors also propose shifted-momentum that utilize momentum for asynchronous training.\\n\\nOverall, this paper provides nice insights and thorough theoretical analysis. Experiments are carefully designed to validate their results. I think this paper is well written and its novelty is significant.\", \"strength\": [\"Theoretical formulation and analysis in this paper is nice and elegant.\", \"Provide theoretical insights of A-SGD with momentum, which is important.\", \"Experiments of minima selection are carefully designed. I like the idea to observe trajectories ``leaving minimum''.\"], \"some_quick_questions\": [\"In Fig. 3, we can clearly see a threshold of \\\\eta. I notice that when \\\\tau=16 the fluctuation is more significant than other three cases. Can you explain why this appears?\", \"In Sec. 3.1, do you consider any kind of learning rate scheduling to change learning rate over epochs, like you did in Sec. 3.2?\", \"It would be great to evaluate on more tasks, as it has been shown that some may be more robust than others (Dai et al., 2019).\", \"Wei Dai, Yi Zhou, Nanqing Dong, Hao Zhang, and Eric Xing. Toward Understanding the Impact of Staleness in Distributed Machine Learning. In Proc. International Conference on Learning Representations (ICLR), 2019.\"]}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"We thank the reviewer for the helpful feedback. We added a conclusions section as suggested. Below we address the questions the reviewer raised.\\n \\n(1a) \\\"I think the authors should attempt to make a stronger case for the practical implications of their analysis: in particular, in the most practical setting (where we don't have a minimum obtained from synchronous training), what does the provided analysis allow us to do?\\\"\\n \\nOur analysis is true for steady state, i.e., at the proximity of a minimum. As mentioned correctly, in practical cases, the training doesn't always end in a minimum. However, we observe that the training modifications we derive from our analysis help improve stability during training, even before reaching a minimum. We added empirical evidence that support this claim in appendix section G. In this section, we used the experiment in Fig. 1 and sampled the validation error in different epochs during training (see Figure 13). As can be seen, for learning rate = 0.01, which is the optimal learning for the steady state (and matches our proposed modification of the learning rate), the validation error stays near optimal throughout training.\\n \\n(1b) \\\"Part of this might involve being more explicit about the results in Table 1: what exactly was the procedure for selecting the learning rates? Is it meaningfully different than just lowering the learning rate?\\\"\\n \\nOur procedure for determining the A-SGD learning rate is to take the learning rate used for large batch training and divide it by the delay. In Table 1, the learning rate we used for the large batch is as suggested in [1]: the baseline (small batch) learning rate, multiplied by the square root of the ratio between the sizes of the small batch and the large batch. \\n\\n(2) \\\"Equation (3) is rather obscure without the Appendix, especially since unbolded x hasn't been introduced anywhere. I think the authors should try to convey more of what's going on in this equation in the main text.\\\"\\n \\nWe added more details on this equation in the main text. In particular, below equation (3) we added the definition of unbolded x: \\\"$x_t$ is the projection of the expectation of the perturbation of $\\\\mathbf{x}_t - \\\\mathbf{x^{*}}$ on the eigenvector which corresponds to the maximal eigenvalue $a$\\\".\\n\\n(3a) \\\"Minor: 'looses' should be 'loses' throughout\\\" \\n \\nWe fixed this typo through the paper.\\n \\n(3b) \\\"it might be good to include a conclusion section.\\\"\\n \\nWe added a discussion section, which includes conclusions, and a discussion of implications (such as 1a above).\\n \\n[1] - Hoffer, E., Hubara, I., Soudry, D. (2017). Train longer, generalize better: closing the generalization gap in large batch training of neural networks.\"}",
"{\"title\": \"Reply to Reviewer #1\", \"comment\": \"We thank the reviewer for the positive and helpful feedback which allowed to improve the paper: we added a discussion section with two paragraphs elaborating on the Reviewer's questions, and how they lead into interesting directions for future research. We address the reviewer questions below:\\n \\n(1) \\\"Would introducing some sychronization help?\\\" \\n \\nThere are several methods to incorporate some synchronization, e.g. [1,2]. We agree it would be an interesting research direction to investigate how our stability analysis changes for such synchronization methods. Generally, based on our analysis for asynchronous methods with stochastic delay (Appendix C), we expect that, synchronization methods that reduce the delay in expectation will also help stability - enabling the use of larger learning rates. However, an exact answer would require choosing a specific method and calculating the stability threshold. \\n \\n(2) \\\"Is the lower learning rate hurting training speed when measures as wall-clock time to accuracy?\\\"\\n \\nIn our experiments we did not find a degradation in convergence speed when using the optimal learning rate scaling for the steady state (naturally, the situation might change in other datasets or models). For example, in Figure 1 right, we see that the learning rate at steady state (after 2000 epochs) which achieves optimal generalization is 0.01. The same learning rate achieves optimal convergence, as can be seen in Figure 1 left: its validation error curve is almost always lower then all other learning rates. To see this more clearly, we also added an additional figure to the paper appendix G (Fig. 13) . In this new figure, we use the experiment introduced in Fig. 1 and show the validation error during training sampled at different epochs. As can be seen, with learning rate 0.01 (which matches the proposed modification of the learning rate), the validation error stays near optimal through training, i.e. at the different epochs sampled training we observe that the lower learning rate (chosen according to our suggested modification) achieves smaller validation error compared with higher or lower learning rates. \\n \\nNote that, although this is not wall-clock measuring, as there is no degradation in convergence speed in terms of iterations (or epochs), this implies that using the proposed learning rate in A-SGD will be beneficial in terms of run time performance as well - in comparison to other learning rates. If, instead, we compare A-SGD (with the proposed learning rate) vs. S-SGD (synchronous training), then, when measuring performance with wall-clock time, one should consider the system hardware, its heterogeneously, the network bandwidth and etc. There are settings in which training asynchronously will greatly benefit the training time compared to synchronous training. In such settings, using lower learning rate might improve wall-clock time to reach accuracy compared to synchronous training, as suggested by our results in section 3.2. \\n \\n[1] - Assran, M., Loizou, N., Ballas, N., Rabbat, M. (2018). Stochastic Gradient Push for Distributed Deep Learning.\\n \\n[2] - Chen, J., Pan, X., Monga, R., Bengio, S. (2016). Revisiting Distributed Synchronous SGD.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors introduce a theoretical model for delayed gradients in asynchronous training. It is a very nice model and solving the corresponding differential equation allows to study its stability. Authors derive stability bounds for pure SGD (learning rate needs to decrease with delay) and for SGD with momentum, where they introduce a nice momentum formulation that improves stability. These are nice insights and good results and they are validated by experiments. More experiments and practical analysis would be welcome though. Some example questions: would introducing some sychronization help? Is the lower learning rate hurting training speed when measures as wall-clock time to accuracy?\\n\\nI am very grateful for the authors' response. It would still be good to see more experiments, but I hope this paper gets accepted.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors model A-SGD as a dynamical system, where parameters are updated with delayed gradients. The authors analyze the stability of this system, and they first derive that the learning rate must scale linearly with the inverse of the delay around a minimum to remain stable. Using a similar analysis they show that the standard way of incorporating momentum into A-SGD requires small learning rates for high momentum values, and they propose \\\"shifted momentum,\\\" which allows for stability under higher momentum values. Experimentally, the authors show that around minima the learning rate needed to retain stability scales linearly with the inverse of the delay, that there appears to be an analogous threshold when training models from scratch, that shifted momentum allows for higher momentum values, and finally that on several datasets A-SGD with an appropriate learning rate is able to generalize at least as well as large batch synchronous training.\\n\\nThis is a nice paper with a large number of interesting theoretical and experimental results, and I believe it should be accepted. I think there are some largely presentational issues that should be addressed, however:\\n\\n- I think the authors should attempt to make a stronger case for the practical implications of their analysis: in particular, in the most practical setting (where we don't have a minimum obtained from synchronous training), what does the provided analysis allow us to do? Part of this might involve being more explicit about the results in Table 1: what exactly was the procedure for selecting the learning rates? Is it meaningfully different than just lowering the learning rate?\\n\\n- Equation (3) is rather obscure without the Appendix, especially since unbolded x hasn't been introduced anywhere. I think the authors should try to convey more of what's going on in this equation in the main text.\", \"minor\": \"'looses' should be 'loses' throughout, and it might be good to include a conclusion section.\"}"
]
} |
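The learning-rate rule described in the replies above (reply to Reviewer #3, point 1b) combines the square-root batch-size scaling of Hoffer et al. (2017) with a division by the gradient delay. Below is a minimal sketch of that rule; the function and argument names (`asgd_learning_rate`, `base_lr`, `delay`) are illustrative and not taken from the paper.

```python
import math

def asgd_learning_rate(base_lr, small_batch, large_batch, delay):
    # Step 1 (Hoffer et al., 2017): scale the small-batch learning rate by
    # the square root of the batch-size ratio to obtain a large-batch rate.
    large_batch_lr = base_lr * math.sqrt(large_batch / small_batch)
    # Step 2 (authors' reply, point 1b): divide the large-batch learning
    # rate by the delay to keep A-SGD stable near a minimum.
    return large_batch_lr / delay

# Example with assumed values: base lr 0.1 at batch 128, an emulated
# large batch of 1024, and a gradient delay of 8 workers.
print(asgd_learning_rate(0.1, 128, 1024, 8))  # ~0.035
```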
BJxlmeBKwS | FRICATIVE PHONEME DETECTION WITH ZERO DELAY | [
"Metehan Yurt",
"Alberto N. Escalante B.",
"Veniamin I. Morgenshtern"
] | People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms. These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more perceptible for the people with the condition. Fricative phonemes have an important part of their content concentrated in high frequency bands. It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times. Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids. In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus. All reported results are reproducible and come with easy to use code that could serve as a baseline for future research.
| [
"fricative detection",
"phoneme detection",
"speech recognition",
"deep learning",
"hearing aids",
"zero delay",
"extrapolation",
"TIMIT"
] | Reject | https://openreview.net/pdf?id=BJxlmeBKwS | https://openreview.net/forum?id=BJxlmeBKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5YHirJCSqt",
"SJl7IornsB",
"SygAaFH2iB",
"rJxfNtr3iS",
"B1x0i7iTFB",
"rke_KL2hFS",
"S1esJzqhFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742986,
1573833546998,
1573833157877,
1573833002429,
1571824549652,
1571763840462,
1571754466797
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2197/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2197/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2197/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2197/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2197/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2197/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers appreciate the importance of the problem, and one reviewer particularly appreciated the gains in performance. However, two reviewers raised concerns about limited novelty and missing comparisons to prior work. While the rebuttal helped address these concerns, the novelty is still limited. The authors are encouraged to revise the presentation to clarify the novelty.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you and review reply\", \"comment\": \"We would like to thank the reviewer for carefully evaluating our work and for providing very thoughtful comments. Surely, these comments have already helped us improve the paper. We are also happy that the reviewer enjoyed the paper.\\n\\nBelow we give answers to the specific comments of the reviewer.\", \"q1\": \"The authors selected raw speech signal as input to the CNN, which is not trivial and should be motivated and discussed in the paper. For instance, using the standard features like Mel filterbanks or MFCC will introduce a delay as they are computed on an overlapping window of 25ms. Phoneme recognition using raw speech as input to a CNN has been presented before, the authors should cite [1] and [2] for instance.\", \"a1\": \"We added Section 2.3 to the paper discussing this issue. Shortly, we did experiment with MFCC-type approach followed by a recurrent neural network. We were not able to achieve the quality of Net25 on the task, but we were able to get within 5% of the accuracy of Net25 with a propitiatory filterbank and much more computationally efficient processing. We plan to publish these results in another paper. We cited [1] and [2] as the reviewer suggested.\", \"q2\": \"Table 4 is confusing as the first four lines are not actually evaluated on TIMIT, so I don't see the point of adding these numbers to the table, as they cannot be compared anyway. I would remove these four lines from the Table.\", \"a2\": \"We agree. We moved this part of the table to the appendix, just for reference of an interested reader.\", \"q3\": \"In terms of previous works, phoneme recognition on TIMIT is a very popular task, and many others could be cited, such as [3-5].\", \"a3\": \"We cited [3-5] appropriately as the reviewer suggested.\", \"q4\": \"On the computation consideration, the analysis is interesting, but a discussion on the size (i.e. number of parameter) of the network is missing: one way to decrease computation time is to have a smaller network, which is in line with the application: hearing aids probably do not have gigabytes of ram available.\", \"a4\": \"We added a table with all the relevant details into Section 4 of the paper. It was already stated in the paper that Net25 has 1.1M parameters. Also, we have experimented with a much smaller recurrent neural network, please see answer to Q1.\", \"q5\": \"Question about the network: the input segment seems to be of size 3072 samples, why ? any motivation for this particular input size?\", \"a5\": \"We discussed this more in the paper: the choice is empirical, based on experimentation. The input window covers 2-3 preceding phonemes.\"}",
"{\"title\": \"Thank you\", \"comment\": \"We would like to thank the reviewer for the positive evaluation of our work. We agree that evaluating our method on more datasets will make the work more solid. Unfortunately, we were not able to accomplish this quickly enough to add to the paper at this point, but we will do this in the future.\"}",
"{\"title\": \"Thank you and review reply\", \"comment\": \"We would like to thank the reviewer for carefully reading the paper and providing very helpful comments.\\nWe improved the paper based on these comments and submitted a revision. Below we address the specific comments of the reviewer one-by-one.\", \"q1\": \"The authors chose to model the problem from the raw wave. Although it is getting popularity in several speech processing tasks, it is not clear why not using magnitude/MFCC, for example. In case the authors claim that learning from the waveform is better, I suggest providing a comparison to other features.\\nAdditionally, did the authors experience with simpler architectures Maybe more shallow models? Regarding supervision, did the authors tried comparing to the method proposed by [2] but with a unidirectional RNN? Similar to [3].\", \"a1\": \"We added Section 2.3 to the paper discussing this issue. Shortly, we did experiment with MFCC-type approach followed by a recurrent neural network. We were not able to achieve the quality of Net25 on the task, but we were able to get within 5% of the accuracy of Net25 with a propitiatory filterbank and much more computationally efficient processing. We plan to publish these results in another paper.\\n-------\", \"q2\": \"If I understand it correctly, the motivation for this task was: accurate detection of fricatives boundary can be used to shift into lower frequency bands in hearing aids. It seems like the boundaries are more important than other phoneme parts such as a mid phoneme, for example.\\nIn that case, a better metric might be Presicion + Recall + F1 + R-val next to the boundaries (for instance, with a tolerance level of 10-20ms). Those metrics were suggested on several studies of phoneme segmentation, [1], [2].\", \"a2\": \"We added an evaluation using these metrics in appendix B. For 20 ms threshold on each side, the r-value we achieved is 0.6. Note that the network is not optimized for this task. We did not have sufficient time to dig deeper into sources of errors on these metrics, therefore these results are in the appendix and not in the main paper. Also, it looks like the r-val in formula (3) in [2] has a typo in (R+1 -OS)/sqrt(2). The correct formula from [R\\u00e4s\\u00e4nen, Laine, Altosaar] has r2 = (\\udbff\\udc0e-OS + R - \\udbff\\udc0e1)/sqrt(2).\\n-------\", \"q3\": \"The comparison in Table 3 is very strange. Results are reported on different datasets. Although the authors mentioned it in the caption, it is still misleading. I suggest the authors to compare either obtain results on the same benchmark or compare to other baselines.\", \"a3\": \"We agree. We moved this part of the table to the appendix, just for reference of an interested reader.\\n-------\", \"q4\": \"Minor comments:\\n\\\"If for the majority of the samples in a phoneme our network\\u2019s output is greater than the threshold we set\\\" -> not a clear sentence.\", \"a4\": \"We rephrased this sentence.\\n\\n[2] Michel, Paul, et al. \\\"Blind phoneme segmentation with temporal prediction errors.\\\" arXiv preprint arXiv:1608.00508 (2016).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"====================================== Updated Review =====================================\\nI would like to thank the authors for providing more experiments and details regarding their work.\\nHowever, after reading the authors rebuttal, I still think that there is more work to do in terms of comparison to prior work. That way it would be much clearer what is contribution of this work, and how it can be used for future research in that field. \\n\\nHence, I would like to keep my score as is. \\n==========================================================================================\\n\\nThis paper describes a method for fricative phonemes boundary detection with zero delays. The authors suggest optimizing a convolutional based neural network with a binary cross loss function to detect such events. \\nThe authors provide results on the TIMIT dataset and compare the proposed model to several baselines. \\n\\nThe task of phoneme boundary detection was well studies under different setups and is very important for various applications, including the one proposed in this paper.\\n\\nHowever, I have some major concerns regarding this paper, which I would like the authors to clarify. Without these, it is hard to understand the contribution in this paper.\\n\\n1) the authors chose to model the problem from the raw wave. Although it is getting popularity in several speech processing tasks, it is not clear why not using magnitude/MFCC, for example. In case the authors claim that learning from the waveform is better, I suggest providing a comparison to other features. \\nAdditionally, did the authors experience with simpler architectures Maybe more shallow models? Regarding supervision, did the authors tried comparing to the method proposed by [2] but with a unidirectional RNN? Similar to [3].\\n\\n2) If I understand it correctly, the motivation for this task was: accurate detection of fricatives boundary can be used to shift into lower frequency bands in hearing aids. It seems like the boundaries are more important than other phoneme parts such as a mid phoneme, for example. \\nIn that case, a better metric might be Presicion + Recall + F1 + R-val next to the boundaries (for instance, with a tolerance level of 10-20ms). Those metrics were suggested on several studies of phoneme segmentation, [1], [2]. \\n\\n3) The comparison in Table 3 is very strange. Results are reported on different datasets. Although the authors mentioned it in the caption, it is still misleading. I suggest the authors to compare either obtain results on the same benchmark or compare to other baselines.\", \"minor_comments\": \"\\\"If for the majority of the samples in a phoneme our network\\u2019s output is greater than the threshold we set\\\" -> not a clear sentence. \\n\\n[1] Franke, Joerg, et al. \\\"Phoneme boundary detection using deep bidirectional lstms.\\\" Speech Communication; 12. ITG Symposium. VDE, 2016.\\n[2] Michel, Paul, et al. \\\"Blind phoneme segmentation with temporal prediction errors.\\\" arXiv preprint arXiv:1608.00508 (2016).\\n[3] Adi, Yossi, et al. \\\"Automatic Measurement of Voice Onset Time and Prevoicing Using Recurrent Neural Networks.\\\" INTERSPEECH. 2016.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper apples supervised deep learning methods to detect exact duration of a fricative phoneme in order to improve practical frequency lowering algorithm. A major challenge compared to existing work is to have an algorithm with nearly zero delay while preserving detection accuracy. A deep convolutional neural network is trained for this purpose and it is validated on TIMIT dataset.\\n\\nAfter a careful preprocessing of the data, long segments of raw audio are given as input to the convolutional net. It is trained as a binary classification problem. Therefore, for each different phenome, a different network is needed. To improve the accuracy, Majority voting is also adopted. This however increases the computational cost. To address this issue, an extrapolation detection problem is formulated to predict the fricative phoneme a few ms in advance. Extensive numerical results show that the approach still outperforms the method of Ruinskiy & Lavner (2014) in Unweighted Average Recall. \\n\\nI find the accuracy attained by the neural nets quite impressive, although more insights would be favored to understand what is going on. This is yet an interesting application of deep learning useful for real-life problems. If the method could be tested on another dataset, the result would be more convincing.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"## Updated review\\n\\nI have read the rebuttal. I have some concerns about the new version of the paper: The addition of section 2.3 about the MFCCs is welcome but feels a bit out of place. The first part about the MFCC is interesting and relevant, but it could be in the introduction as a motivation. The second part about the \\\"proprietary high-quality time-frequency filterbank\\\" is not clear at all. Firstly, results are discussed so this part should be in the Evaluation section. Secondly, why using proprietary filterbanks and not the standard Mel filterbanks ?\\n\\nGiven that the rebuttal and the new version of the paper didn't address my major concerns, I am keeping my original rating.\\n\\n## Original review\\n\\nThis paper presents an approach to detect fricative phoneme in speech with as little delay as possible, in the context of hearing aids improvement. The model is based on CNN and is trained to detect fricative given the past context. The model is evaluated in terms of recall and compared with recent published works. The results show that the proposed approach outperforms the baselines and yields state-of-the-art performance with no delay. The paper concludes with some analysis on the computational cost and draw possible future work.\", \"this_paper_should_be_rejected_for_the_following_reasons\": [\"The novelty is very limited: this work applied a well-known architecture (CNN) to a common problem, phoneme recognition. This only novelty is the zero-delay constraint, which is probably not sufficient for ICLR.\", \"The significance is also limited given the very specialized application.\", \"Some references are missing (see below).\", \"The presented results are not very clear.\", \"The computational considerations section is interesting but is missing some important elements.\"], \"detailed_comments\": [\"The authors selected raw speech signal as input to the CNN, which is not trivial and should be motivated and discussed in the paper. For instance, using the standard features like Mel filterbanks or MFCC will introduce a delay as they are computed on an overlapping window of 25ms. Phoneme recognition using raw speech as input to a CNN has been presented before, the authors should cite [1] and [2] for instance.\", \"Table 4 is confusing as the first four lines are not actually evaluated on TIMIT, so I don't see the point of adding these numbers to the table, as they cannot be compared anyway. I would remove these four lines from the Table.\", \"In terms of previous works, phoneme recognition on TIMIT is a very popular task, and many others could be cited, such as [3-5].\", \"On the computation consideration, the analysis is interesting, but a discussion on the size (i.e. number of parameter) of the network is missing: one way to decrease computation time is to have a smaller network, which is in line with the application: hearing aids probably do not have gigabytes of ram available.\", \"Question about the network: the input segment seems to be of size 3072 samples, why ? 
any motivation for this particular input size ?\", \"My review can seem to be a bit harsh, I actually enjoyed the paper, but I don't think ICLR is the right conference for it, and I would advise the authors to improve it and submit it to a speech conference.\"], \"references\": \"[1] Palaz, D., Magimai Doss, M. and Collobert, R.. \\\"Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks.\\\" Proceedings of Interspeech 2013.\\n[2] Zeghidour, N., Usunier, N., Kokkinos, I., Schaiz, T., Synnaeve, G., & Dupoux, E. \\\"Learning filterbanks from raw speech for phone recognition\\\". Proceedings of ICASSP 2018.\\n[3] Zhang, Ying, Mohammad Pezeshki, Phil\\u00e9mon Brakel, Saizheng Zhang, Cesar Laurent, Yoshua Bengio, and Aaron Courville. \\\"Towards end-to-end speech recognition with deep convolutional neural networks.\\\" arXiv preprint arXiv:1701.02720 (2017).\\n[4] Chorowski, Jan K., et al. \\\"Attention-based models for speech recognition.\\\" Advances in neural information processing systems. 2015.\\n[5] T\\u00f3th, L\\u00e1szl\\u00f3. \\\"Phone recognition with hierarchical convolutional deep maxout networks.\\\" EURASIP Journal on Audio, Speech, and Music Processing 2015.1 (2015): 25.\"}"
]
} |
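The boundary-detection metrics discussed in this thread (Precision, Recall, F1, R-value) can be computed from the recall (hit rate) and over-segmentation rates. Below is a sketch of the R-value of Räsänen et al., using the corrected r2 formula quoted in the rebuttal; rates are assumed to be fractions here, whereas the original papers express them in percent.

```python
import math

def r_value(recall, over_segmentation):
    # r1 measures distance from the ideal (recall=1, OS=0) operating point.
    r1 = math.sqrt((1.0 - recall) ** 2 + over_segmentation ** 2)
    # Corrected r2 from the rebuttal: r2 = (-OS + R - 1)/sqrt(2).
    r2 = (-over_segmentation + recall - 1.0) / math.sqrt(2.0)
    return 1.0 - (abs(r1) + abs(r2)) / 2.0

print(r_value(1.0, 0.0))    # perfect segmentation -> 1.0
print(r_value(0.76, 0.05))  # ~0.77, illustrative values only
```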
BygkQeHKwB | Walking on the Edge: Fast, Low-Distortion Adversarial Examples | [
"Hanwei Zhang",
"Teddy Furon",
"Yannis Avrithis",
"Laurent Amsaleg"
] | Adversarial examples of deep neural networks are receiving ever increasing attention because they help in understanding and reducing the sensitivity to their input. This is natural given the increasing applications of deep neural networks in our everyday lives. When white-box attacks are almost always successful, it is typically only the distortion of the perturbations that matters in their evaluation.
In this work, we argue that speed is important as well, especially when considering that fast attacks are required by adversarial training. Given more time, iterative methods can always find better solutions. We investigate this speed-distortion trade-off in some depth and introduce a new attack called boundary projection (BP) that improves upon existing methods by a large margin. Our key idea is that the classification boundary is a manifold in the image space: we therefore quickly reach the boundary and then optimize distortion on this manifold. | [
"Deep learning",
"adversarial attack"
] | Reject | https://openreview.net/pdf?id=BygkQeHKwB | https://openreview.net/forum?id=BygkQeHKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"nC_Gq-WDZl",
"HJxpocuhir",
"H1xcvIbwsH",
"BJeE0SbvsS",
"r1eb-VbvsS",
"rJxIPmZPjH",
"SJxJXx2Bcr",
"S1eWEHyecr",
"BJgdRMs6tB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742958,
1573845669092,
1573488225779,
1573488075780,
1573487609462,
1573487453819,
1572352022793,
1571972392784,
1571824336388
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2196/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2196/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2196/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2196/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2196/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2196/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2196/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2196/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"In this paper the authors highlight the role of time in adversarial training and study various speed-distortion trade-offs. They introduce an attack called boundary projection BP which relies on utilizing the classification boundary. The reviewers agree that searching on the class boundary manifold, is interesting and promising but raise important concerns about evaluations on state of the art data sets. Some of the reviewers also express concern about the quality of presentation and lack of detail. While the authors have addressed some of these issues in the response, the reviewers continue to have some concerns. Overall I agree with the assessment of the reviewers and do not recommend acceptance at this time.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary to all reviewers and paper updates\", \"comment\": \"This is a response to all reviewers, meant to summarize the main points and the updates we have made to the paper.\\n\\n1. We would like to thank all reviewers for their in-depth feedback. As a result of this discussion, we are improving a lot our paper. For each point below, we discuss the corresponding updates we have done or we shall do in the paper. Most new material has now been added in the appendices. In the final paper we will re-organize.\\n\\n\\n2. We are already evaluating attacks against robust models including adversarial training by DDN, which is superior than (Madry et al, ICLR\\u201918), in Table 3.\", \"update_of_the_paper\": \"In Appendix B.1, we now study the impact of the attack parameter: $\\\\alpha$ and $\\\\gamma_{\\\\min}$.\\nIt turns out that performance is stable wrt $\\\\alpha$ provided it is large enough. As for $\\\\gamma_{min}$, the best value is 0.7 whatever the value of $\\\\alpha$.\\n\\n\\n6. We shall include a justification of the schedule of $\\\\gamma_i$ (14) summarizing the discussion we had with Reviewer 1 on the behavior of the algorithm.\\n\\n\\n7. We shall move the \\\"speed - distortion trade-off\\\" (now appendix B.2) to the main paper.\"}",
"{\"title\": \"Response to Reviewer 2 (part 2/2)\", \"comment\": \"Comment 3: \\u201cAlso, a popular way to evaluate model robustness would be to evaluate the attack success rate under a given upper bound of distortion (e.g. 0.3 for MNIST). If there is no constraint on the distortion, we can always achieve a 100% attack success rate by simply use an image from another class..\\u201d\", \"response_3\": \"This comment, in fact, applies equally to all our results in Tables 1-3, so we discuss it in this general context.\\n\\nThis kind of evaluation is popular indeed, but is biased to target distortion attacks.\\nAs explained in Section 2.2, target distortion attacks address the first form (2-3) of the problem: maximize the success rate subject to the upper bound of distortion. Our algorithm is clearly a target success attack addressing the second form (4-5) of the problem: minimize distortion subject to attack success. Hence our evaluation in Tables 1-3 is biased to this second form, having attack rates close to 1 and measuring quality in terms of distortion.\\n(Note that not all target success attacks have a 100\\\\% success rate because of the additional constraint in complexity as given by the limited number of iterations.)\\n\\nA fair comparison of both types of attacks is tricky and cannot be done as proposed by the reviewer. For this reason, Section 4.2 explains a new protocol, also introducing operating characteristics as in Fig. 3. This protocol is actually another contribution to our work. It unifies both forms into a single benchmark, facilitating fair comparisons of target distortion attacks (FGSM, I-FGSM, PGD) and target success attacks (C&W, DDN, BP). To evaluate target distortion attacks, we vary the upper bound epsilon and take successful adversarial examples of minimal distortion. This applies to all measurements (statistics and operating characteristics).\\n\\nBy taking a fixed distortion (x-axis) in Figure 3, the characteristic function gives a probability of success (y-axis) equal *by definition* to the attack success rate under a given upper bound on distortion, as suggested by the reviewer. Similarly, by taking a fixed probability of success (y-axis), the characteristic function gives a distortion (x-axis) equal to the least upper bound of distortion required to achieve such a success rate.\\n\\nSo, what the reviewer is recommending was already contained in the operating characteristic. To facilitate its reading, we will extract success rates for a given upper bound of distortion from the operating characteristics and report them in Tables 1-3 as recommended. As we strongly believe that operating characteristic is the only fair comparison, we will add the corresponding ones of Table 3 as stated in points 1,2 above.\", \"comment_4\": \"\\u201cGenerating adversarial examples on natural models is rather a well-solved problem and I do not think a 0.1 decrease in L2 norm is a big contribution since it is already so small that humans cannot distinguish.\\u201d\", \"response_4\": \"This statement is questionable from many perspectives.\\n\\nNumber 0.1 is used here as an example of a small number without any reference measurement. For instance, our improvement over DDN on ImageNet (the most realistic dataset) at 100 iterations is 0.15 (0.43 -> 0.28), a relative decrease of -34%. More importantly, at 20 iterations, the improvement is 0.83 (1.18 -> 0.35), an impressive relative decrease of -70%.\\n\\nWe agree that humans may not distinguish so low distortions. 
Yet, since operating characteristics rarely cross, an attack with lower average distortion has in general higher success rate for a given upper bound of distortion. In other words, improvements in distortion (subject to success) are in general equivalent to improvements in success rate (under an upper bound of distortion).\\n\\nIs generating adversarial examples a well-solved problem? Not really if one constrains the number of iterations, which is central to our approach. Very recent attacks keep reporting improvements in distortion (in natural models included) with hundreds of thousands of iterations. This is not great progress. As far as we know, DDN is the only state of art attack that produces adversarial examples with low distortion at few iterations, allowing its use in adversarial training. Our attack further improves this state of the art.\\n\\nIs generating adversarial examples a well-solved problem? Not really if one takes into account quantization. Images are quantized in the real world, but almost all academic papers do not consider this constraint. Quantization jeopardizes the adversarial perturbation especially at small distortion: After quantization, some real matrices are no longer adversarial. However, only we and DDN (Rony et al., 2019) consider this effect. This is the reason why we evaluate different attacks with quantization (Table 2) and without quantization (Table 4). It turns out that our attack works well within both cases.\"}",
"{\"title\": \"Response to Reviewer 2 (part 1/2)\", \"comment\": \"Thank you for your careful and valuable comments. We address your concerns point by point.\", \"comment_1\": \"\\u201cIt would better to also evaluate the method on the state of the art robust models (such as Madry et al ICLR'18) instead of only testing it on natural models.\\u201d\", \"response_1\": \"All attacks keep reporting performance on natural models. For completeness, this is the first kind of evaluation every attack should have.\\nMoreover, we do compare to robust models too. And not just that; we also use our attack to build an even more robust model.\\n\\nTable 3 compares three robust models obtained by adversarial training, each using a different attack including ours.\\nAs a defense, our attack (BP) has a similar or better performance than DDN. According to (Rony et al., 2019), the model obtained by adversarial training based on the DDN attack beats the robust model of (Madry et al, ICLR\\u201918). Moreover, the training process of DDN is more efficient since standard (clean) model training is followed by few epochs using adversarial examples alone; while (Madry et al ICLR\\u201918) train from scratch using a mix of clean and adversarial examples. This is why we have chosen DDN as a state of the art defense, which we further improve with our BP defense.\", \"comment_2\": \"\\u201cI do not think the results in Table 3 are convincing or necessary. It is well-known that the FGSM is so weak that the adversarial examples produced by it are not strong enough for adversarial training. The state of the art adversarial training defense uses the adversarial examples obtained from PGD. ... With the current results, I do not believe the robust training with BP can be any better than FGSM. Similar issues also exist in Table 2.\\u201d\", \"response_2\": \"According to Table 3, adversarial training with our BP is similar or better than DDN in terms of distortion, and it also beats adversarial training with FGSM by a large margin in all cases (higher distortion for all attacks). In fact, FGSM defense was included as a baseline since it was the first method used for adversarial training. We agree with the reviewer's comment that FGSM is weak.\\n\\nBesides, since our attack is fast, the adversarial training is fast. Table 3 is then necessary, showing improvements in defense over DDN, which in turn improves over PGD (Madry et al, ICLR'18), which in turn improves over FGSM. For completeness, we shall add PGD and the original (Madry et al ICLR\\u201918) defenses to make Table 3 more convincing, as well as corresponding operating characteristics like those of Fig. 3.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your careful and valuable comments. We address your concerns point by point below.\\n\\nAs general feedback, we fail to see how the concerns discussed here could support a \\\"weak reject\\\" recommendation, especially given the low confidence and quick assessment as stated. Could you please let us know if there are any other issues?\", \"comment_1\": \"\\\"However, despite the excess of the recommended 8 pages, the main parts of the proposed method are not so clearly explained.\\\"\", \"response_1\": \"We have put a lot of effort in describing the method with motivation, description, pseudo-code and diagrams.\\nThe other two reviewers clearly understand our method. R3 even says the paper is a good read.\\nCould you please elaborate on what is not clear? We are willing to make it as clear as possible.\", \"comment_2\": \"\\\"It is not so clear which parts of the proposed method (Section 3) are mathematically justified. For example, \\\\gamma_i in Eq. (14) looks heuristically introduced.\\\"\", \"response_2\": \"This concern is on using heuristics rather than on clarity.\\n\\nThe $\\\\gamma_i$ heuristic (14) builds on a simpler idea of DDN (Rony et al., 2019), where parameter $\\\\gamma$ is constant across iterations.\\n\\nAs explained, $\\\\gamma_i$ controls the distortion:\\nIn stage 1, updates are small at the beginning to keep distortion low, then larger until the attack succeeds.\\nIn stage 2, updates are decreasing as $\\\\gamma_i$ tends to 1. It increases the distortion when the current image is correctly classified (IN) and decreases the distortion when the current image is adversarial (OUT).\\nAll this behavior is controlled by a single parameter, which simplifies the algorithm.\\n\\nThe fact that $(\\\\gamma_i)_i$ is strictly increasing allows us to show that, in Stage 2, an IN iteration (distortion grows by $1/\\\\gamma_i > 1$) followed by an OUT iteration (distortion decays by $\\\\gamma_{i+1} < 1$) is indeed equivalent to a milder IN in the sense that the distortion grows by $\\\\gamma_{i+1}/\\\\gamma_i$ which is larger than 1 but smaller than $1/\\\\gamma_i > 1$. Similarly, OUT followed by IN is equivalent to a mild OUT in the sense that distortion decays by $\\\\gamma_i/\\\\gamma_{i+1} < 1$. Both cases lead towards the class boundary by a factor that tends to 1: if the algorithm keeps alternating between OUT and IN and we only look at the OUT iterates (remember, all attacks output the successful iterate of least distortion), this is equivalent to strictly decreasing distortion. This behavior is more stable than having a constant parameter $\\\\gamma$ as in DDN. We shall add this jusitification.\\n\\nFrom all the possible increasing sequences $(\\\\gamma_i)_i$ that go to 1 as i goes to the maximum number of iterations, we pick the simplest one: a linear sequence. That is the only heuristic.\", \"comment_3\": \"\\u201cAlthough the abstract and introduction emphasize that the main focus of BP is speed-distortion tradeoff, the experiments section does not discuss it so much and so clearly. While the operating characteristic of probability of success and distortion is mainly discussed, it is unclear which argument most demonstrates the improvement in speed-distortion tradeoff.\\u201d\", \"response_3\": \"The speed-distortion trade-off is partially addressed by reporting results for 20 and 100 iterations in Tables 1, 2, 3. A more complete treatment is given in Appendix B.1 and Fig. 
5, showing probability of success and distortion for more choices than 20 and 100 iterations.\\nFollowing the benevolent recommendation of R3, we will move Appendix B.1 and Fig. 5 to the main body of the paper.\\nAs attacks become more and more powerful, speed becomes as important as distortion and probability of success.\", \"comment_4\": \"p.5, l.7: 1(a) -> Figure 1(a);\\np.8, l.10: measure measure;\\np.8, right after Eq. (21): `\\\"`is conditioned is\\\" -> ``\\\"is conditioned\\\"\", \"response_4\": \"Thank you for spotting these mistakes. They are now corrected.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your careful and valuable comments. We address your concerns point by point.\", \"comment_1\": \"\\u201cHowever, one of the claims is to trade the distortion level to speed that needs verifying in the main manuscript, therefore, it is suggested that the section B.1 moves to the main manuscript and discussed more thoroughly.\\u201d\", \"response_1\": \"It is reasonable to move section B.1 to the main manuscript. This should also help with respect to point 3 of reviewer 1. We will find the space for it and update the manuscript, but this probably means moving some other material to an Appendix or otherwise significantly shortening or removing other material.\", \"comment_2\": \"\\u201cAlso the effect of other parameters on this trade-off (such as the number of iterations K).\\u201d\", \"response_2\": \"The effect of the number of iterations $K$ is exactly the same as $\\\\# grad$ as shown in Fig. 5 because we only calculate one gradient per iteration.\\nOther parameters are $\\\\alpha$ and $\\\\gamma_{min}$. We will add more results to show the effect of these parameters in an appendix.\", \"comment_3\": \"\\u201cIt is also interesting to discuss how the algorithm performs in classes that are linearly separable on a toy dataset.\\u201d\", \"response_3\": \"Considering Fig. 1, if the classes were linearly separable, the boundary and all iso-contours would be straight lines, the gradient would be normal to these lines, and all algorithms would move along a line normal to the boundary. The problem would then be one-dimensional, which is not interesting to display like Fig. 1. What may be more interesting is to plot the position on this line as a function of iteration. This we may attempt to include in an appendix.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a parameterized approach to generate adversarial samples by balancing the speed-distortion trade-off. The method first tries to reach the boundary of classes in the classifier space, then walks on the classifier manifold to find adversarial samples that make the classifier to fail in prediction while minimizing the level of distortion in the sample. Having a limited number of iterations, the method reduces the fluctuations around the boundary and paves the classification manifold.\\n\\nThe idea is novel, interesting and well-formulated, while the intuition could be better explained. The paper is a good read, has an adequate amount of literature review, and the results are supporting the claims of the paper: lower distortion while having comparable accuracy, the use of generated samples in fortifying the classifier, and keeping distortion to a reasonable level (qualitative results in appendix). However, one of the claims is to trade the distortion level to speed that needs verifying in the main manuscript, therefore, it is suggested that the section B.1 moves to the main manuscript and discussed more thoroughly. Also the effect of other parameters on this trade-off (such as the number of iterations K).\\n\\nIt is also interesting to discuss how the algorithm performs in classes that are linearly separable on a toy dataset.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed an adversarial attack method based on optimization on the manifold. The authors claim it is a fast and effective attack even with quantization.\\n\\nIt would better to also evaluate the method on the state of the art robust models (such as Madry et al ICLR'18) instead of only testing it on natural models. Generating adversarial examples on natural models is rather a well-solved problem and I do not think a 0.1 decrease in L2 norm is a big contribution since it is already so small that humans cannot distinguish. A better way to prove the strength would be to test it on a robust model to achieve higher success rates given a maximum distortion.\\n\\nI do not think the results in Table 3 are convincing or necessary. It is well-known that the FGSM is so weak that the adversarial examples produced by it are not strong enough for adversarial training. The state of the art adversarial training defense uses the adversarial examples obtained from PGD. Also, a popular way to evaluate model robustness would be to evaluate the attack success rate under a given upper bound of distortion (e.g. 0.3 for MNIST). If there is no constraint on the distortion, we can always achieve a 100% attack success rate by simply use an image from another class. So in Table 3, the authors may either make sure all attacks have a 100% success rate and compare the distortion, or set an upper bound of distortion and compare the success rate (just as in the operating characteristics plot). With the current results, I do not believe the robust training with BP can be any better than FGSM. Similar issues also exist in Table 2.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers efficiently producing adversarial examples for deep neural networks and proposes boundary projection (BP), which quickly searches an adversarial example around the classification boundary. The BP approach is tested on three benchmark datasets and compared with existing adversarial attacking methods.\\n\\nThe key idea of BP, searching on the class boundary manifold, is interesting and promising. However, despite the excess of the recommended 8 pages, the main parts of the proposed method are not so clearly explained.\\n\\n- It is not so clear which parts of the proposed method (Section 3) are mathematically justified. For example, \\\\gamma_i in Eq. (14) looks heuristically introduced.\\n- Although the abstract and introduction emphasize that the main focus of BP is speed-distortion tradeoff, the experiments section does not discuss it so much and so clearly. While the operating characteristic of probability of success and distortion is mainly discussed, it is unclear which argument most demonstrate the improvement in speed-distortion tradeoff.\\n\\np.5, l.7: 1(a) -> Figure 1(a)\\np.8, l.10: measure measure\\np.8, right after Eq. (21): `\\\"`is conditioned is\\\" -> ``\\\"is conditioned\"}"
]
} |
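The evaluation protocol described in the response to Reviewer 2 (part 2/2) reads success rates off the operating characteristic: for a distortion budget eps, the success rate is the fraction of images for which a successful adversarial example of distortion at most eps was found. A sketch with illustrative data follows; the array values below are assumptions, not numbers from the paper.

```python
import numpy as np

def success_rate_at_budget(min_distortions, eps):
    # min_distortions: per image, the smallest L2 distortion of any
    # successful adversarial example (np.inf when the attack failed).
    return float(np.mean(np.asarray(min_distortions) <= eps))

d = np.array([0.12, 0.28, 0.35, 0.9, np.inf])  # illustrative values
print(success_rate_at_budget(d, 0.5))  # 0.6
```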
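The response to Reviewer 1 describes the $\gamma_i$ schedule as a linear sequence increasing to 1, with stage-2 updates that grow distortion by $1/\gamma_i$ on IN steps and shrink it by $\gamma_i$ on OUT steps. The sketch below follows those stated rules only; the function names are illustrative and this is not the authors' released code.

```python
import numpy as np

def gamma_schedule(gamma_min, num_iters):
    # Linear sequence from gamma_min up to 1, as described in the response.
    return np.linspace(gamma_min, 1.0, num_iters)

def stage2_update(distortion, is_adversarial, gamma_i):
    # OUT (already adversarial): shrink distortion by gamma_i < 1.
    # IN (still correctly classified): grow distortion by 1/gamma_i > 1.
    return distortion * gamma_i if is_adversarial else distortion / gamma_i

d = 1.0
for g, out in zip(gamma_schedule(0.7, 20), [False, True] * 10):
    d = stage2_update(d, out, g)
# Each IN/OUT pair changes distortion by gamma_{i+1}/gamma_i, a factor
# that tends to 1, as argued in the response.
print(d)
```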
Bkx1mxSKvB | Disentangling Trainability and Generalization in Deep Learning | [
"Lechao Xiao",
"Jeffrey Pennington",
"Sam Schoenholz"
] | A fundamental goal in deep learning is the characterization of trainability and generalization of neural networks as a function of their architecture and hyperparameters. In this paper, we discuss these challenging issues in the context of wide neural networks at large depths, where we will see that the situation simplifies considerably. To do this, we leverage recent advances that have separately shown: (1) that in the wide network limit, random networks before training are Gaussian Processes governed by a kernel known as the Neural Network Gaussian Process (NNGP) kernel, (2) that at large depths the spectrum of the NNGP kernel simplifies considerably and becomes ``weakly data-dependent'', and (3) that gradient descent training of wide neural networks is described by a kernel called the Neural Tangent Kernel (NTK) that is related to the NNGP. Here we show that, by combining these results, in the large depth limit the spectrum of the NTK simplifies in much the same way as that of the NNGP kernel. By analyzing this spectrum, we arrive at a precise characterization of trainability and generalization across a range of architectures including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs). We find that there are large regions of hyperparameter space where networks will train but will fail to generalize, in contrast with several recent results. By comparing CNNs with and without global average pooling, we show that CNNs without average pooling have very nearly identical learning dynamics to FCNs, while CNNs with pooling contain a correction that alters their generalization performance. We perform a thorough empirical investigation of these theoretical results and find excellent agreement on real datasets. | [
"NTK",
"NNGP",
"mean field theory",
"CNN",
"trainability and generalization",
"Gaussian process"
] | Reject | https://openreview.net/pdf?id=Bkx1mxSKvB | https://openreview.net/forum?id=Bkx1mxSKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"2ohlSmPGnD",
"H1x-k_Lhir",
"HyxVdSLniB",
"HkxBD7UnsS",
"HJxQmJ8niB",
"rJgSrGqM5S",
"SkgIDMbRYS",
"r1x-LlrnFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742927,
1573836760753,
1573836140192,
1573835612552,
1573834522769,
1572147773509,
1571848798307,
1571733576778
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2195/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2195/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2195/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2195/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2195/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2195/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2195/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper investigates the trainability and generalization of deep networks as a function of hyperparameters/architecture, while focusing on wide nets of large depth; it aims to characterize regions of hyperparameter space where networks generalize well vs where they do not; empirical observations are demonstrated to support theoretical results. However, all reviewers agree that, while the topic of the paper is important and interesting, more work is required to improve the readability and clarify the exposition to support the proposed theoretical results.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Addressing major comments\", \"comment\": \"Thank you for your careful review of our work. We appreciate your time and agree that our exposition needs to be improved.\\n\\nWe have worked to make the exposition more friendly to newcomers and clearer to everyone. To do this we have made a few significant changes to our exposition. First, we have restricted the discussion in the main text to fully-connected networks and left a discussion of other architectures to the supplementary information. This allows us to improve the clarity of our exposition. We have also summarized the takeaway messages of our paper at the beginning and create a new table for our key results (Table 1 in the new version). \\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\n\\\"- In equation (2), the operator T is defined as the kernel K(x,x'). However, the definition seems different from that in equation (8). The authors need to make clear the definition of T.\\n\\\"\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for pointing this out; equation 8 is correct and we have made the change.\\n\\n\\n- What is the \\\"DC\\\" mode in the sentence above the equation (15)? \\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for raising this. It represents the eigenvector whose entries are all equal to 1. \\n\\n\\n\\\"- Is the derivation of the left part in equation (9) straightforward? How was the second term, chi_1 q^* p^(ell), derived? I'm not sure how the dot{T} was dealt with. The argument below equation (3) should be used? \\\" \\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks. The main source of confusion comes from typos in the definition of $\\\\chi_1$ in the paragraph below equation (8). The correction definition should be $\\\\chi_1 = \\\\sigma_w^2 \\\\dot T(q^*)$. We have added more details to the deviation of this equation; see equations (13) and (14) in the new version.\"}",
"{\"title\": \"Addressing major comments\", \"comment\": \"We thank you for your thorough review of our work. We agree with the overall comments that our exposition could be significantly clearer. We have taken initial steps in this direction and will continue to improve the clarity of our results.\\n\\nAs you correctly note, in the general case the spectrum of the NTK is deeply connected to specific properties of the dataset. This makes it difficult to study generalization and trainability in the general case and so simplifications must be made. Some approaches such as [1] choose to make progress here by considering simple data manifolds such as the unit hypercube or hypersphere. However, we note that another place where simplicity can emerge is in the large depth limit and we believe that we can use this limit to gain significant insight into the properties of networks described by the NTK. \\n\\nAt large depth we show that Theta becomes simple in the sense that it can be written as a small correction to the limiting kernel. Moreover, the scaling of the correction with depth is independent of dataset and can be studied in general for classes of neural network architectures. We leverage this property to make general comparisons between fully-connected networks and convolutional networks with-and-without pooling. \\n\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\n1. I don't really get how the spectrum of large-depth NTK is connected to generalization. At infinite depth, the NTK is just a trivial kernel Theta^*, as noted in the paper. It is claimed that a finite-depth correction Eqn. (7) \\\"captures the generalization.\\\" How exactly does it capture the generalization? Generalization appears to be highly dependent on the data distribution. I don't understand how the paper arrives at its conclusions regarding generalization.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for bring this up. We claim that this is a necessary condition for generalization. The linear operator $\\\\Delta^l$, in the infinite width setting (where the NTK captures dynamics), measures the distance between the finite depth (denoted by $l$) and the fixed-point (i.e. $l=\\\\infty$, data independent) prediction. We used this to lower bound the generalization error; see equation (9) in the new version. The norm of this operator decays exponentially in the ordered and chaotic phases, and polynomially on the critical line. A necessary condition for the generalization error to be small is $\\\\Delta^l$ shouldn\\u2019t be too small. Thus after $O(-\\\\log \\\\Delta^l)$ layers, the generalization error has to be large. Green and yellow lines in Figure 3 correspond to $-\\\\log \\\\Delta^l$ ~ constant. To pass these arguments from kernel to finite (large) width network, we applied results from Jacot, https://arxiv.org/abs/1806.07572; Lee, et al https://arxiv.org/pdf/1902.06720, etc, that the training dynamics of real network is $1/\\\\sqrt n$ away from its linearization and converges to the infinite width (i.e. analytic NTK) dynamics as $n$, the width of the network, goes to infinity.\\n\\nWe agree with the reviewer that any fine-grained analysis of generalization should take the data distribution, optimization method, etc. into account. 
However, finding a necessary condition for generalization of neural networks in terms of hyper-parameters is an important research question. We believe that the large depth setting allows us to study these questions in a uniquely systematic manner.\\n\\n\\n\\n\\n\\n2. The paper (esp. Section 3) is written in a way very unfriendly to someone who is not familiar with previous work, with notation, derivations and conclusions buried in paragraphs. I wish there were some theorems clearly and formally summarizing the conclusions.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nWe agree with this comment; thanks for the feedback! We have worked to make the exposition more friendly to newcomers and clearer to everyone. To do this we have made a few significant changes to our exposition. First, we have restricted the discussion in the main text to fully-connected networks and left a discussion of other architectures to the supplementary information. This allows us to improve the clarity of our exposition. We have also summarized the takeaway messages of our paper at the beginning and created a new table summarizing our key results (Table 1 in the new version). \\n\\n[1] Greg Yang, Hadi Salman, A Fine-Grained Spectral Perspective on Neural Networks\"}",
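The rebuttal above repeatedly appeals to $\chi_1 = \sigma_w^2 \dot T(q^*)$ as the quantity separating the ordered ($\chi_1 < 1$), chaotic ($\chi_1 > 1$), and critical ($\chi_1 = 1$) phases. Below is a minimal numerical sketch of that diagnostic for a tanh network; it is an illustration based on the mean-field definition $\chi_1 = \sigma_w^2\,\mathbb{E}_{z \sim N(0,1)}[\phi'(\sqrt{q^*}\,z)^2]$ from the literature cited in this thread, not code from the paper, and it treats the fixed-point variance q* as given.

```python
import numpy as np

def chi1(sigma_w2, q_star, deg=64):
    """chi_1 = sigma_w^2 * E_{z~N(0,1)}[tanh'(sqrt(q*) z)^2] via Gauss-Hermite quadrature."""
    z, w = np.polynomial.hermite_e.hermegauss(deg)   # nodes/weights for weight exp(-z^2/2)
    dphi = 1.0 - np.tanh(np.sqrt(q_star) * z) ** 2   # tanh'(x) = 1 - tanh(x)^2
    return sigma_w2 * np.sum(w * dphi ** 2) / np.sqrt(2.0 * np.pi)

for s2 in (0.5, 1.0, 3.0):
    c = chi1(s2, q_star=1.0)  # q* assumed; in general it solves the variance fixed-point map
    phase = "ordered" if c < 1 else ("chaotic" if c > 1 else "critical")
    print(f"sigma_w^2={s2}: chi_1={c:.3f} ({phase})")
```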
"{\"title\": \"Addressing major comments\", \"comment\": \"Thank you for your extremely thorough reading of our paper. We appreciate your time and agree that our exposition needs to be improved. We have taken steps towards this in a round of revisions and will continue to improve the clarity of our writing. We believe our paper will be stronger as a result.\\n\\n\\n1) It looks to me that Eq(2) and Eq(6) are contradictory, where T already contains sigma_w and simga_b in Eq(2) but re-multiplied in eq(6).\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for pointing out the typos. The correct definition of $T$ does not contain $\\\\sigma_b$, $\\\\sigma_w$. Please see Eq(2) and (12) in the new version. \\n\\n\\n2) The paper analyzes the dynamics by assuming the variances of inputs are q*, which is debatable. The variance q^l also evolves with the depth increases. It is unclear whether the condition number will change if you takes the evolution of q^l into considered.\\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nIn practice, the diagonal term $q^l$ converges much faster then the off-diagonal; see Figure 1 in Ben Pool etc. https://arxiv.org/pdf/1606.05340.pdf. This observation has been widely used in followup work to analyze the dynamics with q^l -> q* with excellent agreement [1,2,3]. Moreover, one can of course choose to normalize data so that the norm is exactly q* in the first layer. Nonetheless, we agree that we could do a better job of making this point in the text and have added a few sentences to this effect.\\n\\n\\n3) It is unclear how Eq(9) comes directly from Eq(6), and there aren't any rigorous proofs in the Appendix. Similarly for eq(14).\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for pointing out the gap between (6) and (9). The main source of confusion comes from typos in the definition of $\\\\chi_1$ in the paragraph below equation (8). The correction definition should be $\\\\chi_1 = \\\\sigma_w^2 \\\\dot T(q^*)$. We have add new equations (Eq (13), (14) in the new version) to bridge the gap. \\n\\n\\n4) In the paragraph below Eq(11) the paper states that \\\\Theta* becomes an all one-matrix. However, Eq(11) states the diagonal converges to q*/(1-xi_1), but the paragraph below Eq(9) states the off-diagonal converges to q*_{ab}/(1- xi_c). Because q*=q*_{ab} as you stated nearby, do you mean xi_1 = xi_c ? \\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nWe could indeed improve the clarity of our discussion here. Note that in the ordered phase (where \\\\Theta* becomes an all-ones matrix) it is indeed the case that \\\\chi_c* == \\\\chi_1 because c* = 1 is the only stable fixed-point. \\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\n\\n5) In the first paragraph of Section 3.3, p^l = q* and p^l=l q*. \\n\\n\\nThanks. The first equation should be $q^l = q^*$. 
\\n\\n\\n6) In the 2nd contribution, you mentioned \\\"eigenvector correlation\\\", while I cannot find anywhere else introducing this.\\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for pointing this out. We should have been more precise about this. We were referring to $\\\\Delta^l$. We have worked to clarify the exposition around $\\\\Delta^l$ in general.\\n\\n\\n7) The plots of Figure 1(b) should be convex if kappa really evolves like x_1^l / l. However, they are concave. \\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for bringing this up. To capture the polynomial correction, the Y-axis is indeed set to be $\\\\chi_1^l \\\\kappa^l$ (rather than $\\\\kappa$), which should be roughly $1/l$ for large depth. We will make the labels more visible.\\n\\n\\n8) In the first experiment, you state \\\"To confirm that the maximal feasible learning rate are ... 2/(lambda_max)\\\". However, learning rates are never discussed in this paper. It is confusing why this experiment is useful. \\n\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for pointing this out. We agree we should have clarified our interest in the maximum feasible learning rate. We have two reasons for investigating this point: 1) While it has been hypothesized in a number of recent papers that the maximum feasible learning rate scales like $\\\\frac 2 {\\\\lambda_{max}}$, we are not aware of a systematic study of this point. 2) In order to conduct subsequent experiments it was necessary to scale the learning rate appropriately, since $\\\\lambda_{max}$ varies by several orders of magnitude over the range of hyperparameters / depths that we study.\"}",
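As a companion to point 8, here is a small, self-contained illustration (my own toy construction, not the paper's experiment) of reading off $\lambda_{max}$, the condition number, and the hypothesised maximal feasible learning rate $2/\lambda_{max}$ from a kernel Gram matrix; a random feature map stands in for the NTK, which in practice would be built from network Jacobians.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))                    # 64 toy inputs in 10 dimensions
feats = np.tanh(X @ rng.normal(size=(10, 512)))  # random features standing in for Jacobians
Theta = feats @ feats.T / 512.0                  # 64 x 64 kernel Gram matrix

eig = np.linalg.eigvalsh(Theta)                  # eigenvalues in ascending order
lam_max, lam_min = eig[-1], eig[0]
print(f"lambda_max = {lam_max:.3g}")
print(f"condition number kappa = {lam_max / lam_min:.3g}")
print(f"hypothesised max feasible learning rate 2/lambda_max = {2.0 / lam_max:.3g}")
```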
"{\"title\": \"Addressing minor comments\", \"comment\": \"3. It's unclear whether the studied regime (large depth, probably even larger with) is relevant in practice. Although there are experimental results provided, the CNN experiments are for the infinite-width NTK. It's unclear how they look like for practical networks.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\n\\nLarger models often give superb performance, e.g. WideResNet, BERT https://arxiv.org/abs/1810.04805 (increasing the number of layers and heads), Efficient-Nets (increasing the resolution of the input images, widths and depths simultaneously) https://arxiv.org/abs/1905.11946. How to scale up the size of the models correctly and find the right range of hyper-parameters (weight/bias variances, learning rates, etc.) so that the models are able to train and generalize are important research problems in deep learning. Our paper gives insights into these questions: e.g. increasing the depth hurts generalization and trainability in the ordered phase, while in the chaotic phase, improve trainability but test performance degrades, showing a trade-off between generalization and optimization. Note that previous works were not able to train deep networks in the chaotic phase because the learning rates are not chosen correctly. Understanding the effects of architecture modules (pooling, normalization, dropout, etc.) to generalization and trainability is critical for architecture design. For example, we show that there is a trainability-generalization trade-off for `average pooling`: it increases the condition number of the NTK by $d$ (the window size of the pooling layer), but slow down the decay of $\\\\Delta^l$ by $d$. We show that (new version of the paper), dropout improves the trainability of the network (or more precisely, the condition of the NTK) in the ordered phases and induces an implicit regularization that is similar to L2-regularization.\\n\\n\\n\\n4. There are numerous typos and grammar errors in the paper, even in abstract and introduction.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nThanks for pointing this out! We have worked to improve our exposition and cleaned up the writing.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the spectra of neural tangent kernels (NTKs) at large depth -- first let width go to infinity, and then let depth go to infinity. At infinite depth the kernel has the form a*identity+b*(all-one matrix), and the paper studies how the large-depth NTK converges to the limit in three cases: chaotic, ordered, and critical line. The paper draws connection between these behaviors with the trainability and generalization of corresponding neural networks. Furthermore, the difference between CNNs with and without global average pooling is studied.\\n\\nNTK has been a popular subject of research in deep learning theory, and it's an interesting direction to study the NTK in large depth. However, the exposition is confusing and I'm missing some key points of this paper. Therefore I cannot recommend acceptance at this time. See below for detailed comments.\\n\\n1. I don't really get how the spectrum of large-depth NTK is connected to generalization. At infinite depth, the NTK is just a trivial kernel Theta^*, as noted in the paper. It is claimed that a finite-depth correction Eqn. (7) \\\"captures the generalization.\\\" How exactly does it capture the generalization? Generalization appears to be highly dependent on the data distribution. I don't understand how the paper arrives at its conclusions regarding generalization.\\n\\n2. The paper (esp. Section 3) is written in a way very unfriendly to someone who is not familiar with previous work, with notation, derivations and conclusions buried in paragraphs. I wish there were some theorems clearly and formally summarizing the conclusions.\\n\\n3. It's unclear whether the studied regime (large depth, probably even larger with) is relevant in practice. Although there are experimental results provided, the CNN experiments are for the infinite-width NTK. It's unclear how they look like for practical networks.\\n\\n4. There are numerous typos and grammar errors in the paper, even in abstract and introduction.\\n\\n\\n------\", \"update\": \"Thanks to the authors for the response, especially the clarification about what they mean by generalization. Since the concern about the exposition is still present, I can only update my rating to \\\"weak reject.\\\" I hope the authors could further improve the exposition of this paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the evolution of Neural Tangent Kernel (NTK) at large-depth regimes. By analyzing the conditional number and eigenvalues, they identify three phases of hyper-parameters; 1) In the chaotic phase NTK converges to an identity matrix, which is easy to train but hardly to generalize. 2) In the ordered phase NTK converges to an all-one matrix, which is hard to train but generalizes well. 3) In the critical phase the conditional number converges to a constant. Furthermore, they also analyze the influence of pooling and flattening in CNNs and identify potential regimes where pooling hurts the generalization. They conduct empirical experiments to supporting their theoretical analyses.\\n\\nHowever, I think this paper is worth of more revisions because many theoretical analyses are unjustified. And some potential typos makes the analyses even more difficult to understand.\\n\\n1) It looks to me that Eq(2) and Eq(6) are contradictory, where T already contains sigma_w and simga_b in Eq(2) but re-multiplied in eq(6).\\n2) The paper analyzes the dynamics by assuming the variances of inputs are q*, which is debatable. The variance q^l also evolves with the depth increases. It is unclear whether the condition number will change if you takes the evolution of q^l into considered.\\n3) It is unclear how Eq(9) comes directly from Eq(6), and there aren't any rigorous proofs in the Appendix. Similarly for eq(14).\\n4) In the paragraph below Eq(11) the paper states that \\\\Theta* becomes an all one-matrix. However, Eq(11) states the diagonal converges to q*/(1-xi_1), but the paragraph below Eq(9) states the off-diagonal converges to q*_{ab}/(1- xi_c). Because q*=q*_{ab} as you stated nearby, do you mean xi_1 = xi_c ? \\n5) In the first paragraph of Section 3.3, p^l = q* and p^l=l q*. \\n6) In the 2nd contribution, you mentioned \\\"eigenvector correlation\\\", while I cannot find anywhere else introducing this.\\n7) The plots of Figure 1(b) should behave like convex if the kappa really evolves like x_1^l / l. However it is concave. \\n8) In the first experiment, you state \\\"To confirm that the maximal feasible learning rate are ... 2/(lambda_max)\\\". However, learning rates are never discussed in this paper. It is confusing why this experiment is useful. \\n\\nGenerally speaking, I think the paper needs careful revisions to support its theoretical analyses.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper studies the relation between trainability and generalization ability in deep neural networks. In the theoretical analysis, the authors used the Neural network Gaussian process (NNGP) kernel and Neural Tangent kernel (NTK). The paper clarified that the spectrum of the NTK and NNGP has an important role in investigating the generalization and trainability, i.e., the condition number of the NTK. Some numerical experiments showed an agreement of theory with the practical behavior of learning algorithms.\", \"In this paper, some existing theoretical results on deep neural networks were combined to extract new insight. Thought the attempt of this paper is interesting, the readability of the paper is not necessarily high.\", \"In equation (2), the operator T is defined as the kernel K(x,x'). However, the definition seems different from that in equation (8). The authors need to make clear the definition of T.\", \"What is the \\\"DC\\\" mode in the sentence above the equation (15)?\", \"Is the derivation of the left part in equation (9) straightforward? How was the second term, chi_1 q^* p^(ell), derived? I'm not sure how the dot{T} was dealt with. The argument below equation (3) should be used?\"]}"
]
} |
HyeJmlrFvH | Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization | [
"Ali Ramezani-Kebrya",
"Fartash Faghri",
"Ilya Markov",
"Vitalii Aksenov",
"Dan Alistarh",
"Daniel M. Roy"
] | As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf. | [
"sgd",
"nonuniform quantization",
"variants",
"qsgd",
"size",
"complexity",
"models",
"datasets",
"need",
"stochastic gradient descent"
] | Reject | https://openreview.net/pdf?id=HyeJmlrFvH | https://openreview.net/forum?id=HyeJmlrFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"aNi8w9akuD",
"BkxWwBzniS",
"rygJKdFDjr",
"HJeK8dYwir",
"rkgOQdKDjS",
"SklTsSGviH",
"Bkxbah715H",
"r1xsybtTFB",
"SJxLkbwatH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742898,
1573819737265,
1573521526991,
1573521489080,
1573521439666,
1573492133122,
1571925177391,
1571815651509,
1571807454156
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2194/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2194/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2194/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2194/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2194/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2194/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2194/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2194/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a communication-efficient data-parallel SGD with quantization. The method bridges the gap between theory and practice. The QSGD method has theoretical guarantees while QSGDinf doesn't, but the latter gives better result. This paper proves stronger results for QSGD using a different quantization scheme which matches the performance of QSGDinf.\\n\\nThe reviewers find issues with the approach and have pointed some of them out. During the discussion period, we did discuss if reviewers would like to raise their scores. Unfortunately, they still have unresolved issues (see R1's comment).\", \"r1_made_another_comment_recently_that_they_were_unable_to_add_to_their_review\": \"\\\"The proposed algorithm and the theoretical analysis does not include momentum. However, in the experiments, it is clearly stated that momentum (with a factor of 0.9) is used. Thus, it is unclear whether the experiments really validate the theoretical guarantees. And, it is also unclear how momentum is added for both NUQSGD and EF-SGD, since momentum is not mentioned in Algorithm 1 in this paper, or the paper of QSGD, or the paper of EF-SignSGD. (There is a version of SignSGD with momentum *without* error feedback, called SIGNUM).\\\"\\n\\nWith the current score, the paper does not make the cut for ICLR, but I encourage the authors to revise the paper based on reviewers' feedback. For now, I recommend to reject this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes\", \"comment\": \"We will be posting a new version of the paper momentarily. This note summarizes the changes:\\n\\n1. We now report results comparing NUQSGD with error-corrected methods, notably EF-SIGNSGD, on ImageNet. We find that our techniques are superior. In particular, we had to perform significant hyperparameter tuning to even get the error corrected methods (EF-SIGNSGD) to converge. Once we got them to converge, the communication benefits had largely disappeared. We emphasize that our methods achieve full accuracy and speedup under the baseline hyperparameter settings, and do not require additional tuning. This is essential on data sets like ImageNet where tuning is extremely expensive. We also include learning curves when \\u2018time\\u2019 is the x-axis.\\n\\n\\n2. In the appendix, we prove that, for any given set of levels, there exists a distribution of points with dimension d such that the variance is in Omega(sqrt{d}), and so our bound is tight in d. \\n\\n3. Regarding our upper bound and its dependence on s: In the appendix, we now derived the optimal worst-case variance upper bounds expressed as an integer QP. We present several relaxations of this bound and plot its dependence on s and d in the appendix. \\n\\n4. In the appendix, we now state the implications of our work for convergence on nonconvex problems. As stated in the paper, these results are standard. The important work in this setting is control of the variance and communication cost.\\n\\n5. We've made various other minor improvements to notation, explanations, etc.\\n\\nWe would welcome suggestions as to what material we might promote to the main body (rather than the appendix). We have left most changes in the appendix to ease the reviewers job in finding these new contributions.\"}",
"{\"title\": \"Response to Review 1\", \"comment\": \"Thanks for your feedback. Below is our specific feedback to your review. We have also posted a general response (see our top-level comment) to all reviewers addressing high level points.\\n\\n>In this paper, a very important reference and baseline is missing, which is call error-feedback SGD [1]. Although the title of [1] focuses on SignSGD, it provides a general algorithm for arbitrary compressor with an error/variance bound similar to Theorem 2 in this paper, no matter the compressor is unbiased or not. Since [1] provides the SOTA results for quantized SGD, the proposed algorithm should be compared to it in the experiments.\\n\\nWe agree that it is interesting to compare NUQSGD with error-corrected methods (although we feel this comparison is orthogonal to the problem of closing the performance--theory gap between QSGD and QSGDinf.) We are running experiments comparing NUQSGD with error-corrected methods and hope they are finished before the rebuttal period. Regardless, we will include them in the final paper. One important note is that error-corrected signSGD, sparsified methods, and TernGrad require non-trivial additional parameter tuning to reduce accuracy loss (learning rate. momentum, and warmup tuning---see e.g. \\\"Deep Gradient Compression\\\"). By contrast, our experiments target the setting where training is performed with standard hyperparameters as the full-precision version, and we are able to recover full accuracy in this regime. This is the standard set by QSGD, which is closer to practical applications.\\n\\n>This paper claims to have strong theoretical guarantees. However, the theoretical analysis only works for convex functions. Note that the theoretical analysis in [1] also works for non-convex functions. \\n\\nUsing standard arguments, NUQSGD does provide guarantees in the non-convex case as well, since the quantized stochastic gradients are still unbiased. (This is the same for QSGD.) We mention this in the paper, after Theorem 4: \\\"On nonconvex problems, convergence guarantees can be established along the lines of, e.g., (Ghadimi and Lan, 2013, Theorem 2.1).\\\" In particular, this results gives convergence to a second-order stationary point. These are virtually the same guarantees as error-corrected signSGD. We will include the nonconvex convergence statement in the updated paper.\\n\\n>Regardless of the convergence guarantees (which is weak considering the existing theorems in [1]). the proposed algorithm, NUQSGD, does not show improvement on the convergence, compared to the baseline QSGDinf. \\n\\nThe goal of the paper was to close the gap between QSGD and QSGDinf. QSGD provides theoretical guarantees but is empirically worse than QSGDinf. QSGDinf has no theoretical guarantees. NUQSGD matches the empirical performance of QSGDinf and has slightly stronger asymptotic guarantees than QSGD. We think this progress is worth reporting.\\n\\nSince submission, we have also improved our understanding of the variance bounds for NUQSGD. \\n\\nWe have proven that, for any given set of levels, there exists a distribution of points with dimension d such that the variance is in Omega(sqrt{d}), and so our bound is tight in d. We will include this proof in the updated version (forthcoming).\", \"regarding_our_upper_bound_and_its_dependence_on_s\": \"We have now derived the optimal worst-case variance upper bound for a fixed set of arbitrary levels, expressed as the solution to an integer program with quadratic constraints. 
We can relax the program to obtain a quadratic program. A coarser analysis yields an upper bound expressed as the solution to a linear program, which is more amenable to analysis.\\n\\nWe are now using these numerical tools to build insight, and will include some plots in the updated draft. For an exponentially spaced collection of levels of the form (0, p^s, ..., p^2, p, 1) for p in (0,1) and an integer number of levels, s, we have a numerical method for finding the value p that minimizes the worst-case variance, for any given s and d. We know that our current scheme is near optimal (in the worst case) according to the LP bound in some cases. Using these techniques we can get slightly tighter bounds numerically.\\n\\n>In Figure 3, the experiments only show loss vs. # of iterations, which does not show the actual training time.\\n\\nRegarding simulation-based learning curves with respect to time, if different compression schemes are run on the same GPU, there will be no difference between any quantization methods. This does not hold for error-corrected methods though, since they require additional storage for the error. We will add convergence-versus-time plots to the updated version.\\n\\n>In Definition 1, in some cases s is a constant integer, and in some other cases s becomes a function, which is very confusing. I also hope the authors can highlight the definition of r and p, which are essential for understanding the nonuniform quantization mechanism. \\n\\nIn the revision, we clarify these definitions.\"}",
"{\"title\": \"Response to Review 3\", \"comment\": \"Thanks for your feedback. Below is our specific feedback to your review. We have also posted a general response (see our top-level comment) to all reviewers addressing high level points.\\n\\n>It would be great to include more theoretical analysis which demonstrates the importance of variance upper bound for convergence speed guarantee.\\n\\nWe have improved our understanding of the variance bounds for NUQSGD. \\n\\nWe have proven that, for any given set of levels, there exists a distribution of points with dimension d such that the variance is in Omega(sqrt{d}), and so our bound is tight in d. We will include this proof in the updated version (forthcoming).\", \"regarding_our_upper_bound_and_its_dependence_on_s\": \"We have now derived the optimal worst-case variance upper bound for a fixed set of arbitrary levels, expressed as the solution to an integer program with quadratic constraints. We can relax the program to obtain a quadratic program. A coarser analysis yields an upper bound expressed as the solution to a linear program, which is more amenable to analysis.\\n\\nWe are now using these numerical tools to build insight, and will include some plots in the updated draft. For an exponentially spaced collection of levels of the form ((0,p^s, ... , p^2 ,p,1) for p in (0,1) and an integer number of levels, s, we have a numerical method for finding the value p that minimizes the worst-case variance, for any given s and d. We know that our current scheme is near optimal (in worst case) according to the LP bound in some cases. Using these techniques we can get slightly tighter bounds numerically.\\n\\n>In the experimental part, they control the hyperparameters including batch-size, base learning rate, momentum, and weight decay to be identical with each method. This may cause tuning biases (the setting may favor one method but hurt others' performance).\\n\\nWe agree that the performance of each method might slightly improve if we tune hyperparameters for that specific method. However, we are interested in a setting where training is performed with the same standard hyperparameters as those for the full-precision version. We would like to recover full accuracy in this regime. This is the standard set by the original work on QSGD, which is closer to practical applications where hyperparameter tuning is expensive. Again, the goal was to close the empirical performance gap with QSGDinf (we did) and the theoretical gap with QSGD (we did).\\n\\n>Although the paper mainly focuses on comparing with QSGD, there are several relative communication efficient training algorithms which I think are worth to compare empirically (at least one of them)\\n\\nAmong unbiased schemes, QSGDinf is state-of-the-art but it does not come with theoretical guarantees. QSGD has guarantees but worse performance. Our goal was to close this gap, and we achieved this goal. We think this progress is worth reporting. \\n\\nWe agree that it is interesting to compare NUQSGD with signed-based methods (although we feel this comparison is orthogonal to the problem of closing the performance--theory gap between QSGD and QSGDinf). Recently, error-feedback SGD has been shown to outperform signSGD. We are running experiments comparing NUQSGD with error-corrected methods and hope they are finished before the rebuttal period. Regardless, we will include them in the final paper. 
One important note is that error-corrected signSGD, sparsified methods, and TernGrad require non-trivial additional parameter tuning to reduce accuracy loss (learning rate, momentum, and warmup tuning---see e.g. \\\"Deep Gradient Compression\\\"). By contrast, our experiments target the setting where training is performed with the same standard hyperparameters as the full-precision version, and we are able to recover full accuracy in this regime.\\n\\n>In figure 4, the encoding cost is significantly increased from 4-bit to 8-bit NUQSGD. Any reason why it happens? Is it due to inefficient encoding implementation?\\n\\nIt is because the cost of the compression is proportional to the number of quantization points used, i.e., the number of quantization points for 8-bit is the square of the number of quantization points for 4-bit.\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"Thanks for your feedback. Below is our specific feedback to your review. We have also posted a general response (see our top-level comment) to all reviewers addressing high level points.\\n\\n>NUQSGD does not provide significant improvements in terms of the variance and communication cost.\\n\\nThe goal of the paper was to close the gap between QSGD and QSGDinf. QSGD provides theoretical guarantees but is empirically worse than QSGDinf. QSGDinf has no theoretical guarantees. NUQSGD matches the empirical performance of QSGDinf and has slightly stronger asymptotic guarantees than QSGD, and so we don't see the fact that the improvement is \\\"minor\\\" as undermining the significance. In practice, it's much better than QSGD.\\n\\n>We would expect NUQSGD to improve the dependence on the dimension d, which is more significant\\n\\nWe have proven that, for any given set of levels, there exists a distribution of points with dimension d such that the variance is in Omega(sqrt{d}), and so our bound is tight in d. We will include this proof in the updated version (forthcoming). \\n\\n>It would be great to add learning curves with the \\u2018time\\u2019 being the x-axis as well. Also, I would suggest the authors to record the time needed to proceed one iteration for each parallel algorithm to compare the communication cost.\\n\\nRegarding simulation-based learning curves with respect to time, if different compression schemes are run on the same gpu, there will be no difference between any quantization method. This does not hold for error-corrected methods though, since they require additional storage for the error. We will add convergence-versus-time bounds to the updated version. In addition, we will record the time needed to proceed one iteration for each parallel algorithm.\"}",
"{\"title\": \"General response to all reviewers\", \"comment\": \"Dear reviewers,\\n\\nThank you for your reviews. In summary, we received the following feedback (key issues):\\n\\n1. [Variance upper bound; R2,3]. The theoretical improvement over QSGD seems minor. Can stronger theoretical guarantees be obtained? In particular, can you tighten the variance bound in terms of d? \\n\\n2. [Nonconvexity; R1]. Can convergence results be obtained for nonconvex problems? \\n\\n3. [Sign-based methods; R1,3]. Is NUQSGD interesting if its performance is comparable to QSGDinf? How does NUQSGD compare with sign-based methods? \\n\\n4. [Loss vs time; R1,2] How do learning curves look if \\u2018time\\u2019 is the x-axis? \\n\\nWe agree these are important questions. We have a plan to address each of them. We describe that plan below. We hope that if we indeed succeed in executing this plan, you will raise your scores to 8!\\n\\nWe plan to make the following four changes to address the key issues/questions above. If you would require further changes to update your score to 8, please let us know!\\n\\n\\n*****\\n**1**\\n*****\\nThe goal of the paper was to close the gap between QSGD and QSGDinf. QSGD provides theoretical guarantees but is empirically worse than QSGDinf. QSGDinf has no theoretical guarantees. NUQSGD matches the empirical performance of QSGDinf and has slightly stronger asymptotic guarantees than QSGD, and so we don't see the fact that the improvement is \\\"minor\\\" as undermining the significance. In practice, it's much better than QSGD.\\n\\nThat said, we have improved our understanding of the variance bounds for NUQSGD.\", \"regarding_tightness_of_our_variance_bounds\": \"Reviewer 1 asks whether we can beat the O(sqrt{d}) dimension dependence in the variance bound. We have proven that, for any given set of levels, there exists a distribution of points with dimension d such that the variance is in Omega(sqrt{d}), and so our bound is tight in d. We will include this proof in the updated version (forthcoming).\", \"regarding_our_upper_bound_and_its_dependence_on_s\": \"We have now derived the optimal worst-case variance upper bound for a fixed set of arbitrary levels, expressed as the solution to an integer program with quadratic constraints. We can relax the program to obtain a quadratic program. A coarser analysis yields an upper bound expressed as the solution to a linear program, which is more amenable to analysis.\\n\\nWe are now using these numerical tools to build insight, and will include some plots in the updated draft. For an exponentially spaced collection of levels of the form ((0,p^s, ... , p^2 ,p,1) for p in (0,1) and an integer number of levels, s, we have a numerical method for finding the value p that minimizes the worst-case variance, for any given s and d. We know that our current scheme is near optimal (in worst case) according to the LP bound in some cases. Using these techniques we can get slightly tighter bounds numerically.\\n\\n\\n*****\\n**2**\\n*****\\nNUQSGD does provide guarantees in the non-convex case as well, since the quantized stochastic gradients are still unbiased. (This is the same for QSGD.) We mention this in the paper, after Theorem 4: \\\"On nonconvex problems, convergence guarantees can be established along the lines of, e.g., (Ghadimi and Lan, 2013, Theorem 2.1).\\\" In particular, this results gives convergence to a second-order stationary point. These are virtually the same guarantees as error-corrected signSGD. 
We will include the nonconvex convergence statement in the updated paper.\\n\\n\\n*****\\n**3**\\n*****\\nAmong unbiased schemes, QSGDinf is state-of-the-art but it does not come with theoretical guarantees. QSGD has guarantees but worse performance. Our goal was to close this gap, and we achieved this goal. We think this progress is worth reporting. \\n\\nWe agree that it is interesting to compare NUQSGD with error-corrected methods (although we feel this comparison is orthogonal to the problem of closing the performance--theory gap between QSGD and QSGDinf). We are running experiments comparing NUQSGD with error-corrected methods and hope they are finished before the rebuttal period. Regardless, we will include them in the final paper. One important note is that error-corrected signSGD, sparsified methods, and TernGrad require non-trivial additional parameter tuning to reduce accuracy loss (learning rate, momentum, and warmup tuning---see e.g. \\\"Deep Gradient Compression\\\"). By contrast, our experiments target the setting where training is performed with the same standard hyperparameters as the full-precision version, and we are able to recover full accuracy in this regime. This is the standard set by QSGD, which is closer to practical applications.\\n\\n\\n*****\\n**4**\\n*****\\nRegarding simulation-based learning curves with respect to time, if different compression schemes are run on the same GPU, there will be no difference between any quantization methods. This does not hold for error-corrected methods though, since they require additional storage for the error. We will add convergence-versus-time plots to the updated version.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Brief summary of the paper:\\nThis paper studies data-parallel SGD that K processors work together to minimize an objective function. Each processor computes a stochastic gradient and broadcasts to other peers. In this distributed system, there is a trade-off between the *communication cost* from sharing the stochastic gradient and the *variance* from gradient quantization. This paper is a follow-up of Alistarh et al.\\u00a0(2017). It proposes a non-uniform (logarithmic) quantization scheme (NUQSGD). This paper provides theoretical analysis of the variance and communication cost of NUQSGD. Then the paper analyzes the convergence rate of NUQSGD for convex and smooth objective function. At the end, this paper empirically evaluates NUQSGD for image classification problem.\", \"originality_and_significance\": \"This paper follows up on the parallel SGD framework proposed by Alistarh et al.\\u00a0(2017), where the authors proposed QSGD using a uniform quantization. This paper proposes NUQSGD using a non-uniform quantization method. The quantization of the stochastic gradient amplifies the stochastic variance, which influences the rate of convergence of SGD. Thus, on one hand, it is important to design a quantization method to improve the variance, for the sake of convergence rate. On the other hand, it is also important to decrease the communication cost. NUQSGD does not provide significant improvements in terms of the variance and communication cost.\", \"theorem_2_and_theorem_3\": \"QSGD has a variance of min {d/s^2, \\\\sqrt{d}/s} and NUQSGD has a variance of min{O(d/2^{-2s}), O(\\\\sqrt{d/2^{-2s}})}. QSGD has communication cost of \\\\tilde O(s(s+\\\\sqrt{d})) and NUQSGD has communication cost of \\\\tilde O(2^{2s}\\\\sqrt{d} ). Compared to QSGD, we can see that NUQSGD improves the dependence on s for the variance term, but it has a worse (exponential) dependence on s for the communication cost. Usually s is a small number and it serves as a hyper-parameter to be tuned. We would expect NUQSGD to improve the dependence on the dimension d, which is more significant. However, NUQSGD has the same dependence on d as QSGD in terms of both variance and communication cost.\", \"experiments\": \"Figure 3 compares NUQSGD with other parallel SGD algorithms and vanilla SGD. Figure 3 shows how fast the training loss decreases with respect to iterations. It would be great to add learning curves with the \\u2018time\\u2019 being the x-axis as well. Also, I would suggest the authors to record the time needed to proceed one iteration for each parallel algorithm to compare the communication cost.\", \"quality_and_clarity\": \"This paper is well-written.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a new scheme for quantizing gradients which are followed by the previous work QSGD [1]. They show that it yields stronger theoretical guarantees than QSGD while showing a great empirical performance.\\nThe main difference between their scheme NUQSGD and QSGD is that they use nonuniform quantization (0, 1/2^{s}, \\u2026., 2^{s-1}/2^{s}, 1) instead of uniform quantization (0, 1/s, \\u2026, (s-1)/s,1). Intuitively, by the way, it could reduce quantization error and variance by better matching the properties of normalized vectors.\\nThe results are in 2 parts. First comparing with QSGD, they establish stronger convergence guarantees for NUQSGD, under standard assumptions. They also establish theoretical results for the variance upper bound and expected communication cost of their scheme. Second, they show strong empirical performance on deep models and a large dataset, with an efficient implementation in PyTorch.\\n\\nHowever, there are several issues and questions that if fixed or illustrated could be a great paper.\\n\\n\\t1) The author claim NUQSGD achieves stronger convergence guarantees comparing with QSGD but hasn't illustrated the point in detail. On page 6, the paragraph named 'NUQSGD vs QSGD' mainly claims that variance upper bound controls the guarantee on the convergence speed by empirically showing the results of variance upper bound. It would be great to include more theoretical analysis which demonstrates the importance of variance upper bound for convergence speed guarantee.\\n\\t2) In the experimental part, they control the hyperparameters including batch-size, base learning rate, momentum, and weight decay to be identical with each method. This may cause tuning biases (the setting may favor one method but hurt others' performance).\\n\\t3) Although the paper mainly focuses on comparing with QSGD, there are several relative communication efficient training algorithms which I think are worth to compare empirically (at least one of them). For example:\\n\\t\\ta. Deep Gradient compression [2]\\n\\t\\tb. signSGD [3]\\n\\t\\tc. TernGrad [4]\\n\\t4) In figure 4, the encoding cost is significantly increased from 4-bit to 8-bit NUQSGD. Any reason why it happens? Is it due to inefficient encoding implementation? \\n\\nI agree with the authors' point that it's worth to explore the interaction between NUQSGD with more complex reduction patterns like ring-based. Since the ring-based algorithm like all-reduce is more popular in practice nowadays, interacting with it would have a better practical meaning. \\n\\n[1] D. Alistarh, D. Grubic, J. Z. Li, R. Tomioka, and M. Vojnovic. QSGD: Communication-ef\\ufb01cient SGD via gradient quantization and encoding. In Proc. Advances in Neural Information Processing Systems (NIPS), 2017.\\n\\n[2] Lin Y, Han S, Mao H, Wang Y, Dally WJ. Deep gradient compression: Reducing the communication bandwidth for distributed training. arXiv preprint arXiv:1712.01887. 2017 Dec 5.\\n\\n[3] Bernstein J, Zhao J, Azizzadenesheli K, Anandkumar A. signSGD with majority vote is communication efficient and fault-tolerant. arXiv. 2018 Oct 11.\\n\\n[4] W. Wen, C. Xu, F. Yan, C. Wu, Y. Wang, Y. Chen, and H. Li. 
TernGrad: Ternary gradients to reduce communication in distributed deep learning. In Proc. Advances in Neural Information Processing Systems (NIPS), 2017.\"}",
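To make the scheme under discussion concrete, here is a hedged sketch of unbiased stochastic quantization onto the exponential levels (0, 2^{-s}, ..., 1/2, 1) described in this review. The level handling and edge cases are my own assumptions, and the authors' method also includes an encoding step not shown here; this is not their reference implementation.

```python
import numpy as np

def nuq_quantize(v, s, rng):
    """Unbiasedly quantize v onto levels {0, 2^-s, ..., 1/2, 1} scaled by ||v||_2."""
    levels = np.concatenate(([0.0], 2.0 ** np.arange(-s, 1)))  # 0, 2^-s, ..., 1/2, 1
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    r = np.abs(v) / norm                          # normalized magnitudes, all in [0, 1]
    j = np.minimum(np.searchsorted(levels, r, side="right") - 1, len(levels) - 2)
    lo, hi = levels[j], levels[j + 1]
    p = (r - lo) / (hi - lo)                      # round up with prob p, so E[q] = r
    q = np.where(rng.random(v.shape) < p, hi, lo)
    return np.sign(v) * norm * q                  # unbiased: E[output] = v

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
avg = np.mean([nuq_quantize(g, s=4, rng=rng) for _ in range(500)], axis=0)
print(float(np.abs(avg - g).max()))  # shrinks toward 0 as more draws are averaged
```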
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors propose a new gradient compression method, which is called nonuniform quantization. The algorithm is a reasonable variant of SGD with uniform quantization. The paper is well written. The experiments show good performance.\\n\\nHowever, there are several weakness in this paper:\\n\\n1. In this paper, a very important reference and baseline is missing, which is call error-feedback SGD [1]. Although the title of [1] focuses on SignSGD, it provides a general algorithm for arbitrary compressor with a error/variance bound similar to Theorem 2 in this paper, no matter the compressor is unbiased or not. Since [1] provides the SOTA results for quantized SGD, the proposed algorithm should be compared to it in the experiments.\\n\\n2. This paper claims to have strong theoretical guarantees. However, the theoretical analysis only works for convex functions. Note that the theoretical analysis in [1] also works for non-convex functions.\\n\\n3. Regardless of the convergence guarantees (which is weak considering the existing theorems in [1]). the proposed algorithm, NUQSGD, does not show improvement on the convergence, compared to the baseline QSGDinf.\\n\\n4. In Figure 3, the experiments only show loss vs. # of iterations, which does not show the actual training time. In Figure 4, training time is only shown for NUQSGD, which ignores the other baselines including QSGD and QSGDinf. What I really what to see is training loss (or testing accuracy) vs. training time (or communication overhead, such as number of bits), so that we can evaluate the trade-off between communication overhead and the convergence, compared to the baselines.\\n\\n\\n\\nMinor issue (I hope the authors can consider the following suggestions in a revised version. However, since the issue is minor, it doesn't affect the score):\\n\\n!. In Definition 1, in some cases $s$ is c constant integer, and in some other case $s$ become a function, which is very confusing and not friendly to the readers. I also hope the authors can highlight the definition of $r$ and $p$, which are essential for understanding the nonuniform quantization mechanism. \\n\\n\\n\\n\\n--------------\\nReference\\n\\n[1] Karimireddy, Sai Praneeth et al. \\u201cError Feedback Fixes SignSGD and other Gradient Compression Schemes.\\u201d ICML (2019).\"}"
]
} |
HkxCzeHFDB | Functional Regularisation for Continual Learning with Gaussian Processes | [
"Michalis K. Titsias",
"Jonathan Schwarz",
"Alexander G. de G. Matthews",
"Razvan Pascanu",
"Yee Whye Teh"
] | We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for Continual Learning, avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function. To achieve this we rely on a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Then, the training algorithm sequentially encounters tasks and constructs posterior beliefs over the task-specific functions by using inducing point sparse Gaussian process methods. At each step a new task is first learnt and then a summary is constructed consisting of (i) inducing inputs – a fixed-size subset of the task inputs selected such that it optimally represents the task – and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms. Our method thus unites approaches focused on (pseudo-)rehearsal with those derived from a sequential Bayesian inference perspective in a principled way, leading to strong results on accepted benchmarks. | [
"Continual Learning",
"Gaussian Processes",
"Lifelong learning",
"Incremental Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=HkxCzeHFDB | https://openreview.net/forum?id=HkxCzeHFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ghAs9VZgN8",
"HklXQqHKir",
"B1gBw8Q7oS",
"HJlEyIXXiH",
"HygVsr7miH",
"rJgDaEQXsr",
"HJeWwjLpqH",
"B1xoID52tS",
"HkeRF7LnKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742868,
1573636634868,
1573234268708,
1573234139743,
1573234075556,
1573233854637,
1572854617149,
1571755859376,
1571738501598
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2193/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2193/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2193/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2193/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2193/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2193/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2193/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2193/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors introduce a framework for continual learning in neural networks based on sparse Gaussian process methods. The reviewers had a number of questions and concerns, that were adequately addressed during the discussion phase. This is an interesting addition to the continual learning literature. Please be sure to update the paper based on the discussion.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 2 (Additional experiments)\", \"comment\": \"As promised, here are additional experimental results on Omniglot using an MLP. All experiments for VCL were run using the official implementation provided by the authors (https://github.com/nvcuong/variational-continual-learning/):\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nAlgorithm | Accuracy over all tasks at the end of training\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nVCL (No coreset) |\\t 48.4 +- 0.7\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n |1 points/class | 2 points/class | 3 points/class\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nVCL (Random coreset) | 49.18 +- 2.1 |\\t 50.50 +- 1.2\\t| 51.64 +- 1.0\\nVCL (K-center coreset) | 48.89 +- 1.1 |\\t 49.58 +- 1.4\\t| 49.61 +- 1.0\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nFRCL (Trace) | 48.84 +- 1.1 | 52.10 +- 1.2\\t| 53.86 +- 2.3\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\nFor all experiments, we used an MLP with 4 hidden layers of 256 units each and ReLU activations, a batch size of 100 and the Adam Optimiser (Step size of 0.001 for VCL and 0.0001 for FRCL). We optimised both types of algorithms independently and found the following parameters:\", \"vcl\": \"100 training epochs per task, 50 adaptation epochs to coreset*, Multi-Head\", \"frcl\": \"1500 training steps per task, 2000 discrete optimization steps, inducing points initialised as a uniform distribution over classes.\\n\\n* VCL with coresets relies on training task-specific (in the multi-head case) and shared parameters on the coreset of each task before evaluation. This means that for Omniglot, the algorithm eventually requires 50 copies of the same network. \\n\\nDoes this answer the reviewer's questions?\"}",
"{\"title\": \"Response to Reviewer 2 (Part 2)\", \"comment\": \"- Why consider only the output layer as a GP for CL: \\n\\nIt\\u2019s important to note that the entire network is implicitly regularised by the loss function, even though the GP is only formulated for the last layer. Indeed, we are not posing any constraints about how individual weights can move during optimisation, as long as the network as a whole succeeds at explaining the current tasks and all functional regularisation terms. This is the fundamental difference between VCL or EWC (Kirckpatrick et. al, 2017 [https://arxiv.org/abs/1612.00796 ]) which instead explicitly regularise each parameter. \\n\\nThis is actually interesting in the mentioned case where intermediate representations require a domain shift. Note that this will be necessary every time a new task is added, as the representations so far will not necessarily constitute optimal features for the next task. By allowing weights to vary freely as long as they explain all previous functions, we argue that intermediate representations can change more gracefully as opposed to the case where we force parameters to stay close to specific values.\\n\\nThe entire network is regularized. We will make this more explicit in the main text. Why the mechanism of selecting the induced points is based on which datapoints are most significant to the GP (which encodes just the output layer), these data-points are then used to regularize the entire model. \\n\\n- Comparison to VCL on Omniglot:\\n\\nWe agree that a comparison against VCL on Omniglot would be interesting. The primary reason this hasn\\u2019t been done in the initial submission is that reliable variational inference methods for CNNs (which are usually used for Omniglot) are yet to be developed. VCL relies on Mean-field VI which tends to not work very well for CNNs (see discussion here: https://openreview.net/forum?id=BkQqq0gRb ). We thus felt this was not a fair comparison, even though our method works well with both types of networks.\\n\\nNevertheless, we are happy to conduct an ablation experiment on Omniglot for both VCL variants and our method. Due to the fairness of comparison concerns and the fact that the official VCL implementation has no code for CNNs, we will focus on MLPs for this comparison. We will report those results asap.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your thoughtful review . Point 1) and 2) describes well the differences between regularising continual learning in the function space rather than on the weight space. To add to that, the motivation behind our method is that learning a supervised task corresponds to learning a function, and thus our method tries to \\u201cremember\\u201d the task by remembering direct posterior estimates over output values of that function at informative inputs. The \\u201cmoving target\\u201d comment made by the reviewer regarding methods that regularise based on posteriors over weights captures precisely the intuition of what mathematically we analyse in Section 2.5 and we are glad that the reviewer found this useful. Given that Section 2.5 provides a more technical explanation, we will try to follow reviewer suggestions and provide a more intuitive discussion earlier in the paper.\\n\\n\\n- Robustness on FRCL-based classifiers and impactfulness of hybrid representation for test-time performance:\\n\\nIt is hard to know how well calibrated are the uncertainties of FRCL since we do not know the ground-truth. FRCL is based on a neural network where we do Bayesian inference only over the final layer weights (Usually termed Deep Kernel learning (Wilson et al., AISTATS 2016) [http://proceedings.mlr.press/v51/wilson16.pdf ]). It is encouraging that according to the large scale study in (Riquelme et. al, 2018 [https://arxiv.org/abs/1802.09127 ]) such as (not fully Bayesian approach) is always one of the best techniques among many other methods in contextual bandits applications, where modelling well uncertainties within a Thompson sampling exploration framework is very crucial. The aforementioned work by (Wilson et al., AISTATS 2016) also shows that predictive uncertainty can be of high quality. Regarding b) if we do not apply the hybrid approach where inference over the current task is done in the weight-space, and instead we apply variational sparse GPs for the current task, the performance is significantly worse (we can add a Table for that in the Appendix for completeness). The reason is that the estimate of the posterior distribution $q(u_i)$ over the inducing values is not accurate enough both in terms of the mean and also in terms of variances (typically underestimation) when compared to $q(u_i)$ obtained by the hybrid approach. The latest allows fitting the current task with the tightest possible ELBO and getting the best possible approximate posterior (i.e. with no additional approximation error due to the inducing points and the variational sparse GP) which leads to better estimate of each $q(u_i)$ and subsequently better regularisation for continual learning. \\n\\n- GP approximations in terms of finitely many basis functions:\\n\\nThe degradation of predictive uncertainties of finite basis functions certainly occurs when the basis function are local, e.g radial basis functions, but typically does not occur for non local basis functions/activation units as the ones we typically use in neural networks. E.g. when the feature vector is defined by ReLUs or tanh activation functions, which are non local, the degradation of predictive variances is typically not observed as we move away from the training inputs. This is to some extent also confirmed by the task-boundary detection results.\"}",
"{\"title\": \"Response to Reviewer 2 (Part 1)\", \"comment\": \"Thank you for the useful comments. We acknowledge that we could improve the manuscript to provide a more intuitive introduction and allow the reader to contrast weight-space and function-space regularisation. We would be grateful to hear if the reviewer has any concrete suggestions under which they would consider an increase in their rating.\\n\\n- Which true posterior does the learned model correspond to? Would not it be a slightly more principled Bayesian approach.... knowledge transfer to assign the posterior of one task as the prior of the other?\\n\\nThe model learns a posterior distribution over each task-specific GP function f_i(x); Section 2.5 discusses the difference with weight-space posteriors. Regarding the second point, in Nguyen et al., 2017 it is important to distinguish between task-specific and shared parameters across tasks. From a Bayesian perspective it is not correct for the posterior over task-specific parameters to act as prior over the task-specific parameters of the next task (and this is not done in VCL; see discussion on page 4 before Section 4 in https://arxiv.org/pdf/1710.10628.pdf ). In contrast, we fully agree with the reviewer that the posterior distribution over shared parameters indeed must be the prior for the next task. In our formulation the only shared parameter is the feature vector parameter $\\\\theta$ which is constantly updated by point estimation (not Bayesian inference), i.e the initial value $\\\\theta$ for the next task is the final value from the previous task and etc. Therefore, the comment \\u201ctransfer to assign the posterior of one task as the prior of the other\\u201d is consistent with the way we learn $\\\\theta$, but instead of Bayesian inference we do point estimation for that parameter. In contrast, all output weights $w_i$ and their corresponding function vectors $f_i$ (and the subset $u_i$), obtained by repametrising from the weight-space to the GP space, are task-specific parameters. Therefore the variational posterior over $q(u_i)$ needs to be updated when we see data from the i-th task and it should not be the prior for the next task $i+1$. Therefore, our method is principled from Bayesian perspective. \\n\\nTo add more intuition, one can preserve our work as a mechanism for compressing data of previously seen tasks to the most significant data points that describe them. This is done individually (and sequentially) on each encountered task. These compressed sets (inducing points) and their posterior distributions $q(u_i)$ are then used for replay (through each KL term $KL(q(u_i) || p_{\\\\theta}(u_i))$) in order to not forget the previously seen tasks. From this perspective it becomes clear that concatenating the different compressed sets (the inducing points) of each task is the right thing to do to represent all tasks, and that the last set of inducing points will only be useful to recover that particular task. We will try to make this explanation clearer in the main text of the paper.\\n\\n- Is it variational inference or a hand-designed regularization term?\\n\\nIt is a principled variational inference procedure. As in any variational Bayesian procedure in the ELBO you automatically get KL divergence terms between posterior distributions and prior distributions (hence such terms also appears in VCL). 
Precisely the same holds for our method, where each term $KL(q(u_i) || p_{\\\\theta}(u_i))$ is the KL divergence between the posterior $q(u_i)$ and the prior $p_{\\\\theta}(u_i)$. Each KL term regularises the shared representation parameters $\\\\theta$ of the neural network, and therefore it regularises the full deep network. The main difference, however, is that in our GP-based formulation the parameters correspond directly to values of the output function (each vector $f_i$ and its subset $u_i$ are values of the task-specific function $f_i(x)$), which leads to functional regularisation, i.e. regularisation by preserving posterior knowledge about the task in the output function space, as opposed to the weight space. Therefore, our method is fundamentally different from methods like VCL and EWC that regularise based on the weight space (e.g. by using variational posteriors over task-specific weights $q(w_i)$). We have mathematically analysed this difference in Section 2.5; see also the discussion about the \\u201cmoving target\\u201d point 1) made by Reviewer 3 above.\", \"description_of_figure_1\": \"We will improve the captioning of Figure 1, which is a very high-level description. The calligraphic L is the loss for the corresponding task (e.g. negative log likelihood for classification, or L2 for regression). The diagram also makes explicit the features $\\\\phi(x)$, which are simply features from a neural network. Thereafter, the output layer of our model is replaced by a GP. Once the task is learned, block B depicts what it means to define inducing points. Given these inducing points we can learn task 2, but now there is a regularisation term associated with task 1 (as explained above).\"}",
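For readers following this exchange, the objective being described — a weight-space ELBO for the current task plus one functional KL regulariser per past task — can be summarised schematically as below. The notation is adapted from the discussion above, not copied from the paper, so the exact form there may differ:

```latex
\mathcal{L}(\theta, q) =
\underbrace{\mathbb{E}_{q(w_k)}\!\left[\log p(\mathbf{y}_k \mid w_k)\right]
  - \mathrm{KL}\!\left(q(w_k)\,\|\,p(w_k)\right)}_{\text{ELBO for the current task } k}
\;-\; \sum_{i=1}^{k-1} \mathrm{KL}\!\left(q(\mathbf{u}_i)\,\|\,p_{\theta}(\mathbf{u}_i)\right)
```

Each KL term over inducing outputs $\mathbf{u}_i$ depends on $\theta$ through the prior $p_{\theta}$, which is what regularises the whole network in function space rather than constraining individual weights.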
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your excellent comments. We address questions below:\\n\\n- Is the joint ELBO across successive tasks still lower-bounding the actual objective?\\n\\nAs mentioned in Section 2.2 the ELBO across successive tasks is only an approximation to the full ELBO obtained under the constraint that we only maintain a subset $Z_1$ and not the full training set for the task 1. When $Z_1$ (and similarly each $Z_k$ for more than two tasks) is a random subset of $X_1$ then the approximation is, up to a constant, an unbiased approximation to the full exact ELBO (constructed similarly to stochastic/minibatch variational inference; see Hoffman et al. 2013 [http://www.columbia.edu/~jwp2128/Papers/HoffmanBleiWangPaisley2013.pdf]). This means that such an approximation might not be a strict lower bound on the exact log marginal likelihood (over the full data that are not kept in memory), but it lower bounds this log marginal likelihood stochastically, i.e. on expectation under the distribution associated with selecting a random $Z_k$ from $X_k$. When $Z_k$ is not chosen randomly we might lose the unbiasedness property, but we might get better approximation for performing continual learning since intuitively we want to maintain the most informative $Z_k$, as inducing inputs, for each task in order to avoid forgetting. \\n\\n- Changepoint detection:\\n\\nThanks for the reference. We agree that the connection to Changepoint detection is very interesting and will make this more precise. Note that the works by e.g. (Adams and Mackay (2007)[https://arxiv.org/abs/0710.3742] and Fearnhead (2006) [https://eprints.lancs.ac.uk/id/eprint/8189/]) are actually straight-forwardly applicable to the time series shown in (Fig. 5), which would be an elegant alternative to the t-test used in our initial submission. We will strive to include those results)\\n\\n- Regarding Section 2.3 and 2.4:\\n\\nThank you for this comment. Regarding the weight space inference for the current task that we do in Section 2.3 this step is rigorous since we never change or approximate the exact GP model, we simply reparametrize it. This is because the special form of the linear kernel $k(x,x\\u2019) = \\\\phi(x;\\\\theta)^\\\\top \\\\phi(x\\u2019;\\\\theta)$ allows to express two equivalent representations of the model: (i) over the weights $w_i$ and (ii) over the full vector of training function values $f_i$. The exact marginal likelihood can be written as\\n$$ \\np(y) = \\\\int p(y|w_i) p(w_i) d w_i = \\\\int p(y_i|f_i) p(f_i) d f_i,\\n$$ \\nwhere the second integral (the standard form for the exact GP marginal likelihood) is obtained by reparametrizing $f_{i,j} = w_i^\\\\top \\\\phi(x_j;\\\\theta), j=1, \\\\ldots,N_i$. This proves that an ELBO based on the first integral (over $w_i$) is an ELBO on the exact GP marginal likelihood. However, there are important computational differences. If you work with $f_i$ the complexity per optimisation step is $O(N^3$) but if you work with $w_i$ complexity is $O(b K^2)$, where $b$ is the minibatch size and $K$ the size of the feature vector. Both approaches lower bound the same exact GP marginal likelihood and therefore they approximate the same exact GP model (so the statement \\u201cresorting to an approximate model with weight spaces\\u201d is not true since inference over $w_i$ corresponds to inference under the exact model). 
The theoretical equivalence and computational differences between inference in function space and weight space for this particular type of model are discussed e.g. in (Williams, 1998) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.1226&rep=rep1&type=pdf and in the Rasmussen and Williams book. \\n\\nRegarding Section 2.4, we have chosen to perform discrete optimisation over the inducing points because we believe that continuous optimisation over the inducing inputs will be hard in high dimensions. E.g. consider Figure 4, where the first panel shows the initial inputs and panel c the final ones, found under discrete optimisation. While continuous gradient-based optimisation over the inducing inputs is in general very difficult, it may in practice be possible to initialise the continuous optimisation procedure by first taking a few steps of discrete optimisation.\"}",
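The weight-space/function-space equivalence claimed in this reply is easy to check numerically for a fixed feature map. Below is a minimal sketch; the toy dimensions, variable names, and unit-Gaussian prior on the weights are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
N, K, noise = 50, 10, 0.1          # data points, feature dim, noise variance
Phi = rng.normal(size=(N, K))      # stand-in for neural features phi(x; theta)
y = rng.normal(size=N)

# Function-space view: exact GP with linear kernel Phi Phi^T, O(N^3) cost.
lml_f = multivariate_normal(mean=np.zeros(N),
                            cov=Phi @ Phi.T + noise * np.eye(N)).logpdf(y)

# Weight-space view: Bayesian linear regression via Woodbury, O(N K^2) cost.
A = Phi.T @ Phi / noise + np.eye(K)               # posterior precision of w
_, logdet_A = np.linalg.slogdet(A)
quad = (y @ y - y @ Phi @ np.linalg.solve(A, Phi.T @ y) / noise) / noise
lml_w = -0.5 * (quad + logdet_A + N * np.log(2 * np.pi * noise))

assert np.allclose(lml_f, lml_w)   # same exact log marginal likelihood
```

Both expressions evaluate the same exact marginal likelihood; only the cost of computing it differs, which is the computational point made in the reply.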
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThe authors propose a method to perform continual learning with neural networks by incorporating variational Gaussian Processes as a top layer (also called Deep Kernel Learning) and constructing an objective utilizing the inducing inputs and outputs to memorize across tasks.\\nThey further study ways to approximate this behavior with weight space models and use their model for task boundary detection by utilizing statistical tests and Bayesian model selection.\\nExperiments show good performance of their method.\", \"comments\": \"1. The mathematical formulation of the basic model is very elegant. However, it is not immediately clear to me that the joint ELBO across successive tasks is still lower-bounding the actual objective.\\n2. The paper is well written overall.\\n3. To the best of my knowledge using such a model for task boundary detection is novel and quite interesting. There are obvious links to Bayesian changepoint detection in the timeseries setting. Possibly these links would be made more clear by a citation to a recent paper such as Spatio-temporal Bayesian On-line Changepoint Detection with Model Selection by Knoblauch and Damoulas, or any other paper of similar content. The link is quite fascinating.\\n4. Sections 2.3 and 2.4 of the paper are the weakest points and quite unsatisfactory as they forgo the elegance of the proposed approach to do \\\"something else\\\" that Sec. D explains how to salvage with \\\"tricks\\\". Especially with regards to Sec. 2.4, why can't we just do inference on Z_i and have to pick datapoints via discrete optimization? That comparison would be useful in the experiments. Furthermore, recent papers utilizing GPytorch by Gardner et al have dramatically sped up GP inference. Could we aim to make the original idea fast enough to be used instead of resorting to an approximate model with weight spaces and corrections to extract Z and u per task?\\n5. The experiments are good, but very focused on MNIST tasks. I would appreciate tasks of different structure given how well the method appears to work.\", \"decision\": \"I find the basic idea of the paper quite appealing as it leverages the elegance of the deep kernel learning formulation to yield an attempt at a principled Bayesian version of continual learning and demonstrates empirical value. \\nSome discussion on the objective might be warranted to demonstrate that it actually lower bounds the true LLK.\\nI am quite happy with the task boundary detection section and would encourage the authors to strengthen the link to changepoint detection.\\nMy biggest qualms with the paper are that it departs from that strategy and performs weight space inference for training per task and then \\\"corrects\\\" to move back to the GP representation. A more convincing discussion would be welcome here.\\nThe experiments are functional and show good results, but I would appreciate more diversity in the tasks.\\nAs the paper stands I learn towards recommending acceptance and would strongly encourage the authors to iron out the weaknesses of the paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a function-space based approach to continual learning problems (CL), wherein a learned embedding\\n\\n $\\\\hat{\\\\mathbf{x}} = \\\\text{NN}(\\\\mathbf{x}; \\\\theta)$\\n\\nis shared between task-specific GPs s.t.\\n\\n $f_{i}(\\\\mathbf{X}) \\\\sim \\\\mathcal{N}(\\\\mathbf{0}, k_{i}(\\\\hat{\\\\mathbf{X}},\\\\hat{\\\\mathbf{X}}))$, \\n\\nwhere the $i$-th task's covariance $k_{i}$ is a defined via standard variational inducing points methods. CL manifests as KL divergences between tasks' variational posteriors $q_{i}$ and their respective priors $p_{i}$. Since the embedding helps define $p_{i}$, its parameters $\\\\theta$ are regularized to promote sharing.\\n\\nThe work investigates both practical and theoretical implications of this setup. On the practical side, the authors discuss enhanced 'on-task' inference via hybridization of function- and weight-space based approaches and, subsequently, strategies for optimizing inducing points. Additionally, a novel approach for automatically detect task switching is introduced that exploits the Bayesian aspects of the proposed framework.\\n\\nOn the theoretical side, points of (personal) interest revolved around differences between weight- and function-space approaches to CL. Here, I think that streamlining the presented argument would go a long ways. Paraphrasing, one of the authors' key insights is that:\\n\\n 1) CL in weight-spaces is hard, since weights' semantics are moving target that change along with shared parameters.\\n 2) CL in function-space is easy, since the functions (i.e. tasks) themselves remain the same.\\n\\nThis information is provided in the introduction, but (as a relative newcomer to CL) I failed to connect regularization and rehearsal/replay based methods with the aforementioned spaces. It was only upon reading Sect 2.5 that this intuition 'clicked' for me. Hence, I suggest making this observation as obvious and intuitive as possible.\\n\\nThe provided experiments seem reasonable and do a good job highlighting different facets of the paper. Two additional results would be appreciated:\\n\\n a) How well calibrated are FRCL-based classifiers?\\n b) How impactful is the hybrid representation (Sect 2.3) for test-time performance?\\n\\nGP approximations formulated solely in terms of weighted sums of (finitely many) basis functions typically suffer from degradation of predictive uncertainties. Since one often motivates use of GPs via a desire for well-calibrated uncertainty, (a) seem quite pertinent.\\n\\n\\nNitpicks, Spelling, & Grammar:\\n - Lots of run-on sentences; consider breaking these up.\\n - Introductory modifying phrases are missing commas.\\n - Consider citing other recent works that use NN basis functions in conjunction with Bayesian Linear Regression.\\n - Various missing or superfluous words resulting in some garbled sentences, e.g.:\\n - \\\"... our approach looks constraining.\\\"\\n - \\\"The ability to detect changes based on the above procedure comes from that in\\\"\\n - \\\"While the task boundary detection results for Omniglot are less strong, which may due to the smaller batch size (32 for Omniglot, \\u2265 100 for the MNIST-versions), resulting a noisier test result.\\\"\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper develops a continual learning method based on Gaussian Processes (GPs) applied in the way introduced by prior work as Deep Kernel Learning (DKL). The proposed method summarizes tasks as sparse GPs and use them as regularizers for the subsequent tasks in order to avoid catastrophic forgetting. Salleviating the instability resulting from the representation drift.\\n\\nEmploying inducing point training for task memorization is a novel and interesting idea, which could be useful for the continual learning community. The fact that this approach also captures the uncertainty of the replays contributes fairly to robustness. Lastly, performing knowledge transfer by inheriting the KL term of the ELBO is also interesting, however, its theoretical implications deserve a close look. It would be enlightening to analyze which true posterior the learned model then corresponds to. Would not it be a slightly more principled Bayesian approach (i.e. one that has stronger grounds at first principles) to perform the knowledge transfer to assign the posterior of one task as the prior of the other, alternatively to keeping the entire KL term intact which employs the q(u_i) as the surrogate for q(u_j), i.e. the way introduced by Nguyen et al., 2017?\\n\\nThe presentation clarity of the paper is open for improvement. For instance, the abstract is written in a sort of convoluted way. I do not get how the KL divergence suddenly kicks in and for what exact purpose. Is it variational inference or a hand-designed regularization term?\\n\\nI find the argumentation from Eq. 1 downwards until the end of Sec 2.1 on BNNs with stochastic weights and their relation to GPs a bit unnecessary complication. These are very well known facts. It would suffice to state briefly that the task learner is a vanilla DKL used within a vanilla sparse GP.\\n\\nFigure 1 is also not so descriptive. I do not get what the GP here is exactly doing. What is input to and for which output modelity does it find a mapping? What is the calligraphic L in the figure? Is it a neural net loss or an ELBO? \\n\\nIn general I could not grasp why it makes sense to treat the the output layer params of a neural net treated for continual learning? They will not be sufficient to encode a task anyway, as an expressive enough neural net will leave only a linear mapping to the final layer. What happens if the intermediate representations of the input observations require a domain shift as the tasks evolve?\\n\\nOverall, the presented ideas are fairly interesting and the experimental results are good enough for proof-of-concept, though not groundbreaking (behind the close relative VCL on MNIST and no comparison against VCL on Omniglot). Hence, this is a decent piece of work that lies somewhere around the borderline. My major concern is that the proposed method is conceptually not novel enough compared to Nguyen et al., 2017. My secondary concern is that the presentation is very much open to improvement in points hinted above.\\n\\n--\", \"post_rebuttal\": \"Thanks to authors for their tremendous effort to alleviate my concerns. The fact is that the conceptual novelty of the paper is too slim compared to VCL. 
As mentioned above, I even find the VCL approach more principled. I could have viewed the outcome of the paper as slightly bigger news for the community if there were something unexpectedly positive in the reported results. However, as it appears from the comment below, the authors propose a super close variant of VCL that combines a few well-known techniques in a rather straightforward way and achieves in the end a model that performs on par with it. Under these conditions, I have a hard time finding a reason to champion this paper for acceptance. That being said, I view this paper as a tight borderline case due to its technical depth, hence I will not object to a reject decision either.\"}"
]
} |
HJxRMlrtPH | Verification of Generative-Model-Based Visual Transformations | [
"Matthew Mirman",
"Timon Gehr",
"Martin Vechev"
] | Generative networks are promising models for specifying visual transformations. Unfortunately, certification of generative models is challenging as one needs to capture sufficient non-convexity so to produce precise bounds on the output. Existing verification methods either fail to scale to generative networks or do not capture enough non-convexity. In this work, we present a new verifier, called ApproxLine, that can certify non-trivial properties of generative networks. ApproxLine performs both deterministic and probabilistic abstract interpretation and captures infinite sets of outputs of generative networks. We show that ApproxLine can verify interesting interpolations in the network's latent space. | [
"robustness certification",
"formal verification",
"robustness analysis",
"latent space interpolations"
] | Reject | https://openreview.net/pdf?id=HJxRMlrtPH | https://openreview.net/forum?id=HJxRMlrtPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"TyBkcfDRc4",
"rkxG6PQ3jS",
"BkeIpRLOoS",
"SygWvCLdoS",
"HkeEJ68djH",
"SJxQ8hL_oB",
"Bkx7AjUOir",
"rkgJpdFCKr",
"rJen8KnatS",
"H1xHn7EwKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742839,
1573824441872,
1573576381851,
1573576280786,
1573575899839,
1573575754669,
1573575627107,
1571883191495,
1571830100047,
1571402668602
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2192/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2192/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2192/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2192/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2192/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2192/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2192/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2192/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2192/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The goal of verification of properties of generative models is very interesting and the contributions of this work seem to make some progress in this context. However, the current state of the paper (particularly, its presentation) makes it difficult to recommend its acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"I appreciate that the authors added the overview section, it benefits the paper. However, I stand by my opinion that the paper is still too hard to read for the target audience at ICLR. Specifically section 3 and 4 are still challenging to follow for readers without extensive knowledge in program verification with sentences like:\\n\\n\\u201cAny deterministic abstract domain can be directly interpreted as a probabilistic abstract domain, where the concretization of an element is given as the set of probability measures whose support is a subset of the deterministic concretization.\\u201d (section 4)\\n\\nOr \\n\\n\\u201cTherefore, our abstract transformers may, before applying a lifted abstract transformer, apply relaxation operators that turn an abstract element a into another abstract element a\\u2019 such that \\u2026.\\u201d (section 5)\\n\\n\\nI believe the paper contains several interesting ideas, however the presentation of the contributions simply makes it too hard to read in its current state.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank you for your review, and have fixed the typos that have been spotted. While Figure 1 is referred to on the first page in the original version, we agree that this is insufficient and have added an overview which we hope explains it and our methods better. As per your suggestion, we included these relevant definitions, and added pseudocode which will hopefully help explain our extensions to ExactLine.\"}",
"{\"title\": \"Response Part 1\", \"comment\": \"Thank you for the thorough and clear review. We will answer in two parts.\\n\\nQ1.1: It is not clear to me why the attribute consistency score, a key component in the paper, is a good measure of consistency in generative models. Notably, I miss motivation for why linear interpolations between encoded inputs should necessarily keep the attribute stable.\\n\\nThere have been a wealth of papers that propose autoencoder/generative model systems that claim that linear interpolations produce \\\"interpretable\\\" results [1-11]. It is in fact possible that not all defined attributes for a dataset should be preserved under \\\"interpretable\\\" interpolations. For example, interpolating between a person with only a beard and the same person with only a mustache would likely fail to satisfy the disjunctive attribute \\\"no beard or no mustache.\\\" However, for many attributes we do expect intuitively consistency along interpolations between examples with those attributes, such as \\\"has blond hair.\\\" One can examine the named attributes provided with CelebA to decide whether they should remain consistent among interpolations (we believe they should).\\n\\nInterestingly, this brings up the important point that deciding whether an attribute should correspond to a direction in the encoded representation for a dataset is likely a subjective question. We do not claim to answer this. Rather, our system attempts to verify this for a given dataset and given property.\\n\\nQ1.2: Especially, why is an attribute considered consistent if it is stable to linear interpolations in the latent space? \\n\\nWe clarify that we do not measure the consistency of an attribute in vacuum, but with respect to a particular autoencoder. An attribute which is consistent for one autoencoder might very well be inconsistent for another autoencoder, and this is not a judgement on the consistency of the attribute as much as it is a judgement on the consistency of the autoencoder.\\n\\n[1] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.\\n[2] Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, pp. 5040\\u20135048, 2016.\\n[3] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio.\\nGenerating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.\\n[4] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\\n[5] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2391\\u20132400. JMLR. org, 2017.\\n[6] David Ha and Douglas Eck. A neural representation of sketch drawings. arXiv preprint arXiv:1704.03477, 2017.\\n[7] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.\\n[8] Anders Boesen Lindbo Larsen, S\\u00f8ren Kaae S\\u00f8nderby, Hugo Larochelle, and Ole Winther. 
Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.\\n[9] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. In Advances in neural information processing systems, pp. 4790\\u20134798, 2016.\\n[10] Yongyi Lu, Yu-Wing Tai, and Chi-Keung Tang. Attribute-guided face generation using conditional cyclegan. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 282\\u2013297, 2018.\\n[11] Zhenliang He, Wangmeng Zuo, Meina Kan, Shiguang Shan, and Xilin Chen. Attgan: Facial attribute editing by only changing what you want. IEEE Transactions on Image Processing, 2019.\"}",
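To make the attribute-consistency setup in this exchange concrete: the sampling baseline referenced in the experiments amounts to something like the sketch below, whereas a certification method such as ApproxLine bounds the same quantity over every interpolation coefficient at once. The function names, the latent arguments, and the threshold-at-zero convention are illustrative assumptions, not the paper's API:

```python
import numpy as np

def sampled_consistency(decode, attr_logit, z1, z2, n_samples=64):
    # Monte Carlo estimate of how often the attribute detector fires along
    # the latent line segment between the two encodings z1 and z2.
    # Certification instead bounds this over *all* alpha in [0, 1].
    alphas = np.linspace(0.0, 1.0, n_samples)
    hits = [attr_logit(decode((1 - a) * z1 + a * z2)) > 0 for a in alphas]
    return float(np.mean(hits))
```

A high sampled score can still miss a narrow inconsistent region between sample points, which is the gap the reviewers' and authors' discussion of guaranteed bounds is about.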
"{\"title\": \"Response Part 2\", \"comment\": \"Q2.0: There is no discussion or experiments probing the dependency on the quality of the auxiliary classifier or the encoder/decoder model used.\\n\\nWe have in fact included experiments comparing different autoencoder models using a single auxiliary classifier (Figure 4). We can also certainly include experiments comparing different auxiliary classifiers. However, we note that the goal of our experimental section is to highlight the performance of our certification method and didactically demonstrate the variety of properties it is capable of certifying, and not to make claims about the full performance of models. Our hope is that our system might be used as yet another tool to evaluate these systems, rather than as a definitive answer to the superiority of any system at the moment.\\n\\nQ2.1: Did you perform any experiments on how the L1 score of the auxiliary classifier affects the consistency score? I would also like to see some quantitative numbers on the auxiliary classifier.\\n\\nDo you mean F1 Score? We note that our contribution is to the efficiency of a technique for certifying models and not a judgement on whether the models are actually quality models. However, we would be happy to run such tests and include scores for any (relu) models that you think would be of interest to the community.\\n\\nQ2.3: Similarly, I would like to see some numbers on the quality of the encoder/decodes. Simply inspecting the interpolations in figure 3) the reconstructions seem quite blurry, likely due to the relatively small models used. Is it prohibitively expensive to run the proposed method on bigger models (e.g. ResNet based encoder/decoders or Unet-style models)?\\n\\nIt is true that our system is somewhat size-limited, but more relevantly to the models mentioned, our system is currently limited to feed-forward connections. While this is a practical limitation and not a theoretical limitation (DiffAI and ERAN can both handle recurrent connections), the non-convexity of the domain would mean that such connections would introduce multiplicatively many overapproximations. Specifically, n convex components (line segments or boxes) from a previous layer added to m convex components from a downstream layer would produce n*m boxes. While it is an interesting future research objective to optimize this case, it is out of the scope of this work.\\n\\nHowever, we would be happy to run our system on any feed-forward relu models, within an order of magnitude of the size we have shown, for any papers that you would like (especially if code is available).\\n\\nQ2.4: I believe it would be more informative to show the actual confidence intervals in figure 2b) instead of only the width of the confidence intervals?\\n\\nThe purpose of Figure 2b is only to demonstrate the relative performance of the different certification techniques. The confidence interval width does not refer to an estimate on the correct output of these methods, but to the comparable property of the output of these methods themselves. 
We are happy to put the actual confidence intervals in the appendix, but for HZono, Sampling, and Exact, these would be almost entirely vacuous (as the confidence interval widths are so large), and for ApproxLine it would be indistinguishable from a line in the middle of the graph, which we worry would be a confusing visualization to have in the main body when comparing certification methods.\", \"q3\": \"I found it quite challenging to understand how the proposed method is implemented in practice - My suggestion is that the authors add a pseudo-code / algorithm to section 3 clarifying exactly how the bounds reported in the experimental section are calculated.\\n\\nWe have updated the paper to include pseudocode and have added an overview which explains our methods at a high level (we already updated the overview). We will continue to lighten the presentation of ApproxLine to reduce the dependency on prior abstract interpretation knowledge.\"}",
"{\"title\": \"Significance\", \"comment\": \"Thank you for your review. We will gladly incorporate your suggestions to reduce our reliance on terms and knowledge from abstract interpretation, and have included pseudocode so that our techniques can be understood in isolation. As probabilistic abstract interpretation has proved a powerful framework for probabilistic bound inference in the program analysis community, we felt that it would be an important contribution on its own to describe a usage for the machine learning community.\", \"on_the_significance_over_exactline\": \"providing fast and sound overapproximations to exact methods is known to be non-trivial, and certainly publishable (see [1,2,3,4,5,6,7,8]). While abstracting exact domains to less precise domains (sets of boxes) is a well known technique, the decision of whether, where, when, and what to approximate has no definitive answer. In the case of restrictions to lines, we found that for larger networks, to be significantly more performant than simply sampling (as can be seen in Figure 2 and Section 4.2), over-approximation of ExactLine was absolutely necessary. The design of our core novel contribution, a heuristic for the parts of ExactLine to approximate, turned out to be both a delicate and a critical problem. We tested quite a few seemingly obvious heuristics, such as clustering ExactLine edges using k-means (both in the original dimensional space and projected to single dimensions in various ways) before determining the necessity of ensuring connectedness between the nodes in each approximation cluster (which turned out to be the best heuristic, and the one we present). We will update the paper to mention these other options.\\n\\n[1] Singh Gagandeep, Gehr Timon, P\\u00fcschel Markus, Vechev Martin. Boosting Robustness Certification of Neural Networks. ICLR 2019\\n[2] Singh Gagandeep, Gehr Timon, Mirman Matthew, P\\u00fcschel Markus, Vechev Martin.\\nFast and Effective Robustness Certification. NeurIPS 2018\\n[3] Singh Gagandeep, Ganvir Rupanshu, P\\u00fcschel Markus, Vechev Martin. Beyond the Single Neuron Convex Barrier for Neural Network Certification. NeurIPS 2019\\n[4] Zhang, Huan, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions. NeurIPS 2018.\\n[5] Wang, Shiqi, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. In NeurIPS 2018.\\n[6] Salman, Hadi, Greg Yang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. A convex relaxation barrier to tight robust verification of neural networks. NeurIPS 2019.\\n[7] Wong, Eric and J. Z. Kolter. Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope. ICML 2018\\n[8] Mirman Matthew, Gehr Timon, and Vechev Martin. Differentiable Abstract Interpretation for Provably Robust Neural Networks. ICML 2018\"}",
"{\"title\": \"General Response\", \"comment\": \"We thank all reviewers for their detailed responses. A common thread among all reviewers is that our presentation was not pedagogical enough or self contained given the target audience.\\n\\nBy now various prior works published at ICLR [1], NeurIPS [3,4], and ICML [2] have heavily relied on abstract interpretation for verification and training and have achieved state of the art results.\\n\\nTo our knowledge, this is the first time probabilistic abstract interpretation has been introduced in this context, and we believe it can have similar impact as in classic verification. As this is the first time this method was introduced in this context, the presentation was not perfectly accessible, even though all definitions required to technically understand our work are present.\\n\\nBecause of this, we will work very hard on improving the presentation -- we have already updated the overview to provide for a more intuitive presentation. We will update the introduction and the paper to provide even more intuition and simplify our presentation by shifting some of the terms to appendix, explaining them less formally in the text, and reduce the overhead needed to understand key concepts.\\n\\n[1] Singh Gagandeep, Gehr Timon, P\\u00fcschel Markus, and Vechev Martin. Boosting Robustness Certification of Neural Networks. ICLR 2019\\n[2] Mirman Matthew, Gehr Timon, and Vechev Martin. Differentiable Abstract Interpretation for Provably Robust Neural Networks. ICML 2018\\n[3] Singh Gagandeep, Gehr Timon, Mirman Matthew, P\\u00fcschel Markus, and Vechev Martin.\\nFast and Effective Robustness Certification. NeurIPS 2018\\n[4] Singh Gagandeep, Ganvir Rupanshu, P\\u00fcschel Markus, and Vechev Martin. Beyond the Single Neuron Convex Barrier for Neural Network Certification. NeurIPS 2019\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes APPROXLINE, which is a sound approximation to EXACTLINE and is able to compute tight deterministic bounds on probabilities efficiently when the input is restricted on a line. It is a nonconvex relaxation, therefore it is able to capture the nonconvexity of neural networks. APPROXLINE is applied to generative models to verify the consistency of image attributes through linear interpolations on the latent variables.\\n\\nTo me, the most significant part is that the proposed approach has the potential to become a reliable metric for evaluating whether a generator disentangles latent representations, as long as a reliable attribute classifier can be trained. I would suggest the authors to emphasize this part in their future versions.\\n\\nHowever, the current version is quite difficult for me to understand, and I guess it is difficult for a broad range of audiences without background in program analysis. Somehow I think the same message can be conveyed better without abusing terms from abstract interpretation. I would also suggest the authors to reduce such abuse of notations. At least, pseudocode could be provided. \\n\\nAs a result, I cannot give a confident judgement about whether the contribution of this paper is significant given the existence of EXACTLINE. Still, I tend to accept this paper for its potential to become a good metric for generative models.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"summary:\\nThe paper proposes a method to efficiently verify that generative models are consistent with respect to some known (latent) attribute. The authors defines attribute consistency by 1) mapping pairs of input (x1, x2) with matching attribute to a latent space using an encoder n_E(x) and 2) measuring how correctly an auxiliary classifier will classify the known attribute using (decoded) linear interpolations between the two latent encodings. Importantly, the proposed method gives guaranteed bounds on this consistency score, as opposed to simply evaluating the classifier on a fixed set uniformly sampled points between x1 and x2. In experiments the authors use their method to test for attribute independence as well as consistency under left-right flipping of an image using two different autoencoder models (VAE and CycleAE) obtaining tighter bounds on the \\u2018attribute consistency\\u2019 score than competing methods. \\n\\nDecision & supporting arguments:\\nConceptually I found the paper very appealing, and it tackles an important problem in generative modelling. However I have some concerns with respect to the paper in its current state:\\n1) It is not clear to me why the attribute consistency score, a key component in the paper, is a good measure of consistency in generative models. Notably, I miss motivation for why linear interpolations between encoded inputs should necessarily keep the attribute stable.\\n2) Although I found the experiments interesting, I did not find the experimental section completely comprehensive. There is no discussion or experiments probing the dependency on the quality of the auxiliary classifier or the encoder/decoder model used. \\n3) I did not find the description of the proposed method to be reasonably self-contained. Especially section 3 which describes the proposed method is challenging to follow. The background material in section 2 reads very much like a set of definitions. Since ICLR has a quite broad audience, I think the paper should be written in a more pedagogical way, with for instance clarifying examples. An example of a sentence that is incredibly hard to parse is on page 4, describing domain lifting: \\u201cAny deterministic abstract domain can be directly interpreted as a probabilistic abstract domain, where the concretization of an element is given as the set of probability measures whose support is a subset of the set produced by the deterministic concretization.\\u201d I think making this paper more pedagogical requires major rewriting.\\n\\nDue to the above reasons I currently score the paper as a \\u2018weak reject\\u2019.\\n\\nFurther detailed questions/comments:\\nConsistency Score\", \"q1\": \"What is the motivation behind the definition of the consistency attribute score. Especially, why is an attribute considered consistent if it is stable to linear interpolations in the latent space?\", \"experiment_results\": \"Q2.1: Did you perform any experiments on how the L1 score of the auxiliary classifier affects the consistency score? 
I would also like to see some quantitative numbers on the auxiliary classifier.\\nQ2.2: Why is the L1 score used for training the classifier instead of a Bernoulli loss, which seems more natural for binary attributes?\\nQ2.3: Similarly, I would like to see some numbers on the quality of the encoder/decoders. Simply inspecting the interpolations in figure 3), the reconstructions seem quite blurry, likely due to the relatively small models used. Is it prohibitively expensive to run the proposed method on bigger models (e.g. ResNet based encoder/decoders or Unet-style models)?\\nQ2.4: I believe it would be more informative to show the actual confidence intervals in figure 2b) instead of only the width of the confidence intervals?\", \"readability\": \"\", \"q3\": \"I found it quite challenging to understand how the proposed method is implemented in practice - My suggestion is that the authors add a pseudo-code / algorithm to section 3 clarifying exactly how the bounds reported in the experimental section are calculated.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary\\n\\nThis work aims to provide warranties on the outputs of generative models by providing bounds on robustness\\u00a0(over adversarial attacks for instance, or other transformation in this case). The specific case of restricting the inputs to a line segment allows performing verification of robustness exactly (Exact-line approach, NeurIPS'19). The authors extend this work and apply it to verify robustness of some VAE and BEGAN like models.\", \"positive_aspects\": [\"Rigorous work, I did not spot much typos.\", \"First proofs given for generative models.\"], \"negative_aspects\": [\"My main concern about this work is that the presentation is not didactic enough.\", \"From the beginning, key concepts are not clearly defined, such as network certification, specification, and the \\\"verification problem\\\". In the definition of robustness, a reference to a \\\"safe set of outputs\\\", as in Gehr et al. would help the understanding.\", \"The introduction is too short and lacks context. Figure 1 is not referred\\u00a0in the text and is not understandable with notations that are not yet introduced.\", \"Then follows without transition two pages of background that are mostly definitions but without\\u00a0a proper motivation, these are difficult to process.\", \"The work, an extension of the Exact-line approach, only gives a 5 lines description which is not insufficient to understand the approach.\", \"Perhaps my assessment is too negative because I am unfamiliar with the certification literature, but since the work present applications in generative modeling, I think it should be understandable by readers from this background as well.\"], \"minor\": \"of of in caption of Fig 2\", \"last_line_of_page_5\": \"J -> j\\nI found the equivalence sign in the last equation of page 6 confusing, is it really supposed to be an equivalence here?\", \"page_6\": \"attribute detector... described below -> in appendix\\n\\nAfter seeing the authors changes in the manuscript the paper does look better, but similarly to Reviewer 2 I still judge it difficult to understand.\"}"
]
} |
Syg6fxrKDB | A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem | [
"Zhihao Xing",
"Shikui Tu"
] | We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem (TSP). We adopt a greedy algorithm framework to construct the optimal solution to TSP by adding the nodes successively. A graph neural network (GNN) is trained to capture the local and global graph structure and give the prior probability of selecting each vertex every step. The prior probability provides a heuristics for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, as it is the feedback information by fusing the prior with the scouting procedure. Experimental results on TSP up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods. | [
"Traveling Salesman Problem",
"Graph Neural Network",
"Monte Carlo Tree Search"
] | Reject | https://openreview.net/pdf?id=Syg6fxrKDB | https://openreview.net/forum?id=Syg6fxrKDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"oOs5o2CXG",
"BJlWnNN2jH",
"rkeV3rQ3sH",
"BJebXVm3oH",
"HyxFhSiYsH",
"SJenfBotor",
"H1eL7g195S",
"ryxI8_Q1qS",
"Syga-qC2FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742810,
1573827753472,
1573823916084,
1573823513440,
1573660080774,
1573659923521,
1572626461944,
1571924045547,
1571772932526
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2191/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2191/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2191/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2191/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2191/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2191/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2191/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2191/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper is a contribution to the recently emerging literature on learning\\nbased approaches to combinatorial optimization. \\nThe authors propose to pre-train a policy network to imitate SOTA solvers for \\nTSPs. \\nAt test time, this policy is then improved, in an alpha-go like manner, with \\nMCTS, using beam-search rollouts to estimate bootstrap values. \\n \\nThe main concerns raised by the reviewers is lack of novelty (the proposed \\nalgorithm is a straight forward application of graph NNs to MCTS) as well a the \\nexperimental results. \\nAlthough comparing well to other learning based methods, the algorithm is far \\naway from the performance of SOTA solvers. \\n \\nAlthough well written, the paper is below acceptance threshold. \\nThe methodological novelty is low. \\nThe reported results are an order of magnitude away from SOTA solvers, while previous work \\nhas already reported the general feasibility of learned solvers to TPSs. \\nFurthermore, the overall contribution is somewhat unclear as the policy relies \\non pre-training with solutions form existing solvers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for addressing my comments, see some replies below.\", \"q1\": \"Thank you for clarifying and updating the contributions in the introduction. I think they now reflect better the contributions of the paper.\", \"q3\": \"Thank you for the additional experiments, the results on generalization specifically seem very valuable.\", \"q5\": \"Thank you for updating the description, and changing it from +inf to -inf, which is drastically different. However I still find it very confusing that on one hand you say you set it to -inf, and then on the footnote you say that you actually set it to -5, -10, and -15, depending on the problem. This sounds to meet that in that case the initial q value is just another hyparameter and it should be indicated as such. I wonder if you could just set it to a fixed value of 0, and then tune the c_puct constant separately. By the way, what is the value of c_puct used? I don't think I could find it on the paper, you may consider adding it.\", \"q4\": \"The main reason I asked abut Q5 was the confusion caused by the \\\"bug\\\" pointed in Q5. If the Q values were initiated at +Inf, this would mean the policy would favor exploration over exploration a lot, which is why I thought that the pure MCTS baseline could do better. Where the hyperparameters, and changed tuned for the pure MCTS baseline, though? I would expect tuning c_puct and tuning the initial q value would be very important here.\", \"q6\": \"Thanks, I was asking this because if the Q values had been initialized to \\\"+Inf\\\" as indicated in the previous version then all actions would be explored first, so I thought in your case you may have never been under the circumstances of having unexplored actions at the root.\", \"from_the_reply\": \"\\\"In the \\u201cplay\\u201d phase, we pick action according to the biggest Q value at the root and mask out the actions that have not been explored because these nodes have very small prior or Q value.\\\" What do you mean by \\\"because these nodes have very small prior or Q value.\\\", are you saying that they will be implicitly masked out because any action that is explored will already have higher value that the prior initial value, or do you explicitly mask out the actions. If it is the first then you should probably indicate in the paper that care should be taken when choosing the initial value depending on the reward structure so this still holds true. If it is the second, you should probably include a sentence saying that in the Play section.\\n\\nFinally, I would strongly recommend checking out and referencing some of the papers indicated on the review, particularly those on classic combinatorial problems (SAT and MIPs). Since this work is very similar in spirit to those but adding MCTS on top, and the approaches that they used on those could probably be augmented by MCTS too.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"Thank you for seeing the importance of the problem and the value of showing the broad applicability. Please let us address your concerns.\", \"question_1\": \"\\u201cThe motivation for introducing a new model is not very clear, specially if these baselines are not compared.\\u201d\", \"answer_1\": \"In this paper, our intention is not to introduce a new GNN model and beats other well-known models, but to use GNN to extract features for TSP. Rather than using the basic GNN, we integrate edge information into the GNN and empirical results show that the incremental chance can improve the feature extraction ability of the GNN. We have modified the representation of the corresponding part in the article. In the feature, we will explore other GNN models as you mentioned to improve the feature extraction and generalization capabilities of our method.\", \"question_2\": \"Could other sorts of problems benefit from the GNN-MCTS?\", \"answer_2\": \"In this paper, we focus on the traveling salesman problem and we will extend the proposed MCTS to other combinatorial optimization problems in the future work.\", \"question_3\": \"Running time and generalization of the algorithm.\", \"answer_3\": \"Running time and generalization of the algorithm are also noted by Reviewer 2 and please see our response to Reviewer 2 on question 1 and question 2.\", \"question_4\": \"\\u201cIt would be good to have a pure MCTS baseline with not learned prior as an additional ablation (e.g. taking the SE-GNN prior out of the picture).\\u201d\", \"answer_4\": \"Thanks for your proposal. We have added the pure MCTS baseline in the new version.\", \"question_5\": \"Question about Q value initialization.\", \"answer_5\": \"We are so sorry for our description to confuse you. The Q value only needs to be initialized to a small value, i.e., - infinity. In our code, we initialize Q value to -5.0 (TSP20), -10.0 (TSP50) and -15.0 (TSP100).\", \"question_6\": \"\\u201cplanning budget is smaller than the number of nodes, and not all actions at the root are explored, the actions that have not been explored are masked out. \\u201d\", \"answer_6\": \"In the MCST procedure, only nodes with high value (Q+U) will be explored multiple times. So, the MCTS can allocate more exploration resources to the direction of the possible optimal solution. By using PUCT, the small prior child nodes are rarely visited, and the solution space can cut down in this way. In the \\u201cplay\\u201d phase, we pick action according to the biggest Q value at the root and mask out the actions that have not been explored because these nodes have very small prior or Q value.\", \"question_7\": \"typos\", \"answer_7\": \"We will correct typos in the new version.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for the detailed review and helpful suggestions. We address your concerns below:\", \"question_1\": \"Running time of the algorithm.\", \"answer_1\": [\"Thank you for this suggestion (which was also noted by reviewer 4), this is indeed something that was missing which we have supplemented the running time of our algorithm, Gurobi, and other learning-based methods. Running times are important but hard to compare: they can vary by two orders of magnitude as a result of implementation (Python vs C++) and hardware (CPU vs GPU). Our algorithm is slower than other learning-based algorithms due to the look-ahead search. Our code is written by Python and we note that the MCTS procedure can speed up by rewritten code to C++. We test our algorithm, Gurobi and learning-based methods on a machine with 32 virtual CPU systems (2 * Xeon(R) E5-2620)) and 8 * 2080ti. At each epoch, we test 32 instances in parallel and after 4 epochs, we report the time it takes to solve on each test instance. The results are as follows:\", \"TSP20\\t\\tTSP50\\t\\tTSP100\", \"Our \\t\\t\\t3.2s \\t\\t6.6s\\t\\t\\t31.4s\", \"Gurobi\\t\\t\\t0.017s \\t 0.2s\\t\\t\\t1.9s\", \"Dai et al\\t\\t0.007s\\t\\t0.018s\\t\\t0.043s\", \"Kool et al\\t\\t0.036s\\t\\t0.054s\\t\\t0.084s\", \"Although the cost time of our algorithm is not as fast as the traditional optimizer such as Gurobi, our algorithm has a good generalization ability than other learning-based algorithms.\"], \"question_2\": \"typos\", \"answer_2\": \"We will correct typos in the new version.\", \"question_3\": \"Misleading metric in Table 6.\", \"answer_3\": \"We agree with you and have removed Acc* metric.\", \"question_4\": \"\\u201cTable 3 title is confusing\\u201d.\", \"answer_4\": \"We change the title to \\u2018\\u2019Confidence interval on different confidence levels\"}",
"{\"title\": \"Response to Review #3 Part 2\", \"comment\": \"Question 8: \\u201c The related work section would be more instructive if it also gave some information about the limitations of the alternative deep learning approaches and how the proposed technique overcomes these.\\u201d\", \"answer_8\": \"Thank you for the suggestion. We reorganized the related work. We agree with you that all the approaches discussed in the second paragraph are \\\"greedy\\\" and suffer from the limitations mentioned in the introduction. What\\u2019s more, we have made more context and discussion of Nowak et al 2017 and Dai et al 2017 and you will see that in the new version.\", \"question_9\": \"typos\", \"answer_9\": \"We will correct typos in the new version.\", \"question_10\": \"The meaning of the \\\"improved probability \\\\hat{P} of selecting the next vertex\\\".\", \"answer_10\": \"It should be that \\u201cbased on the improved probability \\\\hat{P} generated by the GNN-MCTS\\u201d.\", \"question_11\": \"The value of Q is initialized to infinity.\", \"answer_11\": \"We are so sorry for our description to confuse you. The Q value only needs to be initialized to a small value, i.e., - infinity. In our code, we initialize Q value to -5.0 (TSP20), -10.0 (TSP50) and -15.0 (TSP100).\", \"question_12\": \"Suggestions for improvement\", \"answer_12\": \"We are so grateful for your suggestions and we will adjust the corresponding part in the new version.\"}",
"{\"title\": \"Response to Review #3 Part 1\", \"comment\": \"Thank you for your constructive and encouraging comments. we address your concerns below:\", \"question_1\": \"\\u201cFirst, the heuristic value function: this value function h(s) is defined in the appendix but should be motivated and described (in detail) in the text body.\\u201d\", \"answer_1\": \"We accept your suggestion, and adjust the value function\\u2019s position to the corresponding place in the article's body in the new version.\", \"question_2\": \"\\u201cAlso, though it is intuitively clear why a random policy is unlikely to result in a poor result, it is never compared against; how does the performance degrade if the heuristic value function is not used?\\u201d\", \"answer_2\": \"The results were included in Table 5, where SE-GNN+Tree_v denotes using the policy random and SE-GNN+Tree denotes using the value function. The description for Table 5 was not clear in the manuscript. We will revise the related part accordingly.\", \"question_3\": \"\\u201cFinally, the parameter 'beam width' used in the evaluation of the value function but is only set to 1 in all experiments. Some experiments should be included to show how increasing beam width impacts performance (or the authors should provide a reason these experiments were not run).\\u201d\", \"answer_3\": \"We conduct experiments to explore the effects of different widths on the performance of the algorithm. Since the beam width mainly affects the accuracy of the value function, we use the result of the value function as a measure and report the Gap as defined in Table 1. Specifically, we set beam width to 1, 5, 10, 20 and test performance of the value function on random instances including TSP20, TSP50, and TSP100. The experimental results are as follows: For TSP20, the Gap is 2.25%(1), 1.50%(5), 1.50%(10), 1.50%(20) ; For TSP50, the Gap is 5.32%(1), 3.64%(5), 3.38%(10), 3.22%(20); For TSP100, the Gap is 11.37%(1), 8.11%(5), 7.48%(10), 6.87%(20). We also count the time cost of the different settings of the beam width. The result of the time cost are as follows: For TSP20, 55ms(1), 265ms(5), 534ms(10), 1063ms(20); For TSP50, 147ms(1), 730ms(5), 1461ms(10), 2957ms(20); For TSP100, 323ms(1), 1639ms(5), 3338ms(10), 6820ms(20). The experimental results show that as the beam width increases, the performance of the value function will get better while the time cost will become larger. We need to make a trade-off between accuracy and time cost.\", \"question_4\": \"Finally, it seems as if there already exists heuristic methods (against which the paper compares performance); could these be used instead of this value function?\", \"answer_4\": \"We conduct experiments about replacing value function with different heuristic methods including nearest insertion, farthest insertion and random insertion. We report the Gap as defined in Table 1. The results are as follows: For nearest insertion, TSP20(4.53%), TSP50(14.95%), TSP100(21.79%); For farthest insertion, TSP20(4.40%), TSP50(14.52%), TSP100(21.76%); For random insertion, TSP20(4.99%), TSP50(13.95%), TSP100(22.03%). The results show that the heuristic methods mentioned in the article are not suitable for our algorithm. We think that the partial tour corresponding to the leaf node in the tree suffers the performance of the above heuristic methods. 
Designing an effective evaluation function is indeed a very important direction for further research.\", \"question_5\": \"\\u201cHow is the set of Neighbors defined?\\u201d\", \"answer_5\": \"Complete graph is constructed for TSP, so the set of neighbors of one node contains all nodes except itself. We will describe it with more details in the new version.\", \"question_6\": \"\\u201cRelatedly, it would be helpful if the authors could better motivate their additional term in Eq. (2);\\u201d \\u201cA comparison against a network implemented using the basic GNN model, defined in Eq. (1), should be included to compare performance.\\u201d\", \"answer_6\": \"We agree that in principle the neural network should have learned the distance information from the coordinates of the nodes. It is a simple thing for people, but our empirical results indicate that it is difficult for the neural network to learn the distance information.\\nIn the paper, we compare the basic GNN (no edge information) with SE-GNN in the following ways. Firstly, we compare the accuracy of two models on test data when training the neural network. And then we use the greedy policy that selecting node with biggest prior output by the network as next move to derive the tour. Table 6 reports the corresponding results and shows that adding distance information to the GNN can improve the performance of the model.\", \"question_7\": \"Alternative way to measure the distance between nodes.\", \"answers_7\": \"Like Gaussian kernel function, we use $e_{v,u}W^{t}_{3}$ to map Euclidian distance to high dimensional in Eq.(2).\"}",
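A minimal sketch of the edge-conditioned update discussed in answers 6 and 7 might look as follows; the dimensions, the single-step form, and the use of numpy in place of a deep-learning framework are illustrative assumptions, with the `edge` term standing in for the role of the $e_{v,u}W^{t}_{3}$ input in Eq. (2), not reproducing it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 16                       # 5 cities, 16-dim node embeddings (assumed)
coords = rng.random((n, 2))        # 2-D city coordinates
H = rng.standard_normal((n, d))    # current node embeddings
W1, W2, W3 = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
w_e = rng.standard_normal((1, d)) * 0.1   # lifts a scalar distance to d dims

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def segnn_step(H):
    """One message-passing step on the complete graph (all nodes are neighbours).

    Each node aggregates neighbour embeddings as in a basic GNN (W2 term) plus
    an explicit distance term (W3 term), mimicking the extra edge input.
    """
    H_new = np.empty_like(H)
    for v in range(n):
        nbrs = [u for u in range(n) if u != v]
        msg = sum(H[u] @ W2 for u in nbrs)
        edge = sum((dist[v, u] * w_e) @ W3 for u in nbrs)  # distance feature
        H_new[v] = np.maximum(0.0, H[v] @ W1 + msg + edge.ravel())  # ReLU
    return H_new

H = segnn_step(H)
print(H.shape)  # (5, 16)
```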
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"EDIT: After the authors response and update of the contributions to indicate that the main contribution of the paper is the application of GNNs and MCTS to the TSP (rather than the original claims that that the model architecture and search approach were novel contributions), I increased my score from Weak Reject to Weak Accept. However, given that the paper is now more focused on solving the TSP, and I am not an expert on that specifically I had to reduce my experience assessment, as while I am more confident now that the paper is technically correct, it is harder for me to judge if the paper should be accepted in terms of empirical strength since I am not familiar with TSP baselines.\\n\\nThe authors propose an MCTS-based learned approach using Graph Neural Networks to solve the traveling salesman problem (TSP) agents. \\n\\nThe authors write the TSP as an MDP where the state consists of the nodes visited by the agent and the last node visited by the agent, the action consists of selecting the next node to visit, and the reward at each step is the negative cost of the travel between the last node and the next node. \\n\\nThe learned part of the model uses a \\u201cstatic-edge graph neural network\\u201d (SE-GNN). This network allows to access the full graph context, including edge features, to make node predictions. This is listed as the first paper contribution. At train time, this network is trained to predict the probability of each unvisited node to be next in the optimal path. This is trained via supervised learning using optimal paths precomputed with state of the art TSP solvers.\\n\\nAt test time, they use MCTS with a variant of PUCT, where the pre-trained SE-GNN is used as the prior policy, and there is a selection strategy during search that balances the prior probability, and the Q values estimated by MCTS, using max based updates (e.g. during back up new Q estimates replace old estimates if and only if the are larger than the previous ones). This is listed as the second paper contribution. Authors show that the approach beats other learned solvers in the TSP problem by a large margin in terms of optimality gap.\\n\\nWhile I think the work is interesting, I am not sure that what the authors cite as main contributions of the paper are truly the main contributions. In my opinion the main contribution would be the state of the art performance at solving the TSP using learned methods. I cannot, however, recommend acceptance due to the following reasons.\\n\\nWith respect to the first claim \\u201cSE-GNN that has access to the full graph context and generates the prior probability of each vertex\\u201d, there are already many models that allow to condition on edge features, including InteractionNetworks, RelationNetworks and GraphNetworks. This paper has a good overview of this family of methods and most of them allow to access the full graph context too (https://arxiv.org/abs/1806.01261). 
Most of these models are very well known and are in principle more expressive than the one proposed in this paper, and allow generalization to different graph sizes, so the motivation for introducing a new model is not very clear, especially if these baselines are not compared.\\n\\nWith respect to the MCTS contribution at test time, it seems that the changes made to the algorithm compared to AlphaGo are very specific to the TSP, and there is not much discussion about which other sorts of problems may benefit from the same modifications, so it is hard to evaluate its value as a standalone contribution independent from the TSP.\", \"on_the_basis_of_state_of_the_art_performance_at_solving_the_tsp_using_learned_methods\": [\"The model requires access to a dataset with optimal solutions to train it, and I doubt it can solve the problems faster than Gurobi in terms of wall time. For this result to be more interesting, the authors should be able to show that the model can generalize to larger problems (where the combinatorial complexity may start making approaches like Gurobi struggle). However, it is not clear if the model can generalize to larger graphs.\", \"Beyond that, I am not an expert on TSP specifically, and I don\\u2019t know the TSP literature, so I cannot give a strong recommendation.\"], \"there_are_some_additional_papers_that_may_be_relevant_to_this_line_of_work\": [\"(MIP, NeurIPS 2019) Learning to branch in MIP problems using a similar technique: pretraining a GNN and using it to guide a solver at test time (no MCTS though) (https://arxiv.org/abs/1906.01629)\", \"(SAT, SAT Conference 2019) Learning to predict unsat cores (similar to the previous one but for SAT problems) (https://arxiv.org/abs/1903.04671)\", \"(Structural construction, ICML 2019) Building graphs by choosing actions over the edges of a graph, solving the full RL problem end to end, and also integrating MCTS with a learned prior both at train time and test time (together and independently) (http://proceedings.mlr.press/v97/bapst19a/bapst19a.pdf)\", \"Some additional typos/feedback:\", \"It would be good to have a pure MCTS baseline with no learned prior as an additional ablation (e.g. taking the SE-GNN prior out of the picture).\", \"In the \\u201cSelection Strategy\\u201d paragraph, the action is said to be picked as argmax(Q + U), where U is proportional to the prior for each action. However, Q is said to be initialized to infinity. This would mean that at the beginning of search all actions will be tied at infinite value, and my default assumption would be that in these conditions an action is chosen uniformly at random. I suspect what happens in this case is that the action with the highest prior is picked to break the tie at infinity; however, if this is the case it should be indicated in the math.\", \"In the \\u201cExpansion Strategy\\u201d paragraph, the Q values are said to be initialized to infinity. However, in the Back-Propagation strategy it is said they are updated using newQ = max(oldQ, value_rollout). If this were true the values would always remain infinite; I assume the max is not applied if the previous value was still infinite.\", \"In the \\u201cPlay\\u201d paragraph: The action is said to be picked according to the biggest Q value at the root. I assume that in cases where the planning budget is smaller than the number of nodes, and not all actions at the root are explored, the actions that have not been explored are masked out.\", \"Non-exhaustive list of typos: \\u201cRondom\\u201d \\u2014> \\u201cRandom\\u201d, \\u201cprovides a heuristics\\u201d \\u2014> \\u201cprovides a heuristic\\u201d, \\u201cstrcuture2vec\\u201d \\u2014> \\u201cstructure2vec\\u201d, weird line break at top of page 8.\"]}",
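The back-propagation issue this review raises (a max-based update is vacuous if Q starts at +infinity) is visible in a tiny sketch; the finite pessimistic q_init below mirrors the initialization the authors describe in their responses, and the node dictionaries are illustrative.

```python
def backup(path, rollout_value, q_init=-10.0):
    """Max-based back-propagation: keep the best rollout value seen per node.

    With q_init = +inf the max would never change anything, which is the
    review's point; a small finite q_init makes the update meaningful.
    """
    for node in path:
        node["N"] = node.get("N", 0) + 1
        node["Q"] = max(node.get("Q", q_init), rollout_value)

path = [{"Q": -10.0, "N": 0}, {"Q": -8.5, "N": 3}]
backup(path, rollout_value=-7.1)
print(path)  # both Q values rise to -7.1
```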
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes learning a TSP solver that incrementally constructs a tour by adding one city at a time to it using a graph neural network and MCTS. The problem is posed as a reinforcement learning problem, and the graph neural network parameters are trained to minimize the tour length on a training set of TSP instances. A graph neural network architecture called Static Edge Graph Neural Networks is introduced which takes into account the graph of all cities in a given problem instance as well as the partial tour constructed so far in an episode. The network predicts probabilities for the remaining cities to be selected as the next city in the tour, which is then used to compute a value function that guides MCTS. Results on synthetic TSP instances with 20, 50, and 100 cities show that the approach is able to achieve better objective values than prior learning-based approaches. Applying AlphaZero-like approaches to TSP is an interesting test case for understanding how well they can work on hard optimization problems.\", \"the_paper_has_several_drawbacks\": [\"The evaluation seems to be flawed as there is no mention of running time of the various algorithms being compared anywhere in the text. It\\u2019s not possible to make a fair comparison without controlling for running time. As an extreme example, even random search will eventually find the global optimum if given sufficient time. So the results are not very meaningful without the running times.\", \"Novelty is fairly low. The changes in SEGNN compared to previous works are incremental or not novel, and the overall idea is the same as AlphaGo/Zero. While I don\\u2019t think novelty is a strict requirement, if it is absent, then it should be compensated with strong empirical results, but the paper lacks that as well.\", \"A discussion on whether the approach can plausibly scale to much larger TSP instances is missing. First, there is the question of whether learning can succeed on much larger instances. Second, even if good policies can indeed be learned, can they provide competitive running times compared to the state-of-the-art TSP solvers? Graph net inference\\u2019s compute cost scales linearly with graph size (number of cities), and since multiple inference passes need to be performed per step (to pick the next city to add to the current partial tour), the overall cost scales quadratically. This is worse than the empirical scaling of solvers like LKH and POPMUSIC. One has to consider approaches with cost that scales roughly linearly to be able to compete with state-of-the-art solvers. It should be noted that TSP instances with <= 100 cities are really trivial for the best solvers, and outperforming them with a learning-based approach may not be plausible until much larger instances are considered (e.g., > 10K cities). The ML community needs to move away from evaluating on small instances if the long term goal is to beat state-of-the-art solvers with learning.\"], \"additional_comments\": [\"There are a lot of typos. 
A few that I caught: Tables 1 and 7 say \\u201cRondom\\u201d, \\u201capproximation ration\\u201d, \\u201cReLu\\u201d, \\u201cprovides a heuristics\\u201d, \\u201cSimilar to the implement\\u201d.\", \"Table 6 gives the highest test accuracy during training, but this could be misleading (e.g., there could be random spikes in test performance during training). A smoother metric should be used.\", \"Table 3 title is confusing.\"]}",
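The scaling argument in the preceding review (one GNN forward pass per construction step, each pass roughly linear in graph size, hence quadratic total cost) can be made explicit with a back-of-the-envelope sketch; the unit per-node cost is an arbitrary assumption.

```python
def tour_construction_cost(n_cities, per_node_cost=1.0):
    """Total inference cost of building a tour one city at a time.

    Each of the n steps runs one GNN forward pass whose cost grows linearly
    with graph size, so the total is roughly per_node_cost * n^2.
    """
    return sum(per_node_cost * n_cities for _ in range(n_cities))

for n in (100, 1_000, 10_000):
    print(n, tour_construction_cost(n))  # grows ~100x for every 10x in n
```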
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors introduce a new Monte Carlo Tree Search-based (MCTS) algorithm for computing approximate solutions to the Traveling Salesman Problem (TSP). Yet since the TSP is NP-complete, a learned heuristic is used to guide the search process. For this learned heuristic, the authors propose a Graph Neural Network-derived approach, in which an additional term is added to the network definition that explicitly adds the metric distance between neighboring nodes during each iteration. They perform favorably compared to other TSP approaches, demonstrating improved performance on relatively small TSP problems and quite well on larger problems out of reach for other deep learning strategies.\\n\\nI believe that the paper is built around some good ideas that tackle an interesting problem; the Traveling Salesman Problem and variants are popular and having learning-based approaches to replace heuristics is important. In particular, choosing to use an MCTS to tackle this problem feels like a natural approach, and using a GNN as a learning backend feels like a encourage better performance with fewer training samples. However, there are too many questions raised by decisions the authors have made to warrant acceptance in the current state; I would be willing to revise my score if some more detailed analysis of these points were included.\\n\\nFirst, the heuristic value function: this value function h(s) is defined in the appendix but should be motivated and described (in detail) in the text body. As written, this information is not included in the main body of the paper yet is critical for the implementation. Also, though it is intuitively clear why a random policy is unlikely to result in a poor result, it is never compared against; how does the performance degrade if the heuristic value function is not used? Finally, the parameter 'beam width' used in the evaluation of the value function but is only set to 1 in all experiments. Some experiments should be included to show how increasing beam width impacts performance (or the authors should provide a reason these experiments were not run). Finally, it seems as if there already exists heuristic methods (against which the paper compares performance); could these be used instead of this value function?\\n\\nAdditionally, how is the set of Neighbors defined? It is suggested in the text that it is not all nodes, but not using all nodes is a limiting assumption. Relatedly, it would be helpful if the authors could better motivate their additional term in Eq. (2); at the moment, though using the euclidian distance to weight the edges, it is unclear why this function is a better choice than something else, for instance a Gaussian kernel or a kernel with finite support. In addition, the authors motivate that the distance between nodes is very important for the performance of the system, yet the coordinates of each vertex are included as part of the input vector so that (in principle) the network could learn to use this information. A comparison against a network implemented using the basic GNN model, defined in Eq. 
(1), should be included to compare performance.\\n\\nIn summary, there are a few choices that would need to be better justified for me to really support acceptance. However, there are some quite interesting ideas underpinning this paper, and I hope to see it published.\", \"minor_comments\": [\"Overall, I like the structure of the paper. At the beginning of all major sections there is an overview of what the remainder of the section will contain. This helps readability. I also like the comparison between the proposed work and AlphaGo, which popularized using deep learning in combination with MCTS; this enhances the clarity of the paper.\", \"The related work section would be more instructive if it also gave some information about the limitations of the alternative deep learning approaches and how the proposed technique overcomes these. My assumption is that all approaches discussed in the second paragraph are \\\"greedy\\\" and suffer from the limitations mentioned in the introduction. However, I am not sufficiently familiar with the literature to be certain. A sentence or two mentioning this or relating that work to the proposed MCTS approach would be informative.\", \"The last paragraph of the Related Work section, discussing the work of Nowak et al 2017 and Dai et al 2017, introduces some numbers with no context: e.g., \\\"optimality gap of 2.7%\\\". It is unclear at this stage if this number is good or bad. Some more context and discussion of this work might be helpful for clarity, particularly since the Nowak work seems to be the only other technique using GNN.\", \"Some general proofreading for language should be performed, as there are occasionally typos or missing words throughout the paper. Some examples: \\\"compute the prior probability that indicates how likely each vertex [being->is] in the tour sequence\\\"; \\\"Similar to the [implement->implementation], in Silver...\\\"; \\\"[Rondom->Random]\\\" in tables.\", \"In Sec. 4.1, it is unclear what is meant by \\\"improved probability \\\\hat{P} of selecting the next vertex\\\".\", \"I believe there is an inconsistency in the description of the MCTS strategy. Though the action value is set to the 'max' during the Back-Propagation Strategy, the value of Q is initialized to infinity.\", \"Suggestions for improvement (no impact on review):\", \"Clarity: the language in the 3rd and 4th paragraphs of the introduction [begins with \\\"In this paper, ...\\\"] could be made clearer.\", \"The language \\\"part of the tour sequence\\\" is not quite clear, since, when the process is complete, all points will be in the tour. It should be made clearer that the algorithm is referring to a \\\"partial tour\\\" as opposed to the final tour. This clarity issue also appears later in Sec. 4.\", \"\\\"Similar to above-learned heuristic approaches...\\\" It might be clearer if you began the sentence with \\\"Yet,\\\" or \\\"However,\\\" so that it is more obvious to the reader that you intend to introduce a solution to this problem.\", \"Equation formatting: Please use '\\\\left(' and '\\\\right)' for putting parenthesis around taller symbols, like \\\\sum.\", \"When describing the MCTS procedure, I have seen the word \\\"rollouts\\\" used much more frequently than \\\"playouts\\\". Consider changing this language (though the meaning is clear).\"]}"
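For the alternative edge weightings this review mentions, a minimal comparison of raw Euclidean distance, a Gaussian kernel, and a finite-support kernel could look as follows; the bandwidth sigma and the cutoff are illustrative choices, not values from the paper.

```python
import numpy as np

def edge_features(dist, sigma=0.5, cutoff=1.0):
    raw = dist                                     # what the raw-distance term uses
    gaussian = np.exp(-dist**2 / (2 * sigma**2))   # smooth, emphasises near pairs
    finite = np.maximum(0.0, 1.0 - dist / cutoff)  # exactly zero beyond the cutoff
    return raw, gaussian, finite

print(edge_features(np.array([0.1, 0.5, 2.0])))
```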
]
} |
rJg3zxBYwH | Learning Likelihoods with Conditional Normalizing Flows | [
"Christina Winkler",
"Daniel Worrall",
"Emiel Hoogeboom",
"Max Welling"
] | Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change of variables formula. Such behavior is desirable in multivariate structured prediction tasks, where handcrafted per-pixel loss-based methods inadequately capture strong correlations between output dimensions. We present a study of conditional normalizing flows (CNFs), a class of NFs where the base density to output space mapping is conditioned on an input x, to model conditional densities p(y|x). CNFs are efficient in sampling and inference, they can be trained with a likelihood-based objective, and CNFs, being generative flows, do not suffer from mode collapse or training instabilities. We provide an effective method to train continuous CNFs for binary problems and in particular, we apply these CNFs to super-resolution and vessel segmentation tasks demonstrating competitive performance on standard benchmark datasets in terms of likelihood and conventional metrics. | [
"Likelihood learning",
"conditional normalizing flows",
"generative modelling",
"super-resolution",
"vessel segmentation"
] | Reject | https://openreview.net/pdf?id=rJg3zxBYwH | https://openreview.net/forum?id=rJg3zxBYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"97nrRkPW3R",
"HylwT6ZjoS",
"Skgiqp-ooB",
"rkgaw6Wijr",
"H1l055o6qB",
"SJxUsi1j9H",
"S1eCD0Vv9H",
"BklyFUW-9r",
"BJxfq8jg5S",
"Skx3ofIe5r",
"r1lfgEbJ9r",
"rJeN7RdTYr",
"B1ggTdXDKS",
"S1xLH2ZGYr",
"BkegD4JfKS",
"Hklnft3zdS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798742780,
1573752255301,
1573752210963,
1573752164744,
1572874901795,
1572694941541,
1572453989623,
1572046454895,
1572021898443,
1572000419770,
1571914730136,
1571814939912,
1571399864101,
1571064893540,
1571054680286,
1570060564353
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2189/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2189/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2189/Authors"
],
[
"~Lynton_Ardizzone1"
],
[
"ICLR.cc/2020/Conference/Paper2189/Authors"
],
[
"~Lynton_Ardizzone1"
],
[
"ICLR.cc/2020/Conference/Paper2189/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2189/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2189/Authors"
],
[
"~Lynton_Ardizzone1"
],
[
"ICLR.cc/2020/Conference/Paper2189/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2189/Authors"
],
[
"~Lynton_Ardizzone1"
],
[
"ICLR.cc/2020/Conference/Paper2189/Authors"
],
[
"~Joseph_Marino1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose a conditional normalizing flow approach to learning likelihoods. While reviewers appreciated the paper, in its present form it lacked a clear champion, and there were still some remaining concerns about novelty and clarity of presentation. The authors are encouraged to continue with this work and to account for reviewer comments in future revisions. Following up on the author response, a reviewer adds:\\n\\\"Thanks for your clarification. I still disagree that the conditional flow architecture proposed should be considered as a novel contribution. The reason why I mentioned [1] or [2] was not because they follow the exact setting (coupling based conditional flow model) discussed in this paper. I wanted to highlight that the idea to use conditioning variables as an input to the transforming network (whether it is an autoregressive density function, autoregressive transforming network, or coupling layers) is quite universal (as we all know many of the existing codes implementing flow-based models includes additional keyword arguments 'context' to model conditioning). I'm not sure why the fact that the proposed framework is conditioning on high-dimensional variables makes a contribution. There seems to be no particular challenge in doing that and novel design choices to circumvent that (i.e., we can just use existing architectures with minor modifications).\\n\\nI agree that the binary dequantization should be considered as a contribution, but as significant as to change my decision to accept. Thanks for the clarification on experiments. Considering this, I raise my rating to weak reject...\\n\\nAnother previous work I forgot to mention in the initial review is \\\"Structured output learning with the conditional generative flow\\\", Lu and Huang 2019, ICML 2019 invertible neural network workshop. This paper discusses the conditional flow based on a similar idea, and attacks high-dimensional structured output prediction. I think this should be cited in the paper.\\\"\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your comments\\n\\nIn Figure 2, we will include examples of the low resolution input, for an easier comparison of the results. In this figure in particular, we did not use a temperature for sampling of the baseline, since we are displaying the mode of the distribution. Since the distribution is factorized, sampling would add uncorrelated noise, meaning this comparison is actually skewed in favour of the baseline model.\\n\\nConcerning the DRIVE database, it indeed has very few images. Since we are training a likelihood based model, it is very easy for us to check for overfitting and early stop accordingly. In practice, we found that the standard data augmentation implied that we do not overfit. Furthermore, since the task is a per-pixel reconstruction task, the effective number of labels is much higher than the number of training images.\\n\\nIn the DRIVE experiments, we dropped the scaling modules since they did not appear to add much benefit to the results.\\n\\nWith regards to the exact architectures we have now placed network architecture tables in the appendix to clear up any confusion. Furthermore, we are adding a diagram of the conditional coupling layers in the appendix, which show the invertibility property clearly.\\n\\nWe have extended our related work on non-flow-based competing methods from the literature. and we have added some extra references on (conditional) normalizing flows as well.\\n\\nWe have already cleaned up the bibliography and any formatting issues, which we had at submission time. Thank you also for the sharp observation regarding the missing ^{-1} in Figure 1.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your comments.\\n\\nTo clarify your concerns, the design is covered in the section 3.1 Conditional modules. In particular the main invertible module is the conditional coupling layer. This takes in a conditioning input x and a latent variable z, which is transformed deterministically into a latent variable y. The transformation y <-> z conditioned on x is invertible. This transformation is similar to the coupling layer of RealNVP, but where every subnetwork in the layer takes and additional x as input. For clarity, we can add a diagram detailing this in the appendix.\\n\\nWith regards to your comments about mode collapse and training instability, it has been noted in the literature that normalizing flows do not suffer so much from mode collapse in the same way that GANs do, for instance. And on the topic of training instability, we did not notice any instabilities in the training of our flow models.\\n\\nThank you for your suggestions on follow up experiments. We agree that a text to image scenario would be interesting, since the conditioning argument in this case is structured.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your comments.\\n\\nWe would like to disagree on the topic on novelty of our contribution. In particular, reference [1] is not a flow. And indeed as you state, class-conditional flow models are not new in the literature. Where we would like to draw our distinction, however, is that we are in fact not considering class-conditional flow models. Instead our generative flow uses high-dimensional images as the conditioning argument. This warrants the use of a different kind of conditional coupling layer, unlike the ones in the papers that you cite. These papers also happen to be autoregressive, which makes sampling computationally expensive. \\n\\nAnother contribution we would like to highlight, which was also recognized by reviewers 2 and 3, is the link we draw between variational dequantization and existing variational inference methods. This new viewpoint allows us to derive a form of variational dequantization adapted to binary random variables in a consistent probabilistic framework. This innovation is important when it comes to finding a good lower bound on the likelihood. For instance, in the retinal vessel segmentation experiments, the log-likelihood scores for uniform dequantization versus our method is about 0.35 BPD versus 0.025 BPD (2.s.f). In an updated version of the paper, we are going to include this results to stress this improvement.\\n\\nWith respect to Table 2, the specific metrics that are important depend on what task you are willing to solve. In terms of fitting distributions, we outperform our baselines. As stated in our introduction, we want to learn distributions over the data, because they can be easily evaluated in terms of likelihood, they are very interpretable compared to other generative methods such as GANs, there is no mode dropping behavior, and there exists easy tests for overfitting.\"}",
"{\"title\": \"MAP vs MLE\", \"comment\": \"Hello,\\n\\nI apologize, I misunderstood what was meant by MAP (I took it to mean the MAP w.r.t. x of p(x|y), as would be learned by a standard feed-forward regression model).\\nIn this case, I agree with the distinction you make.\\n\\nSo as I understand it, the practical difference between the two training procedures (MAP/MLE) is whether L2 weight regularization is applied to the network weights or not.\"}",
"{\"title\": \"MAP vs MLE\", \"comment\": \"Hi Lynton,\\n\\nThank you for your reply. We agree with you that eq. 4 is the maximum likelihood.\", \"however_in_your_paper_you_say_that_you_minimize_the_loss_as_the_negative_logarithm_of\": \"p(theta | x, c) proportional to p(x | c, theta) p(theta). (Eq. 5 & 6 in your paper)\\n\\nThe paper refers to this as the \\\"posterior over model parameters\\\". Perhaps you could explain what you think is the difference between MLE and MAP? \\n\\nBest,\\nThe authors\"}",
"{\"title\": \"Response to prior work\", \"comment\": \"Hi,\\n\\nI am sorry in case our notation is confusing, we will change it in a future revision in this case. But I am confident we use the same loss, because we also use the standard maximum likelihood loss used to train normalizing flows. Put simply,\\n\\nL = 0.5 * z^2 - log(det(J))\\n\\nWith latent vector z and Jacobian J.\\nThe loss is only applied in z-space, not on the actual images (therefore not MAP).\\n\\nYour eq. 7 is the same as our eq. 4 (without the conditional split prior), and we directly optimize the negative logarithm of this, in the same way as you.\\n\\nSo I feel you have misunderstood our training procedure. (Perhaps it is also the case, because we use the term 'backwards' and 'fowards' with regards to the flow in the opposite way, calling X -> Z 'forward')\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presented the conditional normalizing flows (CNFs) as a new kind of likelihood-based learning objective. There are two keys in CNFs. One is the parametric mapping function f_{\\\\phi} and the other is the conditional prior. This paper assumed the conditional prior as Gaussian distribution of x. The mapping function is invertible with x as a parameter. The prior parameter and \\\\phi are updated by stochastic gradient descent. The latent variable z is then sampled from conditional prior. The output targe y is obtained with dependency on x and f_{\\\\phi}.\", \"strength\": \"1. This study adopted the flow-based model to estimate the conditional flow without using any generative model or adversarial method.\\n2. This method obtained the advanced results on DRIU dataset without the requirement of pretraining.\\n3. This paper proposed an useful solution to train continuous CNFs for binary problems.\", \"weakness\": \"1. It is required to address how to design the function f_{\\\\phi} which depends on x. In particular, the property invertibility should be clarified.\\n2. Why the issues of mode collapse or training instability in flow are considerable in the experiments?\\n3. It will be meaningful to evaluate this method by performing the tasks on text to image or label to image.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"Figure 2 is hard to get any idea of the sample quality would be good also to put the low resolution input to the algorithm . Also did you use a temperature sampling for the baseline ? otherwise the comparison is not fair.\", \"The Drive database is too small 20 training samples and 20 testing only? can the model be just overfitting?\", \"In the vessel implementation why do you drop the scaling modules?\", \"The conditioning for the vessel implementation on x is on two layers , would be great to put all architectures of the models in details , and to show both sampling and training paths\", \"It would be great to add the details of the skip connection used from the network processing x, and how ensure that the flow remains invertible.\", \"Overall this is a well written paper and a good addition to normalizing flows methods , some discussion of related works on conditional normalizing flows and more baselines with other competitive methods based on GANs for example would be helpful but not necessary.\", \"It would be great to add details of the architectures and on skip connections and how to ensure invertibility for this part in the model .\"], \"minor_comments\": [\"Formatting the bibliography is messed up and needs some cleaning , Figure 5 is also making formatting issues of the paper.\", \"Figure 1 for sampling it should be f^-1_{\\\\phi } and not f_{\\\\phi}\"]}",
"{\"title\": \"Response to prior work\", \"comment\": \"Thanks for acknowledging the differences in the architecture and dequantization.\\n\\nWe do have one disagreement though, which we would like to flag about the training objective. In Sec 3.2 of your paper, equations 5 and 6 denote the (unnormalized) log-posterior distribution of the weights of your flow given the data. Therefore, it seems from the paper that you are performing maximum a posteriori (MAP) model fitting of your weights. \\n\\nBest,\\nThe authors\"}",
"{\"title\": \"Response to prior work\", \"comment\": \"Hello,\\n\\nthank you for the response!\\n\\nWe agree with the differences in architecture and dequantization.\", \"concerning_the_training\": [\"We also train as a normalizing flow using maximum likelihood training (see our Sec. 3.2).\", \"The main difference we see, is that you use a conditional split prior in addition to the conditional Flow, whereas we only have a conditional flow, with a fixed unconditional prior.\", \"It may be informative performing an ablation, to demonstrate the improvement produced by the more flexible conditional prior.\", \"Best,\", \"Lynton\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes the conditional normalizing flow for structured prediction. The idea is to use conditioning variables as additional inputs to the flow parameter forming networks. The model was demonstrated on image superresolution and vessel segmentation.\\n\\nI find the contribution of this paper minimal. The idea of conditioning has extensively been used during recent years because it is the most natural thing to do (e.g., [1], [2] and numerous other papers). Their's nothing new about the flows used in this paper. The results in table 2 are not convincing; I see no benefit of using the proposed flow model for image super-resolution instead of the SOTA super-resolution methods. This also applies to other experiments.\\n\\n[1] van den Oord et al., Conditional Image Generation with PixelCNN Decoders, 2016.\\n[2] Papamakarios et al., Masked Autoregressive Flow for Density Estimation, 2017.\"}",
"{\"comment\": \"Hi Lynton,\\n\\nThanks for the reference. It looks like a very nice paper. It certainly is relevant and we shall of course include it in our related work.\\n\\nTo answer your question about differences and similarities, here is a brief list:\\n\\n- Architecture: To some degree our architectures are similar. We do indeed both use conditional affine coupling layers (cACL). Perhaps the largest difference is that you couple two cACLs together; whereas, we use a single cACL followed by a learnable, (nonconditional) 1x1 convolution (you refer to this as a soft channel permutation). Furthermore, we deploy a dequantization network (more below).\\n\\n- Dequantization: We introduce a new variational dequantization scheme, which builds on the work of Flow++, (Ho et al., 2019). This works for binary data spaces. Furthermore, we make a connection between variational dequantization and variational inference, which allows us to generalize the binning scheme of Flow++.\\n\\n- Per-pixel loss interpretation: We make explicit the disadvantages of previous per-pixel reconstruction losses, which forms the motivation for why we would wish to use a flow.\\n\\n- ML versus MAP: We do maximum likelihood, which is well known to be parameterization invariant instead of, say, MAP inference.\\n\\nWe hope this answers your questions. If you have more, do feel free to let us know.\\n\\nBest,\\nThe authors\", \"title\": \"Response to prior work\"}",
"{\"comment\": \"Hello,\\n\\nwe would like to point out our work, which was published on arxiv on July 4th this year:\\n\\\"Guided Image Generation with Conditional Invertible Neural Networks\\\" (https://arxiv.org/abs/1907.02392 ),\\nwhich is very similar in the approach, and is also applied to an inverse problem in computer vision (colorization).\\n\\nWe feel this should be included in the related work, and differentiated from your own contributions.\\n\\nBest regards,\\nLynton Ardizzone\\nVisual Learning Lab Heidelberg\", \"title\": \"Prior Work\"}",
"{\"comment\": \"Thank you for your comment and the reference. We will include it in our paper.\", \"title\": \"Thank you\"}",
"{\"comment\": \"I'm a strong advocate of moving beyond the limitations of parametric distributions by using normalizing flows, so I'm happy to see the nice set of experiments in this paper. Best of luck! You might consider including the following, somewhat obscure, reference:\\n\\nDeep Variational Inference Without Pixel-Wise Reconstruction, Agrawal & Dukkipati, 2016, (https://arxiv.org/abs/1611.05209)\\n\\nThey parameterize the conditional likelihood in a variational autoencoder using normalizing flows.\", \"title\": \"Great idea, nice set of experiments, and an additional reference\"}"
]
} |
S1ghzlHFPS | Informed Temporal Modeling via Logical Specification of Factorial LSTMs | [
"Hongyuan Mei",
"Guanghui Qin",
"Minjie Xu",
"Jason Eisner"
] | Consider a world in which events occur that involve various entities. Learning how to predict future events from patterns of past events becomes more difficult as we consider more types of events. Many of the patterns detected in the dataset by an ordinary LSTM will be spurious since the number of potential pairwise correlations, for example, grows quadratically with the number of events. We propose a type of factorial LSTM architecture where different blocks of LSTM cells are responsible for capturing different aspects of the world state. We use Datalog rules to specify how to derive the LSTM structure from a database of facts about the entities in the world. This is analogous to how a probabilistic relational model (Getoor & Taskar, 2007) specifies a recipe for deriving a graphical model structure from a database. In both cases, the goal is to obtain useful inductive biases by encoding informed independence assumptions into the model. We specifically consider the neural Hawkes process, which uses an LSTM to modulate the rate of instantaneous events in continuous time. In both synthetic and real-world domains, we show that we obtain better generalization by using appropriate factorial designs specified by simple Datalog programs. | [
"factorized LSTM",
"temporal point process",
"event streams",
"structural bias",
"Datalog"
] | Reject | https://openreview.net/pdf?id=S1ghzlHFPS | https://openreview.net/forum?id=S1ghzlHFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"D5j0__OBk",
"r1e_z2XijB",
"Skx3yBR8qH",
"r1lm7jx-9H",
"BJxQnlqy9S"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742751,
1573760015847,
1572426979764,
1572043547013,
1571950762519
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2188/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2188/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2188/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2188/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"While reviewers find this paper interesting, they raised number of concerns including the novelty, writing, experiments, references and clear mention of the benefit. Unfortunately, excellent questions and insightful comments left by reviewers are gone without authors\\u2019 answers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks and clarification.\", \"comment\": \"Thanks very much to the reviewers -- these are high-quality reviews. We appreciate the time you spent on the paper and the thoughtful feedback.\\n\\nOur presentation was written too quickly, and more careful writing would have answered some of your main concerns. In the model, we do have ways to handle parameter sharing (last sentences in sections 3.2, 3.3 and 3.4) and event type composition (section A.4). In the experiments, we tuned hyper-params (including # hidden nodes) for the baseline model (which is indeed a strong multivariate point process) as well as for our model, so the gain is from the design improvement. We will clarify these points in the next version.\\n\\nUnrelatedly, our technical approach has evolved and deepened considerably since we submitted this version. We don't think it would be appropriate to deeply change our ICLR submission at this late stage, so we'll just submit our next version to the next conference. We'll certainly take your comments into account as well -- thanks again!\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\n \\nThe paper proposed to use Datalog rules to specify the design of the LSTM architecture for event data in continuous time. The LSTM module will be used to model the rate of the events. By incorporating Datalog rules, the paper aims to encode informed inductive biases into the model.\", \"comments\": \"After reading the entire paper, I think the main idea of this paper is to use sparse and structured weight matrices (called structural zeros\\u201d in the paper) to substitute the dense weight matrices in the original LSTM, and to split the hidden state into blocks where each block refer to a different world\\u2019s state. \\n\\n\\n How to design the structured weight matrices and how to define the node blocks, it is informed by the Datalog rules. This design, however, will lead to a huge weight matrix and a very long hidden state once the types of events and number of entities grow. The proposed model will face a severe scalability issue. From this point of view, only \\u201cstructural zeros\\u201d weight matrices are not enough for an elegant model. \\n\\n \\n\\n How to smartly share parameters and how to control the number of parameters will be an interesting direction to explore. This submission touches on this a bit but not in a principled way. For example, in Eq. 18(a) and 18(b), the embedding vectors for the grounded predicate is a summation of the embedding vectors of the entities and the predicates. This final embedding is empirically validated or is based on some permutation invariant property? This needs more clarification or some references.\\n\\n \\n\\n \\n 2. The presentation needs to be polished. The current writing is not easy to follow. Especially for section 3. The architecture design needs to be clarified more. When I read this part, I felt a little difficult to map the Datalog rules to your model. \\n 3 Since you are learning the vector embeddings for event types and entities, what are the advantages of this compared to the marked point process model, where the event types and entities are treated as discrete markers and are a much more parsimonious model. The Datalog rules can also be defined on the marker level by introducing a structured dependency structure over the markers. What are the potential benefits of learning the embeddings? The explanation is missing in this paper. \\n\\n 4 Lack of references. The proposed neural-symbolic architecture shares some similarities to the following papers: \\n (1) End-to-End Differentiable Proving \\n\\n (2) DeepProbLog: Neural Probabilistic Logic Programming \\n\\n (3) Neural Logic Machines.\\n \\n What are your contributions and differences in terms of the neural-symbolic architecture design?\\n\\n \\n As for introducing logic rules to guide event predication, this is not a new topic. Here is a list of references:\\n\\n \\n (1) PEL-CNF: Probabilistic event logic conjunctive normal form for video interpretation. \\n (2) A general framework for recognizing complex events in Markov logic. \\n\\n (3) Learning Bayesian networks for clinical time series analysis. \\n\\n (4) Logical Hierarchical Hidden Markov Models for Modeling User Activities. \\n\\n (5) Slice Normalized Dynamic Markov Logic Networks. \\n 5. 
Lack of strong baselines. The paper only did a small-scale experiment study. It only compares a neural Hawkes process model. The experimental evaluation also needs stronger baselines. Specifically, methods that can handle continuous-time (e.g. marked point process) or probabilistic logic methods that can discretize time (as mentioned in the above references). The baselines are not quite strong and appear a bit arbitrary in the paper.\"}",
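The "structural zeros" this review describes, i.e. rule-derived sparsity patterns on the LSTM weight matrices, can be pictured with a small sketch that builds a block mask from declared block-to-block dependencies and applies it to a dense parameter matrix; the block names, sizes, and allowed edges are made up for illustration.

```python
import numpy as np

blocks = {"mind(alice)": 4, "body(alice)": 4, "mind(bob)": 4}
# rule-derived edges: which source block may influence which target block
allowed = {("mind(alice)", "mind(alice)"), ("body(alice)", "mind(alice)"),
           ("mind(bob)", "mind(bob)")}

names = list(blocks)
offsets = np.cumsum([0] + [blocks[b] for b in names])
D = offsets[-1]

mask = np.zeros((D, D))
for i, src in enumerate(names):
    for j, tgt in enumerate(names):
        if (src, tgt) in allowed:
            mask[offsets[j]:offsets[j + 1], offsets[i]:offsets[i + 1]] = 1.0

W_dense = np.random.default_rng(0).standard_normal((D, D))
W = W_dense * mask  # zeros everywhere the rules forbid influence
print(int(mask.sum()), "of", D * D, "weights are free parameters")  # 48 of 144
```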
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Review for Temporal Modeling via Logical Specification of Factorial\\nLSTMs\", \"this_paper_addresses_a_key_problem_in_machine_learning\": \"how to control\\nthe inductive bias of a model in an interpretable way. The paper\\ncontributes a Datalog-based language that allows a human to hand-code\\nstructural assumptions (typically based on domain knowledge) that are\\nautomatically translated into sparsity patterns in the parameter\\nmatrices of an ML model (in this case, a neural Hawkes process,\\nalthough the idea would [probably] generalize to other cases). The\\nlanguage plus structured-neural-Hawkes process is demonstrated on a\\nfew very small problems, with mixed results.\\n\\nThis paper is borderline. However, I tend to favor rejection because\\nwhile the ideas are very interesting (and potentially impactful),\\nvalidation of the claims is weak.\", \"contributions\": \"\", \"on_the_positive_side\": \"A Datalog interface to specifying structural zeros in parameter\\nmatrices is a good idea. The language is natural, and the high-level\\nmapping from structure and objects to low-level parameters seems\\nreasonable and potentially useful.\\n\\nThe method makes it easier to specify an inductive bias. This is a\\nstep in the right direction; but at its heart, this paper does not do\\nanything that couldn't have been done by hand - it only makes it\\neasier.\\n\\nThe method is potentially more interpretable than other attempts at\\ncontrolling inductive bias (for example, simple weight regularizaion),\\nbut see below for why this might be a red herring.\\n\\nThe paper is very nicely written. It's clear that a lot of attention\\nto detail went into writing it. Well done.\", \"weaknesses\": \"There are a few major points to criticize about this paper.\\n\\nFirst, there is no clear learning or prediction benefit. The results\", \"are_mixed\": \"while it appears that the SHP learns faster than the\\nunstructured HP, they appear to be asymptoting at the same point.\\nThis is perhaps to be expected, as the structural zeros introduced by\\nthe corresponding Datalog program effectively reduce the parameter\\ncount, but the shape of the learning curves is unchanged.\\n\\n(Also: please include error bars in Fig. 2(a1) and 2(a2))\\n\\nThe proper comparison would probably be to a low-rank parameter\\nmatrix, where the parameter count is similarly reduced, but in an\\nunstructured way. That would allow us to disentangle \\\"parameter count\\nreduction\\\" from \\\"inductive bias\\\", which is currently not done in the\\npaper. \\n\\nThe results in Figure 3c are mixed - it appears that SHP is only\\nbetter in 1/4 of the cases; in all other cases, the error bars seem to\\nindicate that there is no predictive power.\\n\\nFinally, I am concerned that the method may give a false sense of\\nexplainability to the model - why it is true that a highly structured,\\nsymbolic language is being used to craft an inductive bias, there is\\nno \\\"symbol grounding\\\". That is, there is no guarantee that the neural\\npart of the learning algorithm will use the parameters in the way the\\nhuman intended it to, because the parameters are ultimately\\ndisconnected from the symbols.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper builds an interesting connection between Datalog rules and temporal point processes. The novelty of the approach is to factorize the latent state of LSTM into different blocks that represent three major interactions between temporal events, including: dependency, affects, and updates. The design of the node blocks within the hidden state allows the modeling of fine-grain structure of a given event type. Based on the Datalog program and the logic rules, the intensity function of the temporal point process can be formulated from facts in a database. The problem of enabling a flexible family of intensity functions is one of the most important topics in point processes, and a paper advancing knowledge in this area is certainly welcome.\\n\\nThe paper is in general well written. Section 2.2 can be more clarified by explicitly comparing the concepts of blocks and entities using \\\"mind(Alice)\\\" and \\\"body(Alice)\\\" before introducing the hidden state h_mind(Alice)(t). It took me some time going back and forth to understand these examples here. With respect to the design of the Datalog interface, it looks like it covers the assertion involving two arguments. Since these arguments affect the partition of the number of node blocks, it would be more clear to illustrate how to design the node blocks as the number of arguments increases (say beyond 2 arguments). In fact, if we know the number of entities in each event type, say the number of node blocks to partition is 3 per hidden state in advance, we can leverage three separate small LSTMs each of which has the private hidden state with the same number of nodes as that in one of the node blocks. Then, we can determine the interactions among these separate small LSTMs based on the logic rules, so it will be helpful to elucidate the additional advantages of partitioning these node blocks in the same hidden state. The proposed technique mainly considers how to incorporate the block design into the LSTM hidden states as a general sequence model. What is the unique characteristics of Neural Hawkes Process have been particularly exploited from this perspective? It looks like it can be applied to other LSTM-based approach as long as the predictions are functions of the hidden states. For the synthetic experiments, it is obvious that single Neural Hawkes process has more challenges to fit the mixture of processes. It will be more convincing to compare with a mixture model, like \\\"A Dirichlet Mixture Model of Hawkes Processes for Event Sequence Clustering\\\" with the proposed approach, and the same as in the real experiments. Also, a standard test-of-goodness fit like QQ-plot will also be more useful to improve the experiments.\"}"
]
} |
HyljzgHtwS | Regularly varying representation for sentence embedding | [
"Hamid Jalalzai",
"Pierre Colombo",
"Chloé Clavel",
"Eric Gaussier",
"Giovanna Varni",
"Emmanuel Vignon",
"Anne Sabourin"
] | The dominant approaches to sentence representation in natural language rely on learning embeddings on massive corpora. The obtained embeddings have desirable properties such as compositionality and distance preservation (sentences with similar meanings have similar representations). In this paper, we develop a novel method for learning an embedding enjoying a dilation invariance property. We propose two algorithms: Orthrus, a classification algorithm, constrains the distribution of the embedded variable to be regularly varying, i.e., multivariate heavy-tailed, and uses Extreme Value Theory (EVT) to tackle the classification task on two separate regions: the tail and the bulk. Hydra, a text generation algorithm for dataset augmentation, leverages the invariance property of the embedding learnt by Orthrus to generate coherent sentences with a controllable attribute, e.g. positive or negative sentiment. Numerical experiments on synthetic and real text data demonstrate the relevance of the proposed framework. | [
"extreme value theory",
"classification",
"supvervised learning",
"data augmentation",
"representation learning"
] | Reject | https://openreview.net/pdf?id=HyljzgHtwS | https://openreview.net/forum?id=HyljzgHtwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"qYp1XTbCV8",
"rJg0xTz3iS",
"rJlc-qfhor",
"SklA3vGnjB",
"rye-_wpCFS",
"S1evAPSTtS",
"HyeJgFbiFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742720,
1573821685589,
1573820929818,
1573820341596,
1571899240780,
1571801039231,
1571653863225
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2186/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2186/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2186/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2186/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2186/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2186/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Three reviewers recommend rejection. After a good rebuttal, the first reviewer is more positive about the paper yet still feels the paper is not ready for publication. The authors are encouraged to strengthen their work and resubmit to a future venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer to AnonReviewer3\", \"comment\": \"We thank AnonReviewer3 for articles [1,2,3]. Though our framework is different, connections with these refs are worthy of attention and we now cite these papers in the introduction.\\n\\n\\u2022 \\u201cIn order to show that the EVT indeed helps empirically in the way that an adversarial classifier enforces the inf-norm of vectors follow the Generalised Extreme Value (GEV) [...] supporting evidence.\\u201d\\n\\u279c Selecting the logistic distribution is not the central point since we apply a standardization whose purpose is to place ourselves in the framework where the considered tail index is equal to 1. \\n\\u201cFrom the perspective of learning representations with structured priors, there exists an interesting work on decomposing vector representations into lengths [...]. It would be interesting to see if the proposed method is indeed better than the way that structured priors are enforced in [2].\\u201d\\n\\u279c We thank AnonReviewer3 for mentioning article [2]. In the newest version we mention that future work will implement a comparison with this work which we have not done yet due to time constraints. \\n\\n\\u2022 \\u201cLinguistically, given the distributional hypothesis, the length of learnt vectors tends to be highly correlated with the frequency information of available concepts and the direction of them matters more. [...] would be applicable in fine-grained sentiment analysis, such as Stanford Sentiment Treebank [3].\\u201d\\n\\u279c We make no claim that the proposed method would be applicable in fine-grained sentiment analysis such as [3]. We would like to mention that the extreme values in [3] correspond to annotation labels with sharp and strong sentiment. Such extremes are not the same as the extreme embeddings that we thoroughly study in this paper.\\n\\n\\u2022 \\u201cThe construction of the two datasets seems to be very arbitrary given that there exists a large number of sentiment analysis datasets and many with lots of samples, I am not sure that the results on the chosen constructed two datasets are sufficient enough to support the claim.\\u201d\\n\\u279c The datasets are commonly used datasets for binary classification of text data (see [5, 6]). What are the datasets that AnonReviewer3 seems to have in mind?\\n\\n\\u2022 \\u201cThe size of the datasets is too small. Given that, the marginal improvement against the NN baseline could be a result of a specific initialisation, which doesn't generalise to other random initialisations.\\u201d\\n\\u279c The mentioned datasets are commonly used datasets for binary classification of text data. We have tried different initializations and obtained similar results. We do not report each initialization in this paper. Concerning the sizes of datasets, we work with a limited amount of GPU time and we cannot go upscale in our experiments as R2 suggests. We want to raise that we have more than 200 extreme samples thus the improvement on the extreme samples is far from marginal. Note that the embeddings are not learnt from scratch: they are built to perform a classification task on top of (frozen) BERT embeddings, please refer to the GLUE benchmark (https://gluebenchmark.com/) for similar system.\\n\\n\\u2022 \\u201cThe dimension of vector representations is also too small. [...] it is in a very high dimensional space.\\u201d\\n\\u279c In the original BERT paper, the size of the embedding used is 768. 
The size of the learnt embedding in the present work is a hyperparameter which varies from 10 to 768. the value 50 was automatically chosen by cross-validation. We now mention this fact in the additional experiment settings for real data section in the Appendix. \\n\\n\\u2022 \\u201cThere are many straightforward distributions [...] a prior on the norm of high dimensional vectors.\\u201d\\n\\u279c As mentioned earlier in our response, In this paper, we do not use an explicit prior on the radius; Instead our target (=prior) is a multivariate extreme value distribution called the Logistic distribution in the EVT setting. It happens that the radial component of this distribution is heavy tailed but this constraint does not need to appear (and does not) in our algorithm. \\n\\n---\\n[1] Siffer, Alban, et al. \\\"Anomaly detection in streams with extreme value theory.\\\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017.\\n[2] Guu, Kelvin, et al. \\\"Generating sentences by editing prototypes.\\\" Transactions of the Association for Computational Linguistics 6 (2018): 437-450.\\n[3] Socher, Richard, et al. \\\"Recursive deep models for semantic compositionality over a sentiment treebank.\\\" Proceedings of the 2013 conference on empirical methods in natural language processing. 2013.\\n[4] Hamid Jalalzai, Stephan Cl\\u00e9mencon, and Anne Sabourin. On binary classification in extreme regions. In Advances in Neural Information Processing Systems, pp. 3092\\u20133100, 2018.\\n[5] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.\\n[6] R Stewart, S Ermon, Label-free supervision of neural networks with physics and domain knowledge, In Thirty-First AAAI Conference on Artificial Intelligence, 2017.\"}",
"{\"title\": \"Answer to AnonReviewer1\", \"comment\": \"\\u2022 \\u201cThe algorithm takes a sentence embedding from BERT as input. BERT produces contextualized word representations, not sentence embeddings, so I don't know what the authors did here (the intro also claims that ELMo and GPT learn sentence embeddings, which is also confusing).\\u201d\\n\\u279c AnonReviewer1 is right. BERT produces contextualized word representation which can be applied to sentences (refer to the original paper). In our implementation we use the [CLS] token as an embedding of the full sentence as done in the original paper on the glue benchmark for classification task. We applied BERT on the sentences of the studied dataset as input for the algorithms we detail.\\n\\n\\u2022 \\u201cThe paper argues that with empirical risk minimization, \\\"nothing guarantees that such classifiers perform satisfactorily on the tails of the explanatory variables\\\". However, I could not follow what such guarantees the proposed method offers, if any.\\u201d\\n\\u279c Paper [1] precisely details why the tails deserve a specific treatment. The mentioned paper also provides theoretical guarantees (theorem 2). As advised by R2, we will explicitly state the results from [1] that are relevant for the present paper. \\n\\n\\u2022 \\u201cExperiment 4.1 is impossible to follow without reading the appendix. This section should be expanded, or completely moved to the appendix.\\\"\\n\\u279c Experiment 4.1 has been moved to the Appendix.\\n The authors claim without evidence that a baseline of a neural network trained on top of the \\\"BERT embedding\\\" is state-of-the-art for sentiment classification. While there isn't enough information to know what was done, most state-of-the-art approaches involve fine-tuning BERT.\\u201d\\n\\n\\u2022 \\u201cNo comparisons are made with any other work, despite the method attempting a very general and well-studied problem of text classification.\\u201d\\n\\u279c Our aim is to show that learning a regularly varying representation on top of a baseline representation (such as BERT) improves the classification performance of a standard classifier (such as MLP) compared to applying the same standard classification algorithm (MLP) to the baseline representation. Table 1 from the new version of the paper precisely gathers the experimental results with regards to this claim. \\nOur choice of BERT+MLP as baseline was merely guided by the state of the art approaches when we started working on this project. \\n\\n\\u2022 \\u201c The submission claims that \\\"Applying a dilation is equivalent to assess the generalization of classifiers outside the envelope of both training and testing samples.\\\". It isn't obvious to me that dilation captures the variation in embeddings you'd get from out-of-domain training samples.\\u201d\\n\\u279c This sentence has been removed from the newest version for the sake of clarity.\\n\\n\\u2022 \\u201cThe authors compare their data augmentation results to \\\"backtranslation\\\". The citation for the method appears to be a class project, and in fact does round-trip translation for paraphrasing, and not back translation.\\u201d\\n\\u279c The author of the [2] used the word \\u201cbacktranslation\\u201d along their article. 
We will modify our paper and replace \\u201cbacktranslation\\u201d with \\u201cround-trip translation\\u201d.\\n\\n\\u2022 \\u201cNo attempt is made to show if the data augmentation approach actually improves end task performance.\\u201d\\n\\u279c Please refer to the last experiment: we observe that Hydra outperforms all other methods in terms of distinct 1 and distinct 2. Table 1b shows that improvement in F1 score induced by dataset augmentation by Hydra beats all other methods and is only equaled by EDA.\\n\\n---\\n[1] Hamid Jalalzai, Stephan Cl\\u00e9mencon, and Anne Sabourin. On binary classification in extreme regions. In Advances in Neural Information Processing Systems, pp. 3092\\u20133100, 2018.\\n[2] Shleifer,S. \\u201cLow resource text classification with ulmfit and backtranslation\\u201d, arXiv preprint arXiv:1903.09244, 2019.\"}",
"{\"title\": \"Answer to AnonReviewer2\", \"comment\": \"We thank AnonReviewer2 for spotting the typo.\\n\\n\\u2022 \\u201cIn particular, why is dilation invariance even a good idea?\\u201d\\n\\u279c The dilation invariance is a label invariance of the embeddings. Such invariance, provided by the approach detailed in the paper, allows generating new text data based on labeled inputs while preserving the same label. To the best of our knowledge, no other embedding provides a framework to generate new text data with a label preserving approach.\\n \\n\\u2022 \\u201cWhy not instead present an explicit theorem providing some statistical guarantees for the proposed methodology in Sec 3.1 based on the constant-along-rays result (which would be nice to have regardless), and then follow up the theorem with background math from 2.1-2.2 which is necessary to understand the proof?\\u201d\\n\\u279c Following your suggestion we have added an explicit statement of the statistical guarantees that can be obtained with such constant-along-rays embeddings, citing theorem 1 of [1] in Section 2.1.\\n\\n\\u2022 \\u201cWhy did the authors never evaluate the overall sentiment prediction performance of Orthrus + Hydra used together vs other classifiers + data augmentation strategies?\\u201d\\n\\u279c We point out that the performance of Orthrus is compared to a MLP classifier on similar input (see Table 1 of the new version of the paper). Hydra relies on the regularly varying representation provided by Orthrus. Therefore, the performance of Hydra + Orthrus is compared to state-of-the-art methods in terms of data generation (see Table 2a and Table 2b).\\n\\n\\u2022 \\u201cIf the goal of dilation invariance is to help the classifiers better generalize to out-of-distribution test sentences, then why not verify this happened, eg. by training on Yelp and testing on Amazon?\\u201d\\n\\u279c Although there are works addressing learning a task on a given dataset and testing on a different dataset which relates to transfer learning, we make no claim that the regularly varying embedding allows to do this. The added value of our approach is that it allows: \\n1. Generalization for points which lie out of the envelope of the training inputs,\\n2. A label preserving (text) data augmentation.\\n\\n\\u2022 \\u201cThe authors should better justify the assumption of Jalalzai et al, and why this is appropriate for the MLP classifier used later in the paper.\\u201d\\n\\u279c Following your suggestion we now provide some intuition about the regular variation assumption of Jalalzai et al. in Section 1 and Section 2. In Section 3, we emphasize that in the present work we do not assume that the original representation (from BERT) satisfies this assumption. Instead we construct a new representation that does, via the GAN machinery with a target with the desired property. Also we have added a few sentences concerning the advantage of using this representation for text document classification, namely the probability ratio between the two classes, conditionally on the input x, solely depend on the angle theta(x) above large radial thresholds, which allows to classify the most extreme test points using information from a given fraction of the training set corresponding to inputs with largest norm. 
\\n\\n\\u2022 \\u201cThis statement needs to be clarified and have citation: \\u201cSuch classifier whose output solely depends on the angle \\u0398(x) of the considered input, with provable guarantees concerning the classification risk in out-of-sample regions scaling as the square root of the number of extreme points used at the training step\\u201d\\n\\u279c The information that helps to understand this statement is provided in Theorem 2 from [1]. But, indeed, it is not clear as it was explained and we clarified the paper accordingly, by adding the aforementioned theorem in Section 2.2.\\n \\n\\u2022 \\u201cThe authors should explain Equation (1) in English rather than referring to it so early on the paper (pg 1: \\\"satisfying Equation 1\\\"). I had no idea what this was supposed to mean as a first time reader.\\u201d\\n\\u279c We have updated the paper accordingly and explain Equation (1) in the introduction as a homogeneity property above large radial thresholds. \\n\\n\\u2022 \\u201cA figure demonstrating an example of the phenomenon explained in Sec 2.2 would be helpful to aid reader's intuition.\\u201d\\n\\u279c We have added a figure (Figure 1) to illustrate the angular classifier in Sec 2.2.\\n\\n---\\n[1] Hamid Jalalzai, Stephan Cl\\u00e9mencon, and Anne Sabourin. On binary classification in extreme regions. In Advances in Neural Information Processing Systems, pp. 3092\\u20133100, 2018.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes a new embedding method for sentences that aims to preserve dilation invariance. Much of the methodology is justified by results for extremal point classification under particular assumptions, and then the authors try and encourage these assumptions to be met via penalty terms introduced in their embeddings/augmentation models. However, while the proposed methodology seems interesting/novel, it remains conceptually unclear why it should be superior to standard text classification methods (ie. exactly what assumptions are being exploited to improve performance and how exactly do those assumptions help should be made more explicit). In particular, why is dilation invariance even a good idea?\", \"Overall, I find the paper a bit mathematically dense in Secs 2.1-2.2, which would not be a bad thing if the math were necessary to justify why the proposed methodology works well, but it in this case seems mainly presented as background material (as if it were a prerequisite to understand the method itself, which it is certainly not).\", \"Why not instead present an explicit theorem providing some statistical guarantees for the proposed methodology in Sec 3.1 based on the constant-along-rays result (which would be nice to have regardless), and then follow up the theorem with background math from 2.1-2.2 which is necessary to understand the proof?\", \"As it is currently written the paper is a bit too dense in terminology, and opaque names like Hydra and Orthrus used to describe straightforward concepts that are essentially a neural classifier (of a particular form) and a seq2seq-based data augmentation procedure (which would be good to describe in language more familiar to the ML audience). In particular, the goals of Hydra and Orthrus should first be intuitively described before delving into their various components.\", \"Why did the authors never evaluate the overall sentiment prediction performance of Orthrus + Hydra used together vs other classifiers + data augmentation strategies?\", \"If the goal of dilation invariance is to help the classifiers better generalize to out-of-distribution test sentences, then why not verify this happened, eg. by training on Yelp and testing on Amazon?\", \"The authors should better justify the assumption of Jalalzai et al, and why this is appropriate for the MLP classifier used later in the paper.\", \"This statement needs to be clarified and have citation: \\\"Such classifier whose output solely depends on the angle \\u0398(x) of the considered input, with provable guarantees concerning the classification risk in out-of-sample regions scaling as the square root of the number of extreme points used at the training step\\\"\", \"The authors should explain Equation (1) in English rather than referring to it so early on the paper (pg 1: \\\"satisfying Equation 1\\\"). 
I had no idea what this was supposed to mean as a first time reader.\", \"The way figure 4 is presented is a bit opaque and took me a while to understand (have to look closely at Fig 4a to see the columns are not monochromatic).\", \"\\\"We also compare Hydra to a Vanilla Sequence to Sequence to demonstrate the validity of our approach\\\" How \\\"Vanilla Sequence to Sequence\\\" (word 'model' is missing) is used for dataset augmentation needs to be clarified here.\", \"A figure demonstrating an example of the phenomenon explained in Sec 2.2 would be helpful to aid reader's intuition.\", \"Typo: \\\"eugmentation\\\"\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper explores learning dilation-invariant sentence representations, with a goal of improving downstream task performance on rare events. A pre-trained embedding is encoded as a latent variable Z, which is constrained to be multi-variate heavy tailed. Separate classifiers are trained on the head and tail of the distribution. Similarly, separate sentence generators are trained on the head and tail of the distribution, in order to allow data augmentation (creating diversity in the outputs by scaling the representation). While the high level motivation and algorithm is interesting, I found the paper very hard to follow, and the experiments are weak.\", \"i_have_quite_a_few_concerns\": [\"The algorithm takes a sentence embedding from BERT as input. BERT produces contextualized word representations, not sentence embeddings, so I don't know what the authors did here (the intro also claims that ELMo and GPT learn sentence embeddings, which is also confusing).\", \"The paper argues that with empirical risk minimization, \\\"nothing guarantees that such classifiers perform satisfactorily\", \"on the tails of the explanatory variables\\\". However, I could not follow what such guarantees the proposed method offers, if any.\", \"Experiment 4.1 is impossible to follow without reading the appendix. This section should be expanded, or completely moved to the appendix.\", \"The authors claim without evidence that a baseline of a neural network trained on top of the \\\"BERT embedding\\\" is state-of-the-art for sentiment classification. While there isn't enough information to know what was done, most state-of-the-art approaches involve fine-tuning BERT.\", \"No comparisons are made with any other work, despite the method attempting a very general and well studied problem of text classification.\", \"The submission claims that \\\"Applying a dilation is equivalent to assess the generalization of classifiers outside\", \"the envelope of both training and testing samples.\\\". It isn't obvious to me that dilation captures the variation in embeddings you'd get from out-of-domain training samples.\", \"The authors compare their data augmentation results to \\\"backtranslation\\\". The citation for the method appears to be a class project, and in fact does round-trip translation for paraphrasing, and not back translation.\", \"No attempt is made to show if the data augmentation approach actually improves end task performance.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presented two methods for augmenting sentiment classification from the perspective of applying the Extreme Value Theory (EVT), including:\\n\\n1) A classification algorithm which has an adversarial classifier to enforce the intermediate representations of a neural network to be similar to one EVT distribution, logistic distribution;\\n2) An encoder-decoder model that is able to generate grammatically coherent sentences with the same sentiment as the given input sentence.\", \"questions\": \"--\\n1) the Fisher\\u2013Tippett\\u2013Gnedenko theorem states that it is possible that the maximum value of a set of iid samples converges to one of three plausible distributions, and the chosen logistic distribution falls into the Weibull distribution category. I have a couple concerns about this choice:\\n\\n1.1) In order to show that the EVT indeed helps empirically in the way that an adversarial classifier enforces the inf-norm of vectors follow the Generalised Extreme Value (GEV) distribution, at least three plausible distributions from each form of the GEV distribution needs to be checked. The logistic distribution is interesting, but the marginal improvement gained by enforcing the lengths of the produced vectors to follow the logistic distribution could be a result of hyper-param tuning, which shouldn't be a piece of supporting evidence.\\n\\n1.2) From the perspective of applying the EVT, recent successful work from the best of my knowledge is on Anomaly Detection [1], where the EVT enables the system to learn from samples in only one class and also adjust the threshold for detecting the abnormal behaviour of samples. It is also theoretically grounded as the error variable of a logistic regression follows a Gumbel distribution which is one form of the GEV distribution, therefore, applying EVT for binary classification case makes sense.\\n\\n1.3) From the perspective of learning representations with structured priors, there exists an interesting work on decomposing vector representations into lengths and directions and enforcing lengths to follow a uniform distribution and directions a Von Mises\\u2013Fisher (vMF) distribution as in [2]. It would be interesting to see if the proposed method is indeed better than the way that structured priors are enforced in [2].\\n\\n1.4) Linguistically, given the distributional hypothesis, the length of learnt vectors tends to be highly correlated with the frequency information of available concepts and the direction of them matters more. The argument is also presented by the paper. However, in sentiment analysis, the length could contain the information about how strong the sentiment of the input sentence is, so I am not convinced that the proposed method would be applicable in fine-grained sentiment analysis, such as Stanford Sentiment Treebank [3]. \\n\\n\\n--\\n2) A soft approximation over the inf-norm of a set of iid samples is log-sum-exp function, and it is the cdf of softmax function, which is also theoretically grounded in EVT for classifications. It could be a nicer story than the current one as the choice of the logistic distribution seems to be too intend. 
\\n\\n\\n--\\n3) The construction of the two datasets seems to be very arbitrary given that there exists a large number of sentiment analysis datasets and many with lots of samples, I am not sure that the results on the chosen constructed two datasets are sufficient enough to support the claim.\\n\\n3.1) The size of the datasets is too small. Given that, the marginal improvement against the NN baseline could be a result of a specific initialisation, which doesn't generalise to other random initialisations.\\n\\n3.2) The dimension of vector representations is also too small. Normally, commonly used word embeddings are of 300 dimensions, and contextualised ones are of higher than 1200 dimensions. The chosen 50 dimension could prevent the NN baseline model to perform well and IMO, it is helpful for picking a suitable logistic prior than it is in a very high dimensional space.\\n\\n3.3) There are many straightforward distributions that could be applied as a prior on the lengths of vector representations, e.g. the Rayleigh distribution in 2D and the Chi-squared distribution in higher-dimension. Then again, the distribution gets flatter and becomes similar to a uniform distribution when the dim goes higher, which is a common issue. It goes back to my concern or doubt on the usability of a prior on the norm of high dimensional vectors.\\n\\n\\n--\\n4) I am still interested in seeing EVT being applied in various domains, but I'd be in favor of more justifiable approaches. \\n\\n[1] Siffer, Alban, et al. \\\"Anomaly detection in streams with extreme value theory.\\\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017.\\n[2] Guu, Kelvin, et al. \\\"Generating sentences by editing prototypes.\\\" Transactions of the Association for Computational Linguistics 6 (2018): 437-450.\\n[3] Socher, Richard, et al. \\\"Recursive deep models for semantic compositionality over a sentiment treebank.\\\" Proceedings of the 2013 conference on empirical methods in natural language processing. 2013.\"}"
]
} |
rJgjGxrFPS | A Simple and Scalable Shape Representation for 3D Reconstruction | [
"Mateusz Michalkiewicz",
"Eugene Belilovsky",
"Mahsa Baktashmotagh",
"Anders Eriksson"
] | Deep learning applied to the reconstruction of 3D shapes has seen growing interest. A popular approach to 3D reconstruction and generation in recent years has been the CNN encoder-decoder model, often applied in voxel space. However, this often scales very poorly with the resolution, limiting the effectiveness of these models. Several sophisticated alternatives for decoding to 3D shapes have been proposed, typically relying on alternative deep learning architectures. We show in this work, however, that standard benchmarks in 3D reconstruction can be tackled with a surprisingly simple approach: a linear decoder obtained by principal component analysis on the signed distance transform of the surface. This approach allows easily scaling to larger resolutions. We show in multiple experiments that it is competitive with state-of-the-art methods and also allows the decoder to be fine-tuned on the target task using a loss designed for SDF transforms, obtaining further gains. | [
"Computer Vision",
"3D Reconstruction"
] | Reject | https://openreview.net/pdf?id=rJgjGxrFPS | https://openreview.net/forum?id=rJgjGxrFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9O_s3zFz3J",
"r1lKDzEtjS",
"ryxABb4KiS",
"SkgVc1NFsH",
"HkgKmkEKir",
"rJe5U-qh9r",
"B1lQSgTfcS",
"H1laIOJ6Fr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742690,
1573630561356,
1573630278110,
1573629836277,
1573629729303,
1572802898136,
1572159546632,
1571776596527
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2185/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2185/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2185/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2185/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2185/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2185/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2185/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to use PCS to replace the conventional decoder for 3D shape reconstruction. It shows competitive performance to the state of the art methods. While reviewer #3 is overall positive about this work, both reviewer #1 and #2 rated weak rejection. Reviewer #1 concerns that important details are missing, and the discussion of results is insufficient. Reviewer #3 has questions on the clarity of the presentation and comparison with SOTA methods. The authors provided response to the questions, but did not change the rating of the reviewers. The ACs agree that this work has merits. However, given the various concerns raised by the reviewers, this paper can not be accepted at its current state.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Changes in the manuscript\", \"comment\": \"Dear Reviewers, Thank you for your comments that help us to revise the manuscript. Based on the reviews we have made the following changes in the manuscript:\\n\\nWe have updated Equation (2) and revised Section 3 to improve clarity \\nWe have added the reference and description of main figure in section 3.2\\nWe have fixed some typos in the Experimental section.\\nWe have revised all minor grammatical and spelling errors noted by reviewers.\\nWe added an Appendix which provides more details and analysis for both the choice of the number of coefficiencts and also discussed why other shape representations are not naturally combined with PCA\"}",
"{\"title\": \"Response to review #1\", \"comment\": \"Thank you for your review.\\n\\n\\\"For example, what is the dashed line mean in Fig. 1?\\\"\\n\\nThe dashed lines indicate that the same values are used in the downstream task. E.g. the encoded PCA representation is used in the downstream tasks combined with MSE. Similarly for the fine tuned version of eigenSDF the eigenvectors are used to initialize a decoder model, which is then further finetuned with chamfer loss.\\n\\n\\\"What's the meaning of the dot over \\\"E\\\" in Sec 3.2, is it \\\"derivatives\\\"? If so, why not use this symbol in Eq. (2) as well?\\\"\\n\\nThe dot over E was a typo. E is just the matrix of eigenvalues. Equation (2) refers to the chamfer loss, which we emphasize is only used in our models denoted eigenSDF (finetuned). Typically we just minimize the loss in the latent space of the PCA. \\n\\n\\\"In general, I find it's a bit hard to follow when only 2-3 paragraphs are used for describing the proposed approach. It'll be good if the authors can elaborate on the approach in a more thorough manner. \\\"\\n\\nWe have updated section 3 to attempt to make it more clear. However the concept of proposed eigenSDF is simple: we learn a PCA model for the shape using the SDF representation, obtaining a simple latent shape representation. For downstream tasks (e.g. 2D image -> 3D shape) we simply minimize the MSE in the latent space, saving completely the decoding step during training, and having a very light decoder for inference. \\n\\nWe can also finetune this entire model using the chamfer loss applied at the level of the SDF representation, but note even without this performance is already competitive. \\n\\nWe do emphasize this high level view in several places besides the more formal Section 3 (e.g. in the introduction and in Fig 1).\\n\\n\\\"Regarding experimental results, (...). However it is not always the best in several metrics (as indicated in experimental result tables). I wonder if authors can provide more analysis or discussions on why this could happen, either the metric may not make too much sense in their setting, or if there is potential room for improvement.\\\"\\n\\nRegarding further insights into the metrics. We want to note that Occupancy networks were explicitly trained for the IoU. On the other hand Chamfer distance is a much better metric for this task as mentioned in [1] and [2].\\n\\nWe also want to note that single instance where LinearSDF is better than EigenSDF was actually due to a typographical error from transferring numbers to the table. This was only for chamfer distance and rifle category. Note that for rifles, the IoU and Normal Consistency measures were better for EigenSDF. We have now fixed this. \\n\\n\\\"A few failure case... 
of the proposed approach.\\\"\", \"one_of_the_issues_can_be_seen_when_looking_at_the_quantitative_table\": \"similar to 3D-R2N2 or Deep Level Sets, eigenSDF is struggling with reconstructing thin objects such as examples from lamp category.\", \"a_possible_improvement_can_be_the_following\": \"1) pre-processing SDFs by adding a small epsilon to make the SDFs \\u201cfatter\\u201d\\n 2) Training eigenSDF to learn \\u201cfat\\u201d version of examples\\n 3) Switch the L2 loss to chamfer loss and the ground truth to original, thin examples.\\n\\nNote that levelset methods are often susceptible to good initialization procedure.\\n\\n\\\"Compare against DeepSDF\\\"\\n\\nFirst of all we want to emphasize one of the goals of our work is to show a simple (linear) baseline can be competitive to the current state-of-the-art methods on the standard tasks. Indeed DeepSDF is an interesting and related work. As discussed in the Introduction DeepSDF avoids discretization but can lead to a complex decoder model, for example an 8 layer network is used to fit the SDF. In our case the representation is given by a simple linear transformation. Note DeepSDF does not give a task agnostic latent variable model as in our case (aka to represent a specific shape you need to fit a separate deep NN for each shape or do it implicitly conditioned on an image for a given task). In our formulation the shape representation has an explicit small latent code and this thus allows us to perform training in latent space. We also note DeepSDF subsamples 16384 SDF points. In our case, we capture over 99% of variance of over 2 million points. We noted that the dataset and preprocessing used in DeepSDF is different than in our work and the others we compare to thus it is difficult for us to compare at this time. Specifically to use SDFs we need watertight meshes. \\n\\n[1] Sun, Xingyuan, et al. \\\"Pix3d: Dataset and methods for single-image 3d shape modeling.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n[2] Tatarchenko, Maxim, et al. \\\"What Do Single-view 3D Reconstruction Networks Learn?.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\\n[3] Michalkiewicz, Mateusz, et al. \\\"Deep Level Sets: Implicit Surface Representations for 3D Shape Inference.\\\" arXiv preprint arXiv:1901.06802 (2019).\"}",
"{\"title\": \"Response to review #3\", \"comment\": \"Thank you for your review.\\n\\nRegarding the motivation of the PCA + SDF. The goal of our paper was to determine if a simple latent variable model can be used to replace the typically complex decoder. PCA seems a natural choice for this however the representation it is applied to is less obvious. As noted in [1], and in many other papers, there are currently 4 main shape representations: voxels, SDFs, point clouds and meshes. Applying the PCA to voxels is somewhat inappropriate, they are binary while PCA is designed for continuous variables. We do however evaluate this now in Appendix A2, where we show 3D reconstruction with 2048 eigenvectors for a random example of ShapeNetCars using voxel-based representation and SDFs. Voxel-based reconstructions perform so poorly that we did not even consider them for quantitative evaluation.\\n\\nApplying PCA to point clouds and meshes is not evident. Point clouds do not have a natural ordering thus it is unclear how one can apply it here. Similarly meshes do not have any canonical representation that can be used to represent them. We have added this discussion to Appendix A2.\\n\\n[1] Michalkiewicz, Mateusz, et al. \\\"Deep Level Sets: Implicit Surface Representations for 3D Shape Inference.\\\" arXiv preprint arXiv:1901.06802 (2019).\"}",
"{\"title\": \"Response to review #2\", \"comment\": \"Thank you for your review.\\n\\n\\u201cBeing able to easily scale to higher resolutions is claimed to be one of the main advantages, but I am not convinced that this is useful under this setting. If I understand this correctly, the number of eigenvectors k is fixed, and projecting the SDF field to this space would remove the higher frequency components of the shape. So wouldn't the number of eigenvectors be the bottleneck in representational precision, not the resolution of the output space?\\u201d\\n\\nFirst of all we want to highlight that the number of eigenvectors k needed to capture the data variance are very small relative to the larger resolutions we consider. For example in our experiments with 128^3 resolution k= 2048 for category ShapeNet-cars. Plot of captured variance of category ShapeNet-cars for resolution 128^3 can be found in the new version in Appendix A1. \\n\\nSecondly, we emphasize that the reason the proposal is scalable is that it avoid having a 3D convolutional decoder which will be by construction much slower than just predicting k coefficients, even if k were big in practice, which it isn\\u2019t. \\n\\n\\\"What is the chosen k (number of eigenvectors)? It says k was \\\"chosen to capture 99.5% of the variance within the dataset\\\", but I could not find how exactly it was chosen and what value of k was used (I apologize if I missed).\\\"\\n\\nThe chosen number of eigenvectors ranged from 512 to 2048. Some ShapeNet categories have small number of examples (such as phone, watercraft, or bench - approximately 1 000 examples) while others are substantially bigger (table, car, chair - close to 8 000 examples). We have added more details regarding this in Appendix A1.\\n\\n\\n\\\"Also, I think the PCA is category-specific (page 4, section 4.1). Is k dependent on the category or is it the same across all categories?\\\"\\n\\nYes, as mentioned before, since ShapeNet categories differ in size (from ~1k phones to ~8k cars), our choice for number of eigenvectors differs as well.\\n\\n\\\"Some of the other methods (if not all) used for comparison are not category-specific, so if this is true, I think the comparison may not be entirely fair and it should be made clearer.\\\"\\n\\nAmong methods in Section 4.1, only 3D R2N2 explicitly train their network jointly on all categories. However, our framework can be trivially generalized to a category-agnostic one. The only modification would be a larger number of eigenvectors. \\n\\nIn Section 4.4, all methods were trained per category.\\n\\n\\\"Minor typos\\\"\\n\\nThank you for noting the typos we have corrected them in the manuscript.\\n\\n\\\"Figure 2 not referred in the main text.\\\"\\n\\nIt is referred in the main text as \\u201cFigure 4.3\\u201d, this was latex referencing error and we have now corrected it.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the problem of learning the feature representation for predicting the 3D shape of objects, from a single image or a point cloud. The proposed approach performs PCA on the SDF field. And then the transformed feature map is learned and used as input to task-specific decoders for 3D shape prediction. The authors claims that this approach trains faster and is easier to scale, while showing competitive performance compared to state-of-the-art methods.\\n\\nI am leaning towards Weak Reject. The paper is generally easy to read, but with some details missing. And I found the discussion of the results to be insufficient. I think it can be an above-threshold paper if questions are addressed during rebuttal.\\n\\nBeing able to easily scale to higher resolutions is claimed to be one of the main advantages, but I am not convinced that this is useful under this setting. If I understand this correctly, the number of eigenvectors k is fixed, and projecting the SDF field to this space would remove the higher frequency components of the shape. So wouldn't the number of eigenvectors be the bottleneck in representational precision, not the resolution of the output space?\\n\\nWhat is the chosen k (number of eigenvectors)? It says k was \\\"chosen to capture 99.5% of the variance within the dataset\\\", but I could not find how exactly it was chosen and what value of k was used (I apologize if I missed).\\n\\nAlso, I think the PCA is category-specific (page 4, section 4.1). Is k dependent on the category or is it the same across all categories? Some of the other methods (if not all) used for comparison are not category-specific, so if this is true, I think the comparison may not be entirely fair and it should be made clearer.\\n\\nI think the writing could be polished as well, some minor typos:\", \"page_2\": \"anlaysis, enlightning\", \"page3\": \"under eigenSDF: reprsentation\", \"page_4\": \"section 4.1: refered, signficant\", \"page_5\": \"\\u201cseciton\\u201d\", \"page_7\": \"tranform\\n\\nPage 3, Section 3.2: Is N the number of training examples and M the resolution?\\nFigure 2 not referred in the main text.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Thank the authors for the response. I am still in favor of the idea -- applying simple, old-school method into a new problem, and I also agree with R1 and R2 that the paper is currently lack of details and experimental results. I will keep my score, but would not fight for the acceptance if R1 and R2 insist.\\n----------------------------------------\\nSummary\\nThis paper presents a new method for 3D shape reconstruction, based on SDF (Signed Distance Function) and PCA. The basic idea is to conduct PCA on all the shapes with SDF as feature, and encode a shape by the eigenvectors from PCA. The authors present experiments on 3D reconstruction from 2D view and point clouds, which demonstrate the effectiveness of the proposed method. I lean to vote for accepting this paper since the idea is simple but novel, and it achieves good performance.\\nStrengths\\n- The idea itself is simple and novel. The basic idea of this approach is simple -- keep most information / variance by using PCA, and it is also very novel, since I have not seen papers using PCA to encode 3D shapes.\\n- The idea is effective. As the authors demonstrated in section 4, this approach works well, and it outperforms all other methods by a large margin according to Chamfer distance. This is impressive since such a simple method can improve the performance this much.\\nWeaknesses\\n- More analysis could be provided about how do the authors choose SDF. Choosing SDF here is obviously a reasonable choice, but is it the best? More analysis could be provided, or more experiments could be included.\\nPossible Improvements\\nAs mentioned above, more analysis about why choosing SDF or more experiments about comparing SDF to other representations under this PCA approach could be provided.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper introduces a 3D object reconstruction/completion algorithm that utilizes a simple decoder from features generated using PCA of SDF. The approach was tested in a few experiments on public benchmarks and achieves competitive results.\\n\\nThe overall presentation of the paper is decent, and the network structure of the proposed approach is reasonable. It's interesting to see that using a simple PCA can help improve performance using a simple network structure. The experimental results make sense, and it's nice to see the performance is reasonable as well.\", \"i_have_a_few_questions_regarding_the_paper\": [\"Without looking at the code, I don't think I fully understand the formulation of the network structure just by reading the text. For example, what is the dashed line mean in Fig. 1? What's the meaning of the dot over \\\"E\\\" in Sec 3.2, is it \\\"derivatives\\\"? If so, why not use this symbol in Eq. (2) as well? In general, I find it's a bit hard to follow when only 2-3 paragraphs are used for describing the proposed approach. It'll be good if the authors can elaborate on the approach in a more thorough manner.\", \"Regarding experimental results, it's nice to see that eigenSDF is better than linearSDF, demonstrating that the approach is quite effective. However it is not always the best in several metrics (as indicated in experimental result tables). I wonder if authors can provide more analysis or discussions on why this could happen, either the metric may not make too much sense in their setting, or if there is potential room for improvement. A few failure case visualizations could also be helpful in understanding the issues of the proposed approach.\", \"Moreover, do authors have thoughts on eigenSDF vs deepSDF (cited in the paper, published in CVPR 2019)? It'll be interesting to compare those as well, as deepSDF has proven useful in a few papers already.\"]}"
]
} |
rJl5MeHKvB | Learning Through Limited Self-Supervision: Improving Time-Series Classification Without Additional Data via Auxiliary Tasks | [
"Ian Fox",
"Harry Rubin-Falcone",
"Jenna Wiens"
] | Self-supervision, in which a target task is improved without external supervision, has primarily been explored in settings that assume the availability of additional data. However, in many cases, particularly in healthcare, one may not have access to additional data (labeled or otherwise). In such settings, we hypothesize that self-supervision based solely on the structure of the data at hand can help. We explore a novel self-supervision framework for time-series data, in which multiple auxiliary tasks (e.g., forecasting) are included to improve overall performance on a sequence-level target task without additional training data. We call this approach limited self-supervision, as we limit ourselves to only the data at hand. We demonstrate the utility of limited self-supervision on three sequence-level classification tasks, two pertaining to real clinical data and one using synthetic data. Within this framework, we introduce novel forms of self-supervision and demonstrate their utility in improving performance on the target task. Our results indicate that limited self-supervision leads to a consistent improvement over a supervised baseline, across a range of domains. In particular, for the task of identifying atrial fibrillation from small amounts of electrocardiogram data, we observe a nearly 13% improvement in the area under the receiver operating characteristic curve (AUC-ROC) relative to the baseline (AUC-ROC=0.55 vs. AUC-ROC=0.62). Limited self-supervision applied to sequential data can aid in learning intermediate representations, making it particularly applicable in settings where data collection is difficult. | [
"Sequential Representation Learning",
"Self-Supervision",
"Function Approximation"
] | Reject | https://openreview.net/pdf?id=rJl5MeHKvB | https://openreview.net/forum?id=rJl5MeHKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QnYPVAe778",
"B1xd7CqhoH",
"BJlLNT5hjr",
"BJehm2qhiB",
"SJe7stcnjr",
"BklOzy2atr",
"BklmVCchYr",
"H1llIYPjFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742659,
1573854751734,
1573854509975,
1573854244417,
1573853595112,
1571827471708,
1571757610794,
1571678536323
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2184/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2184/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2184/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2184/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2184/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2184/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2184/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper addresses an important problem of self-supervised learning in the context of time-series classification. However, all reviewers raised major concerns regarding the novelty of the approach and the quality of empirical evaluation, including insufficient comparison with the state-of-art and reproducibility issues. The reviewers agree that the paper, in its current state, does not path the ICLR acceptance threshold, and encourage the authors to improve the paper based on the provided suggestions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"Thank you for your thorough review.\", \"in_response_to_your_concerns\": \"- Novelty of the proposed method compared with [1]:\\nLimited self-supervision uses a multitask framework that, critically, requires no external labels to improve accuracy on a single task. Most applications of multitask learning require additional \\u2018external\\u2019 labels, but our method is applicable even in situations where such labels aren\\u2019t available, hence its novelty.\\n\\nAlthough [1] also proposes a limited self-supervised framework, their method is applicable only in the EHR setting, since it requires additional labels from diagnosis and treatment codes. Still, we agree this is related, and have updated Section 2 to include a discussion of it. The relative novelty of our work is to examine limited self-supervision on general time-series tasks with a variety of different auxiliary tasks. In particular, we examine the relative merits of different auxiliary tasks, propose a novel auxiliary task (PLAE), and show the importance of including multiple forms of auxiliary supervision.\\n\\n- Insufficient baselines:\\nOur main goal was not to obtain state-of-the-art results on computational phenotyping tasks, but to investigate the utility of a limited self-supervision framework to sequence classification. To this end we compared to a fully supervised network (our baseline), and ran experiments investigating different types of auxiliary tasks. In order to present this frameworks applicability to a broader array of datasets, we have added an analysis of 7 datasets from the UCR repository (see Section A3 in the supplement). We showed that the addition of self-supervised auxiliary tasks offered sizable improvements over our baseline architecture on most datasets. Although we achieved state-of-the-art level performance on only one dataset, we find the consistent improvement in performance over the baseline indicates the general promise of this approach.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your review. We were glad that you found our paper well written and problem relevant.\", \"in_response_to_your_concerns\": \"- The difference between self-supervision and limited self-supervision:\\nWorks that use self-supervision generally assumes the availability of large amounts of unlabeled data that can be used for pretraining or feature extraction. In contrast, we do not make this assumption - hence the term \\u2018limited.\\u2019 We show that even when all of your data is supervised, self-supervision can still improve performance relative to a pure supervised baseline. \\n\\n- No clear benchmark with alternatives is provided (such as TLC or CPC):\\nThe main contribution of this work is to examine self-supervised learning in a setting without additional unlabeled data, where we instead extract additional supervision from the sequential structure inherent in the labeled data. We have explored the efficacy of various common forms of self-supervision on this broadly applicable setting, and proposed a novel form of self-supervision for this setting. The inclusion of additional forms of self-supervision, such as TCL or CPC could serve as an interesting direction for future work. We have updated our paper to mention these papers. \\n\\n- Does the approach achieve state-of-the-art results?\\nn two of our three sequence classification tasks (PLA and T1D) we are unaware of other published evaluations we could compare to. On the AF task, our results are not state-of-the-art. Our main goal was not to obtain state-of-the-art results on these sequence level classification tasks, but to investigate the utility of a limited self-supervision framework. In order to present this frameworks applicability to a broader array of datasets, we have added an analysis of 7 datasets from the UCR repository (see Section A3 in the supplement). We showed that the addition of self-supervised auxiliary tasks offered sizable improvements over our baseline architecture on most datasets. Although we achieved state-of-the-art level performance on only one dataset, we find the consistent improvement in performance over the baseline indicates the general promise of this approach.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your thorough review. In response to your major concerns:\\n\\n1) We have uploaded a revised version of the paper where we provide more complete and concrete descriptions of our architecture.\\n\\n2) We believe self-supervision can be viewed as an unsupervised learning approach for representation learning. We focus on a setting, limited self-supervision, that does not assume access to additional unlabeled data. One of the major contributions of our work is that we demonstrate self-supervision is useful in such situations. \\n\\n3) BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. It pre-trains on two tasks: next-sentence prediction and hidden token identification, but these tasks are somewhat specific to NLP. We focus on tasks that are applicable to time-series. Moreover, our setup isn\\u2019t a pre-training setup, in which one assumes a large amount of additional unlabeled data, but seeks to leverage the data at hand. \\n\\nBRITS aims to impute missing values while solving the sequence level task (e.g., predict in-hospital mortality). There are parallels between their work and ours, notably their RITS model is functionally equivalent to our Baseline+Forecasting model. However, our main goal was to improve the representation learned by a network on fully observed and supervised data. \\n\\n4) In our revised draft of the paper, we have attempted to better describe our experimental settings. Additionally, we have posted all of our code to an anonymous google drive account, we will upload this to the authors github account after the review process has concluded. Since all datasets we report results on are publicaly available and our source code is publicly available, we believe our results are reproducible.\\n\\n5) In Figure 2 we use all auxiliary tasks averaging across subsets. For example when # of auxiliary tasks =1 this corresponds to an average of the performance of all single-auxiliary task models. When # of auxiliary tasks=2 we consider all combinations of size 2 and average the resulting performance. This figure demonstrates that additional streams of self-supervision tends to help, as average performance increases with number of auxiliary tasks. To provide additional insight into the performance of all auxiliary task combinations, we have added the full results (not just the averages) in the supplement section A2.\\n\\n6) Our main goal was not to obtain state-of-the-art results on these sequence level classification tasks, but to investigate the utility of a limited self-supervision framework. To this end we compared to a fully supervised network (our baseline), and ran experiments investigating different types of auxiliary tasks. In order to present this frameworks applicability to a broader array of datasets, we have added an analysis of 7 datasets from the UCR repository (see Section A3 in the supplement). We showed that the addition of self-supervised auxiliary tasks offered sizable improvements over our baseline architecture on most datasets. Although we achieved state-of-the-art level performance on only one dataset, we find the consistent improvement in performance over the baseline indicates the general promise of this approach.\"}",
"{\"title\": \"Summary of our Main Contribution\", \"comment\": \"Though others have previously demonstrated that self-supervision can be used to learn useful discriminative representations from large pools of unlabeled data, we show that when used as auxiliary tasks, self-supervision can improve the representation learning without any additional data. To the best of our knowledge, we are the first to propose limited self-supervision as a general framework for representation learning. To this end, we investigated this framework by: 1) considering a range of different auxiliary tasks and 2) exploring the effect of combining these tasks. Based on comparisons across a range of datasets, we find that multiple simultaneous streams of auxiliary self-supervision improve performance over a single stream.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes the use of many auxiliary tasks to boost the performance on a target task by means of `'self-supervision'. Specifically, they considered auto-encoding, forecasting, partial-segment auto-encoding, and piecewise-linear auto-encoding.\\n\\nThere are major concerns that should be clarified or described in detail.\\n1) The overall architecture is not complete. the architectures used in the experiments are not described concretely.\\n2) To this reviewer, the idea of self-supervision is similar to the unsupervised learning for representation learning. \\n3) The methods of BERT (Bidirectional Encoder Representations from Transformers) [Devlin et al., 2018] or BRITS (Bidirectional Recurrent Imputation for Time Series) [Cao et al., 2018], although different for their target tasks in their original work, could be also regarded as self-supervision technique and could be interesting to compare with them.\\n4) The experimental settings are not described well, thus lack of reproducibility\\n5) It is unclear which aux-tasks were applied in Fig. 2. Further to better understand and analyze the results, it is required to conduct more rigorous ablation studies.\\n6) There is no comparison with recent work on the same datasets.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper propose an approach for self-supervised learning on time series.\\nThree datasets are considered (simulation and 2 healthcare datasets).\\nThe gist of the contribution is to both optimize prediction loss\\nof the true task and at the same time do a good job for a family\\nof auxiliary tasks. 4 auxiliary tasks are considered. While\\nthe first 3 auxiliary tasks are quite common, the 4th tasks\\ncalled piecewise-linear autoencoding appears novel. The idea\\nis that the hidden representation of the LSTM should be a good predictor\\nof the past using a piecewise-linear approximation.\\nThe author coin the term \\\"limited self-supervision\\\" for their approach\\nalthough it's not clear why it is fundamentally not just self-supervised\\nlearning as it has been proposed in the past.\\n\\nThe paper is overall well written and addresses the relevant issue\\nof learning from limited annotated data.\\n\\nMajor concerns\\n\\n- It is yet another way to do self-supervised learning on time series\\nand no clear benchmark with alternatives is provided (time contrastive\\nlearning (TCL) or Contrastive Predictive Coding (CPC) https://arxiv.org/pdf/1807.03748.pdf\\netc.)\\n\\n- On any of the applied problem it is not clear if the proposed\\napproach brings an improvement on the state-of-the-art or if it's\\njust an illustration of the method disconnected from the literature\\nof the application.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a so called self-supervised method for learning from time series data in healthcare setting. Specifically, here self-supervision is achieved via designing auxiliary tasks based on data's internal structure to create more labeled auxiliary training tasks.\\n\\nFrom both perspectives of methods and applications, the proposed model has very limited novelty. It is just one application of multitask learning. Also very similar idea has been implemented by [1]. In [1], the authors learn multi-level embedding to make disease/risk prediction, where the embedding was jointly trained by performing auxiliary prediction tasks that rely on this inherent EHR structure. The authors need to state what is the novelty of the proposed method compared with [1].\\n\\nIn addition, the performance evaluation missed many baselines. Table 1 seems more like a ablation study rather than a performance comparison. You need to compare with all state-of-the-art models in computational phenotyping in order to show the performance advantage brought by the proposed mode design.\\n\\n[1] Edward Choi, Cao Xiao, Walter Stewart, Jimeng Sun, MiME: Multilevel Medical Embedding of Electronic Health Records for Predictive Healthcare, NeuRIPS, 2018\"}"
]
} |
Byg5flHFDr | EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs | [
"Changmin Wu",
"Giannis Nikolentzos",
"Michalis Vazirgiannis"
] | Neural networks for structured data like graphs have been studied extensively in recent years.
To date, the bulk of research activity has focused mainly on static graphs.
However, most real-world networks are dynamic since their topology tends to change over time.
Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining.
Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature.
In this paper, we propose a model that predicts the evolution of dynamic graphs.
Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs.
Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology.
We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets.
Results demonstrate the effectiveness of the proposed model. | [
"temporal graphs",
"graph neural network",
"graph generative model",
"graph topology prediction"
] | Reject | https://openreview.net/pdf?id=Byg5flHFDr | https://openreview.net/forum?id=Byg5flHFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-jtws4dEnM",
"HylezQvAKB",
"rJl6gxxRYB",
"Bylnso_aKH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742631,
1571873544365,
1571844084725,
1571814308193
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2183/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2183/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2183/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a combination graph neural networks and graph generation model (GraphRNN) to model the evolution of dynamic graphs for predicting the topology of next graph given a sequence of graphs.\\n\\nThe problem to be addressed seems interesting, but lacks strong motivation. Therefore it would be better if some important applications can be specified. \\n\\nThe proposed approach lacks novelty. It would be better to point out why the specific combination of two existing models is the most appropriate approach to address the task. \\n\\nThe experiments are not fully convincing. Bigger and comprehensive datasets (with the right motivating applications) should be used to test the effectiveness of the proposed model. \\n\\nIn short, the current version failed to raise excitement from readers due to the reasons above. A major revision addressing these issues could lead to a strong publication in the future.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a framework to model the evolution of dynamic graphs for the task of predicting the topology of next graph given a sequence of graphs. Specifically, the paper uses a combination of recently proposed techniques in graph representation learning (Graph Neural Network) and Graph Generation (GraphRNN [You et. al. 2018]). Given a sequence of graphs as input, a GNN (to obtain low-dimensional representations of the graphs in this sequence) and LSTM (to model the sequence of these representations) based encoder is used to compute a vector representation of the topology of next graph in the sequence. The learned vector is then used as input to a GraphRNN decoder to generate a graph that would serve as a predicted next graph in the sequence. The proposed approach is validated with experiments on three synthetic datasets and one real-world dataset (Bitcoin is same dataset from two different resources with little difference in characteristics) and compared against random graph models.\", \"this_paper_should_be_rejected_due_to_following_reasons\": \"(1) The authors do not justify/discuss the motivation and importance of the task and corresponding applications that would require to predict topology of complete graph in the next step. \\n(2) The proposed techniques are an adhoc combination of existing techniques with major concerns (details below) but also with little novelty (if any) for achieving this combination.\\n(3) The empirical efforts are very limited and does not provide enough evidence about the efficacy of the method, miss several details and does not serve as motivation for designing such a method in first place. Please note that negative results on cycle graphs has no role to play in this assessment. In fact,\\nI appreciate the authors for reporting negative results as it provides a transparent insights into the effectiveness of model in different settings.\\nOverall, the paper needs lot of work on all aspects - motivation, technique and experiments to make it fit for a conference publication.\", \"major_concerns\": [\"(a) Motivation: The authors do not discuss or motivate the problem and why it is important to the community. The authors mention that many existing work on dynamic graphs focus on learning representations. This is the case because learned representations can then be used for various downstream applications and even future event predictions. When such methods can be used to do future predictions required for most applications, why does one need to predict the topology of complete next graph? The authors need to provide concrete justification for the problem they address, instances where such a task would be useful and discussion on other techniques that can do similar tasks but lack in aspects that such a method can capture. For instance, as a preliminary step, can the authors explain how solving this problem would be helpful to bitcoin?\", \"(b) Technical: The technical contributions of this paper lack novelty and has several flows:\", \"Figure 1 seems to show that graph only grows in size. While the authors do provide an experiment with removal process, that experiment does not seem to perform well. 
So, does the method is only good to support growing graphs?\", \"Authors mention that the edge and node attributes are considered to be fixed. However, if the number of nodes and edges change, X and L should also change in terms of dimensions and adding values for new nodes/edges. so why should it not be considered time-varying?\", \"What was the motivation for using GRU for update function in Eq 3? Was simple MLP tried and not useful? Was GRU used to capture some long term dependencies in structure? If so, the authors must explain how it is useful for this task.\", \"Why Set2Set was used for ReadOut function? This seems to be a particularly adhoc and odd choice. when sum did not work well, jump to Set2Set is not justified. Can the authors provide an explanation for the same?\", \"The authors claim that the embedding h_G_T incorporates topological information -- I find this claim highly unsubstantiated and needs justification. For instance, can you provide some rigorous analysis to demonstrate that this is the case? At the least, can the authors use this vector and pass it through a graph decoder to recover the original graph?\", \"What is novel in 3.2.3 as compared to You et. al.? Infact, it is hard to see any novelty in the entire combination. Was it challenging to achieve this combination? If so, what was the challenging part? It is not clear what the authors contributed to address such a challenge. Was the training challenging? If so, please explain. If not, why is this a novel approach?\", \"(c) Empirical: The empirical efforts are inadequate and raises more questions than answers.\", \"Synthetic datasets are simple and more datasets should be used e.g. You et. al. 2018 to validate the performance. Only one real-world dataset from two different sources is used. It is hard to understand author's motivation in doing so. Why not use various graph datasets available in papers that learn representations (e.g. cited by authors themselves) What is special about bitcoin dataset that makes it suitable for this task?\", \"Node/edge attributes are chosen in adhoc manner and it is unclear what role they perform. Do they help with prediction? If not, would it be useful to first show experiments without them? Or does this method absolutely need attributes? It is not clear why it is useful to set all attributes for edge as 1.\", \"How was window size of 10 chosen? Why is the same window size good for all graphs? What impact does window size has on performance?\", \"What is the motivation for using Graph kernel for similarity? The authors borrow the decoder from You et. al. 2018 which also provides a principled method to compare graphs using MMD based on statistics. Why not employ the same?\", \"GraphRNN (You et. al.) and other generative models can learn over multiple graphs? Did the authors try to feed the sequence of graphs to such models and then try to generate a new graph to see if they can produce similar results? It is true that those generative models do not specifically model temporal sequence, but such an experiment would help to distinguish the efficacy of the proposed method.\", \"-The technique of using MLP for generating predictions using random graph models seem to be highly unfair for the baselines. Can you elaborate more as it is difficult to understand why one should handicap those models by using learned information instead of data information?\", \"A rigorous discussion on insights explaining the results is required. The authors show high performance on Bitcoin dataset. 
However, it is not clear what part is contributing to the performance. Similarly, authors should dig deep into the failure cases and provide justification of why such a method would fail in particular cases and propose alternatives.\", \"Why was Graph size used as a statistic to report? Two graphs of same size can be entirely different and I do not see any merit in using such a metric. Again, something like MMD based metric may be useful.\"], \"improvements_that_would_make_future_revision_strong_but_has_not_impacted_current_assessment\": \"Overall, the presentation of the paper is very unpolished. The authors are missing many important details as described above while spending a lot of time in describing (repeating) known techniques verbatim as original works. This can be removed and condensed into very short preliminary section.\\n\\n- Notations: The authors must use clear notations. For instance, on Page 2, L is used to describe edge attributes but then it is replaced by E in Page 3. Also, both X and L are shown to have dimension d. Are edge and node attributes of same dimension? w is used for window-size of sequence used as input and also as neighbor node. When modeling evolution of graphs where a sequence is available over time points 0...T, it is not useful to use T to also represent time step of GNN propagation. Infact, authors should avoid using time steps to signify GNN iterations.\\n\\n- Empirical details: The details provided for datasets and experimental setup is inadequate. Why are the two Bitcoin datasets different from each other? What does Pos. Edges in Table 1 mean? What does Mean and 90th percentile in Table 2 signify? Authors only talk about train-test split but then mention\\nvalidation set for hyper-param tuning. How was this validation set obtained? Also, what hyper-params were tuned and what was sensitivity of those hyper-params? Authors use GNN and multiple RNN's, what was the model capacity used and how it impacted the performance? Figure 5 (c) what is a circle graph?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors propose a new neural network architecture for predicting the next graph conditioned on a past graph sequence. It seems that the proposed model is the first deep learning model for graph sequence prediction. The model consists of three major components: a graph encoder that maps a graph to an encoding represented as a vector, an LSTM for graph sequence embedding, and a graph decoder for generating an affinity matrix.\", \"there_are_two_main_concerns_i_have_with_this_paper\": [\"The model has some inherent limitations in the graph embedding step. First, the graph encoder embeds a graph into a feature vector that represents the topology of the graph. I assume that the feature vector has a small size, and it is hard to encode a large graph (i.e. 1000x1000). This representation is quite sub-optimal to me. The model will not be able to utilize the complete information of a large dense graph. Second, the model only takes 10 graphs as input and ignores other graphs in the input graph sequences. This sounds suboptimal to me.\", \"The performance of the proposed model is not satisfactory. The model does not output a graph with the right size for very simple synthetic graphs. The model completely fails for generating circles. A better model should be proposed to address this challenge. Evaluation is not convincing enough. Simply comparing the graph size between the output and ground truth is not sufficient. We can further predict where graph structure matches the ground truth exactly. I believe this can be done for simple graphs like circles, paths, and ladders.\"], \"other_comments\": [\"The authors claim that all the sequences in the datasets are fixed to 1000. However, in Figure 4, the graph index goes up to 1600. Why?\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a system for predicting evolution of graphs. It makes use of three different known components - (a) Graph Neural Networks (GNN); (b) Recurrent Neural Networks (RNN); (c) Graph Generator. A significant portion of the paper is spent in explaining these known concepts. The contribution of the paper seems to be a system of combining these to achieve graph evolution prediction. As stated, this system is effectively a recurrent auto-encoder of sorts.\\n\\nThe main objection I have in this paper is that they have only used two real datasets (both of which are from the same domain). There are several only available datasets that have temporally annotated graph evolution. It is not possible to conclude the empirical superiority of a system based on such little evidence.\"}"
]
} |
B1ltfgSYwS | Few-Shot One-Class Classification via Meta-Learning | [
"Ahmed Frikha",
"Denis Krompaß",
"Hans-Georg Koepken",
"Volker Tresp"
] | Although few-shot learning and one-class classification have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot one-class classification problem and presents a meta-learning approach that requires only a few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and explicitly trains for few-shot class-imbalance learning, aiming to learn a model initialization that is particularly suited for learning one-class classification tasks after observing only a few examples of one class. Experimental results on datasets from the image domain and the time-series domain show that our model substantially outperforms the baselines, including MAML, and demonstrate the ability to learn new tasks from only a few majority-class samples. Moreover, we successfully learn anomaly detectors for a real-world application involving sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine, using only a few examples from the normal class. | [
"meta-learning",
"few-shot learning",
"one-class classification",
"class-imbalance learning"
] | Reject | https://openreview.net/pdf?id=B1ltfgSYwS | https://openreview.net/forum?id=B1ltfgSYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"DUkmbaQhu",
"r1l2zKwjiB",
"SyxbAdwjoS",
"SyeAidDssr",
"SJephPPjjH",
"SkeA0UwssB",
"rJevDIwjsH",
"r1g2xUvooS",
"HkgyCrvjsr",
"r1lYrNPsoB",
"S1lVyVDosr",
"Byg7TQvssS",
"HyelYQDjir",
"BkecBmwjsr",
"S1lWpY-x5H",
"r1xMn6u6tB",
"H1eGv9N6Fr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742603,
1573775635608,
1573775560895,
1573775525714,
1573775284913,
1573775062180,
1573774942784,
1573774835609,
1573774791168,
1573774400583,
1573774299631,
1573774267078,
1573774199871,
1573774146106,
1571981752531,
1571814826280,
1571797593797
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2182/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2182/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2182/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors present a combination of few-shot learning with one-class classification model of problems. The authors use the existing MAML algorithm and build upon it to present a learning algorithm for the problem. As pointed out by the reviewers, the technical contributions of the paper are quite minimal and after the author response period the reviewers have not changed their minds. However, the authors have significantly changed the paper from its initial submission and as of now it needs to be reviewed again. I recommend authors to resubmit their paper to another conference. As of now, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"References\", \"comment\": \"[1]: Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126\\u20131135. JMLR. org, 2017\\n\\n[2]: Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2018 \\n\\n[3]: Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel M\\u00fcller, and Marius Kloft. Deep one-class classification. In International Conference on Machine Learning, pp. 4393\\u20134402, 2018 \\n\\n[4]: Jedrzej Kozerawski and Matthew Turk. Clear: Cumulative learning for one-shot one-class image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,pp. 3446\\u20133455, 2018.\"}",
"{\"title\": \"Summary of our additional contributions\", \"comment\": \"Thank you for your detailed review and for recognizing that the few-shot one-classification is an under-studied problem and that the simplicity of the proposed method is a strength.\", \"we_summarize_our_additional_contributions_during_the_rebuttal_phase_in_the_following\": \"-Theoretical analysis of why OC-MAML works and why MAML and other first-order meta-learning do not, even when adapted to the OCC case (see Table 1 in the revised paper version). \\n\\n-Empirical comparison to other gradient-based meta-learning algorithms to validate our theoretical explanation (Table 2 in the revised paper version). \\n\\n-A modification that increases the performance of class-balanced meta-learning algorithms and first-order one-class meta-learning algorithms, when the test task is a OCC task. However, OC-MAML still yields the highest performance (Table 2 in the revised paper version). \\n\\n-A comparison to the classical OCC approaches OC-SVM and isolation forest, as well as the \\\"Finetune\\\" baseline (paper 1019 submitted to ICLR 2020), in the few-shot one-class classification scenario (Table 1 in the revised paper version). \\n\\n-Empirical evaluation of OC-MAML and all the baselines on an additional dataset, the Omniglot dataset, which is a classical benchmark for few-shot learning (see Tables 1 and 2 in the revised paper version).\"}",
"{\"title\": \"Our answer regarding weaknesses 1 and 2\", \"comment\": \"In the following we answer the weaknesses you mentioned one by one.\", \"weakness_1\": \"\\\"While I enjoyed reading the paper since it tackles an under-explored problem, it is hard to justify publishing the method/approach at a top machine learning conference. Changing the balance in meta-learning is a relatively obvious modification that one would do to better reflect the problem; I don't think it results in general scientific/ML principles that can be used elsewhere.\\\"\\n\\n\\nThe modification we made to MAML might be intuitive and simple from an algorithmic point of view. We added a new section to the revised paper version (section 2.3.2) where we provide a theoretical analysis showing that other general meta-learning frameworks, namely FOMAML [1] and Reptile [2], lack the ability to learn one-class classifiers from only few datapoints despite the \\\"simple\\\" modification. In addition, we further back these findings empirically with additional experiments (See Section 4.2 in the revised paper version or our answer for Reviewer 1). \\n\\nIn summary, For a given task, OC-MAML optimizes for a parameter initialization from which taking few gradient steps with one-class minibatches results in a performance increase on class-balanced data. This is done by maximizing the inner product of the gradients of different minibatches with different class-imbalance rates. We refer to section 2.3.2 in the revised version of the paper for more details. \\n\\nBesides the above mentioned finding, we empirically show that classical OCC methods, such as OC-SVM and isolation forest (IF), completely fail in the low data (few-shot) regime. This demonstrates that the few-shot OCC problem lacks baselines. We therefore believe that our approach will serve as a first, simple and strong benchmark method for future research in this area. \\n\\n\\n----\", \"weakness_2\": \"\\\"The relationship to out-of-distribution detection (which some of the experiments, e.g. Multi-task MNIST and miniImagenet essentially test) is not discussed or compared to. How are anomalies defined and is it really different than just being out-of-distribution?\\\"\\n\\n \\nIn the following we discuss the difference between out-of-distribution detection and one-class classification (or anomaly-detection). \\n\\nIn the out-of-distribution detection literature usually data from a completely different dataset is considered as out-of-distribution, e.g. a model is trained on the CIFAR-10 dataset (of which all data is considered in-distribution) and the test set partially includes data from the TinyImageNet dataset (which would be considered out-of-distribution). In this case, we can say that out-of-dataset-distribution detection is performed. In our experiments we essentially test out-of-class-distribution detection, i.e. the out-of-distribution examples belong to different classes than the normal class but come from the same dataset. Methods from the one-class-classification (and anomaly-detection) literature usually address this latter problem. The out-of-distribution examples (coming from other classes) are in our case are, therefore, closer to the in-distribution examples (coming from the normal class) than in the case of out-of-dataset-distribution case. \\n\\nIn the following we clarify how we define the anomalies and take MiniImageNet (MIN) as an example dataset. MIN reserves 20 classes for testing. 
In our experiments, a test task include normal examples coming from one of these 20 classes and anomalous examples coming from the other 19 classes. If you consider data belonging to these 19 classes as out-of-distribution, then yes we are performing out-of-distribution detection. However, as mentioned above, this would be out-of-class-distribution detection, i.r. one-class classification. \\n\\nWe address the one-class classification problem and conduct additional experiments to compare to classical OCC approaches (OC-SVM and IF) in the few-shot regime. We do not address the out-of-(dataset)-distribution detection problem and therefore do not compare to the classical methods to solve it.\"}",
"{\"title\": \"Our answer regarding weakness 3\", \"comment\": \"Weakness 3.1: \\\"The datasets are limited. The MNIST dataset seems to choose a fixed two specific categories for meta-validation and meta-testing, as opposed to doing cross-validation. Results on just one meta-testing seems limited in this case with just one class.\\\"\\n\\n\\nIn our Multi-Task MNIST (MT-MNIST) experiments, we fixed only the meta-validation task, in which we arbitrarily chose the digit 9 to be the normal class. We actually conductedcross validation , i.e. generated 9 different datasets from MNIST, in each of which one digit (from 0 to 8) is the normal class of the meta-testing task. In the main paper we presented the results of the dataset where the meta-testing task consists in differentiating the digit 0 from the others, but referenced to the Appendix where we presented the results on the other 8 datasets. We note that the results on all datasets were consistent.\\n\\nDuring the rebuttal phase we tested our method on a further benchmark dataset for few-shot learning, the Omniglot dataset, where we used the official split of 30 alphabets for meta-training and meta-validation and 20 alphabets for meta-testing. We found consistent results with the other datasets. The results can be seen in the results section of the revised paper version (section 4.2).\\n\\nIn total, we tested our approach on 6 different datasets, 3 image datasets: MT-MNIST (a classical benchmark dataset for One-class classification that actually includes 9 different datasets, one for each class), Omniglot (a classical benchmark dataset for few-shot learning) and MiniImageNet (a challenging benchmark dataset for few-shot learning), as well as 2 synthetic time-series datasets (one based on sine functions and on sawtooth waveforms) and a real-world time-series dataset. As your pointed out that our experimental evaluation is limited, it would be really helpful for us to understand on which datasets we should conduct additional experiments to further strengthen our contribution.\\n\\n----\\n\\nWeakness 3.2: \\\"In terms of time-series, anomaly detection has been studied for a long time; is there a reason that the authors create a new synthetic dataset?\\\"\\n\\n\\nThe only reason for this, is that most of the time-series datasets for anomaly detection include data from only one domain and only one normal class, which prevents us from creating different tasks out of these datasets. For the meta-learning problem formulation several different tasks are required.\\n\\nWe generated two synthetic datasets, one based on sine functions and one on sawtooth waveforms, in a way that is suitable for a meta-learning problem. Each of these dataset is composed of 30 different tasks, i.e. 30 different normal signals and anomaly types. Inspired by the impact of the simple toy dataset of sine functions proposed by [1] as a few-shot regression benchmark dataset, we aim for these datasets to be easy-to-use toy datasets for few-shot (one-class) classification on time-series. This latter problem is rather underexplored compared to few-shot classification on image data.\\n\\n----\\n\\nWeakness 3.3:\\\"For the milling example, how were anomalies provoked?\\\"\\n\\n\\nAnomalies were provoked by creating realistic scenarios for deficient manufacturing. 
Examples are using a workpiece that exhibits deficiencies which leads to a drop in the torque signal or using rather slightly decalibrated process parameters which induced various irritations to the workpiece surface which harmed production quality.\"}",
"{\"title\": \"Our answer regarding weakness 4\", \"comment\": \"Weakness 4: \\\"The baselines do not represent any state of art anomaly detection (e.g. density based, isolation forests, etc.) nor out of distribution detection; the latter especially would likely do extremely well for the simple image examples.\\\"\\n\\n\\nAs we mentioned in the paper, we did not compare to the classical one-class classification approaches, since they require high amounts of data and are therefore not applicable in the few-shot regime that we address. It should be noted that OC-SVM and other shallow approaches, e.g. isolation forest (IF) sometimes completely fail in one-class classification even when high amounts of data are available. For example in [3], OC-SVM yields a AUC-ROC of 50% on some of the CIFAR-10 classes, when 5000 datapoints from this class are used for learning.\\n\\nUpon your request and the request of Reviewer 1, we conducted additional experiments using OC-SVM and IF on the few-shot one-class test tasks. Hereby, we apply PCA to the data where we choose the minimum number of eigenvectors sothat at least 95% of the variance is conserved, as done in [3]. For OC-SVM, we additionally tune the inverse length scale (gamma) by using 10% of the test set, as done in [3]. This gives OC-SVM a supervised advantage, compared to the other baselines.\\n\\nFor a fairer comparison, where these methods also benefit from the data available in the meta-training tasks, we additionally run experiments on the embeddings inferred by the feature extractors of both the \\\"Finetune\\\" baseline (paper 1019 submitted to ICLR 2020) and the Multi-Task-Learning (MTL) baseline. The results are shown in Table 1 in the revised paper version.\\n\\nAs expected, the baselines fail in general to generalize to unseen examples when only few datapoints from the normal class are available. The only exception is the MT-MNIST dataset, where the shallow baselines trained on the extracted embeddings yield good performance K=10 examples of the normal class are available. We explain this by the fact that in the MT-MNIST dataset, the feature extractor models (MTL and \\\"Finetune\\\") are exposed to most of the anomalies of the test task (8 out of the 9 digit classes present in the test task) during training. Hence, useful embeddings are extracted. We note that, even on the MT-MNIST dataset, OC-MAML still outperfoms all baselines by a significant margin.\\n\\nWe note that in the case where only K=2 examples are available, IF is not applicable, since all trees will have a depth of 1.\"}",
"{\"title\": \"Our answer regarding weakness 5\", \"comment\": \"Weakness 5:\\\"There is no analysis of what the difference is in representation (initialization) learning due to the differences between the OCC and FS setup. What are the characteristics of the improved initialization?\\\"\\n\\n\\nWe adress the FS-OCC problem in our present work. We assume that your question is about the difference between the initialization yielded by FS-OCC meta-training (OC-MAML) and FS-class-balanced meta-training (MAML). We cover this concern directly in our revised version of the paper (Section 2.3.2):\\n\\nBy analyzing the approximated loss gradients used for the MAML and OC-MAML updates, we come to the following finding: For a given task, OC-MAML optimizes for increasing the inner product of the gradients computed on different minibatches with different class-imbalance rates, namely minibatches containing data from only one class and a class-balanced minibatch (meta-update). If the inner product of the gradients computed on two different minibatches is positive, taking one gradient step using one minibatch leads to an increase in performance on the other minibatch. Consequently, OC-MAML optimizes for a parameter initialization from which taking one (or few) gradient step(s) with minibatch(es) including only normal class data results in a performance increase on class-balanced data. In contrast, MAML optimizes for a parameter initialization that requires class-balanced minibatches to yield the same effect. When adapting to OCC tasks, however, only examples from one class are available. We conclude, therefore, that using minibatches with different class-imbalance rates for meta-training, as done in OC-MAML, yields parameter initializations that are more suitable for adapting to OCC tasks.\\n\\nWe also find that the second-order derivatives are essential to do so, which explains why OC-FOMAML (its first-order approximation) fails to adapt to FS-OCC tasks. Please refer to section 2.3.2 in the revised paper version for a more detailed theoretical analysis of the gradients of the different meta-learning algorithms, in the OCC case. In the answer to Reviewer 1, we further discuss how batch normalization after the last feature-producing layer can be used to partially counteract this shortcoming. However, despite this modification, the first order meta-learning methods are still outperformed by OC-MAML by a significanat margin.\\n\\n----\", \"minor_comment\": \"\\\"Exposition: Define the one-class classification problem; it's not common so it would be good to define in the abstract, or mention anomaly detection which is a better-known term.\\\"\\n\\nThank you for the comment. We did this in the revised version of the paper.\\n\\n----\\n\\nThank you again for taking the time to thoroughly read our paper. We noticed that you judged our work with the lowest score. This is really unfortunate since we believe that the topic of few/one-shot one-class classification really deserves a greater attention by the research community due to its wide applicability in many areas where data is naturally scarce and has an extreme class-imbalance. As we address most of your concerns and additional concerns of the other reviewers which clearly improved our paper and our contribution we would really appreciate if you could spare additional time and reevaluate our revised version of the paper and its contributions. 
Also, if you have further remarks and concerns please feel free to let us know so we can further improve our research contribution.\"}",
"{\"title\": \"Summary of our additional contributions\", \"comment\": \"Thank you for your review.\", \"we_summarize_our_additional_contributions_during_the_rebuttal_phase_in_the_following\": \"-Theoretical analysis of why OC-MAML works and why MAML and other first-order meta-learning do not, even when adapted to the OCC case (see Table 1 in the revised paper version). \\n\\n-Empirical comparison to other gradient-based meta-learning algorithms to validate our theoretical explanation (Table 2 in the revised paper version). \\n\\n-A modification that increases the performance of class-balanced meta-learning algorithms and first-order one-class meta-learning algorithms, when the test task is a OCC task. However, OC-MAML still yields the highest performance (Table 2 in the revised paper version). \\n\\n-A comparison to the classical OCC approaches OC-SVM and isolation forest, as well as the \\\"Finetune\\\" baseline (paper 1019 submitted to ICLR 2020), in the few-shot one-class classification scenario (Table 1 in the revised paper version). \\n\\n-Empirical evaluation of OC-MAML and all the baselines on an additional dataset, the Omniglot dataset, which is a classical benchmark for few-shot learning (see Tables 1 and 2 in the revised paper version).\"}",
"{\"title\": \"Our answer regarding your first concern\", \"comment\": \"In the following we answer your concerns one by one.\", \"concern_1\": \"\\\"The first is about the real requirement of this learning scenario. Although the authors have pointed out some real applications, I think they have been introduced separated. In other words, since this setting is the combination of two previous areas, i.e., one class classification and few-shot learning, I fell that the authors have introduced it by just a combination. What are the unique challenges of this problem? I think these problems should be clarified at first.\\\"\\n\\n\\nIn section 2.1 \\\"Problem statement\\\" we defined the few-shot one-class classification (FS-OCC) problem and mentioned its challenges. In the following we further clarify the requirements and challenges of the FS-OCC problem. We begin by explaining the requirements and challenges of the OCC problem and the few-shot class-balanced binary classification (FS-CBBC) problem separately.\\n\\nIn the OCC problem, data examples from only one class (the normal class) are available for training a model that has to differentiate between two classes, namely the normal class and the abnormal class. For that a learning algorithm is required, that enables the trained model to approximate a sufficiently generalized decision boundary for the normal class without overfitting to this class, i.e. approximating a too big decision boundary which results in predicting (almost) everything as normal. Classical OCC approaches, e.g. OC-SVM, usually require high amounts of data from the normal class to be able to learn such a decision boundary. We note that in the OCC scenario there are no restrictions with regards to the amount of data available from the normal class, i.e. access to high amounts of normal data examples is assumed. \\n\\nIn the FS-CBBC problem, few data examples from each of the two classes are available for training a binary classification model. For that a learning algorithm is required, that enables the trained model to approximate a sufficiently generalized decision boundary for the each class without overfitting to the few examples available, i.e. approximating a too tight decision boundary which prevents generalization to unseen examples from each class. Most of the few-shot classification approaches, e.g. MAML, require access to data examples from each class to yield such generalization with only few examples. We note that in the FS-CBBC scenario there are restrictions with regards to the class distribution of the available training data, namely access to data examples from each of the two classes is assumed. \\n\\nIn the FS-OCC problem, which we address in this work, only few data examples from only one class (the normal class) are available for training a binary classification model. In this scenario restrictions on the data amount (only few examples are available) and on the class distribution in the data (data from only one class is available) are imposed. To address this problem, a learning algorithm is required, that enables the trained model to approximate a sufficiently generalized decision boundary for the normal class using only few of its examples. The unique challenge of this scenario arises from the combination of the challenges of the OCC and the FS-CBBC problems, namely overfitting to the normal class, i.e. predicting (almost) everything as normal, and overfitting to the few available examples, i.e. 
not being able to generalize to unseen examples.\\n\\nWe updated the section 2.1 \\\"Problem statement\\\" in the revised version of the paper to include more details and clarifications about the requirements and unique challenges of the FS-OCC problem. We would appreciate, if you could give us some feedback on the detailed problem statement given above.\"}",
"{\"title\": \"Our answer regarding your other concerns\", \"comment\": \"Concern 2.1:\\\"The second one is the algorithms itself. Although I have not checked the details, I fell that the authors have prepared this paper in a rough way.\\\"\\n\\n\\nWe would be grateful for concrete examples of sentences or sections, where you felt that the paper was written in a \\\"rough\\\" way, so that we can improve them.\\n\\n----\\n\\nConcern 2.2:\\\"The authors have only described the method, without deep analyses answering the question why. For example, the method seems heuristic, without theoretical analysis.\\\"\\n\\n\\nWe added a section (section 2.3.2) in the revised version of the paper, where we give a theoretical explanation of why OC-MAML works. In the following, we briefly summarize our findings. Furthermore, we conducted additional experiments to validate our theoretical analysis empirically. For the results of these experiments please see Table 2 in the revised paper version.\\n\\nBy analyzing the approximated loss gradients used in the MAML and OC-MAML updates, we come to the following finding: For a given task, OC-MAML optimizes for increasing the inner product of the gradients computed on different minibatches with different class-imbalance rates, namely minibatches containing data from only one class and a class-balanced minibatch (meta-update). If the inner product of the gradients computed on two different minibatches is positive, taking one gradient step using one minibatch leads to an increase in performance on the other minibatch. Consequently, OC-MAML optimizes for a parameter initialization from which taking one (or few) gradient step(s) with minibatch(es) including only normal class data results in a performance increase on class-balanced data. In contrast, MAML optimizes for a parameter initialization that requires class-balanced minibatches to yield the same effect. When adapting to OCC tasks, however, only examples from one class are available. We conclude, therefore, that using minibatches with different class-imbalance rates for meta-training, as done in OC-MAML, yields parameter initializations that are more suitable for adapting to OCC tasks.\\n\\nWe also find that the second-order derivative term is essential to do so, which explains why OC-FOMAML (its first-order approximation) fails to adapt to FS-OCC tasks. Please refer to section 2.3.2 of the revised paper version for a more detailed theoretical analysis of the gradients of the different meta-learning algorithms, in the OCC case.\\n\\n----\\n\\nConcern 2.3:\\\"In summary, I think this paper likes a technical report, not a research paper.\\\"\\n\\n\\nOur work emerged from a practical and technical situation, where the few-shot one-class regime is common, namely industrial manufacturing. However, we developed a method that is applicable in multiple data domains, i.e. time-series (in particular, sensor data) and images, and therefore can be adopted beyond the initial technical problem. We aim for our approach to be a first and strong baseline for the, as recognized by both other reviewers, relevant and under-studied research problem of few-shot one-class classification in general, i.e. not to the specific case of few-shot anomaly detection on sensor data. We believe that our approach can be of great value to the research community that will explore this challenging and important problem further in the future. 
We hope that our extensions of the paper give it a stronger research character.\\n\\n----\", \"concern3\": \"\\\"Although I can catch the main meaning of this paper, it seems that the writing style is not so fluently. I suggested the authors to recognize the presentation.\\\"\\n\\n\\nAs mentioned above, we would be grateful if you could point out some concrete examples of sentences or sections, of which we should improve the writing style. Could you also please elaborate on the last sentence?\\n\\nSince you mentioned that you have not thoroughly read our paper, we would really appreciate if you could spare the time to reevaluate our revised paper version. Please feel free to raise any further concerns or add some additional comments. As we believe that we have addressed most of your concerns we would also be very grateful if you could reconsider the rather low scoring for our research contribution.\"}",
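For reference, the gradient argument can be sketched in one line of assumed notation (this is our restatement following the standard first-order Taylor analysis of MAML-type updates, not an excerpt from the paper): with g_oc and g_cb the loss gradients on a one-class and a class-balanced minibatch, H_oc and H_cb the corresponding Hessians, and one inner-loop step of size alpha,

```latex
\theta' = \theta - \alpha \, g_{oc}(\theta), \qquad
\nabla_{\theta} \mathcal{L}_{cb}(\theta')
  = \big( I - \alpha H_{oc}(\theta) \big)\, g_{cb}(\theta')
  \approx g_{cb} - \alpha \big( H_{oc}\, g_{cb} + H_{cb}\, g_{oc} \big)
  = g_{cb} - \alpha \, \nabla_{\theta} \langle g_{oc},\, g_{cb} \rangle .
```

Descending this meta-gradient therefore ascends the inner product between the one-class and class-balanced gradients; dropping the second-order term, as first-order approximations do, removes the H_oc g_cb part of this alignment term, which is consistent with the reported failure of OC-FOMAML.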
"{\"title\": \"Summary of our additional contributions\", \"comment\": \"Thank you for your review and for recognizing the relevance of the few-shot one-classification problem as well as the suitability of meta-learning as an approach to tackle it.\", \"we_summarize_our_additional_contributions_during_the_rebuttal_phase_in_the_following\": \"-Theoretical analysis of why OC-MAML works and why MAML and other first-order meta-learning do not, even when adapted to the OCC case. \\n\\n-Empirical comparison to other gradient-based meta-learning algorithms to validate our theoretical explanation. \\n\\n-A modification that increases the performance of class-balanced meta-learning algorithms and first-order one-class meta-learning algorithms, when the test task is a OCC task. However, OC-MAML still yields the highest performance. \\n\\n-A comparison to the classical OCC approaches OC-SVM and isolation forest, as well as the \\\"Finetune\\\" baseline (paper 1019 submitted to ICLR 2020), in the few-shot one-class classification scenario. \\n\\n-Empirical evaluation of OC-MAML and all the baselines on an additional dataset, the Omniglot dataset, which is a classical benchmark for few-shot learning (see results in the revised paper version).\"}",
"{\"title\": \"Our answer regarding weakness 1\", \"comment\": \"In the following we answer your concerns one by one.\", \"weakness_1\": \"\\\"MAML is quite a general meta-training framework, which can be used when parameterized base-learners are updated using gradient methods. Thus, when parameterized models for one-class classification are used, it is rather easy to meta-train one-class classifiers in the MAML framework\\\"\\n\\n\\nWe adress this concern by providing a theoretical analysis (section 2.3.2 in the revised paper version) and empirically demonstrating that other general meta-learning frameworks, namely FOMAML [1] and Reptile [2], fail to learn one-class classifiers from only few datapoints. For that, we adapt FOMAML and Reptile to the one-class classification scenario by using one-class classifiers for meta-training, i.e. we use examples from only one class for adaptation (in the inner loop) and class-balanced data for the outer loop, as it was done for OC-MAML. We will refer to these algorithms as OC-FOMAML and OC-Reptile, respectively. We note that for OC-Reptile, the first (N-1) batches contain only examples from only one class and the last (Nth) batch is class-balanced. We additionally compare to the class-balanced version of these meta-learning algorithms, where only class-balanced (CB) batches are used during meta-training (CB-FOMAML and CB-Reptile). The results on the two already used image datasets (Multi-Task MNIST and MiniImageNet) as well as on the during the rebuttal phase added Omniglot dataset are consistent. The results on all datasets can be seen in Table 2 of the revised paper version. \\n\\nWe find that OC-MAML substantially outperforms the other meta-learning algorithms, by a substantial margin on all datasets. As shown in the new section of our revised paper version (section 2.3.2), OC-MAML is the only meta-learning algorithm that, for a given task, optimizes for increasing the inner product of the gradients computed on different minibatches with different class-imbalance rates, namely minibatches containing data from only one class and a class-balanced minibatch (meta-update). If the inner product of the gradients computed on two different minibatches is positive, taking one gradient step using one minibatch leads to an increase in performance on the other minibatch. Hence, OC-MAML optimizes for an initialization from which taking a a few gradient steps using a minibatch including datapoints from only one class results in an increased performance on the class-balanced task, i.e. higher performance on both classes. In our analysis of the approximated gradients of the different meta-learning algorithms, we find that the second derivative term is essential to do so.\\n\\nIn an attempt to make the other meta-learning algorithms work in the few-shot one-class scenario, we add a batch normalization (BN) layer immediately before the output layer of the network. This BN layer standardizes the latent features using the mean and std. deviation of the few datapoints available for finetuning, which all belong to the normal class. As a result, this layer would output features with mean close to 0 and std. deviation close to 1 for normal class examples. Anomalous examples would yield features with other statistics, which simplifies their detection. We hypothesize that by enforcing a mapping of the data to a latent space standardized only by examples from the normal class, the detection of the anomalies would be easier, as these clearly fall out of distribution. 
\\n\\nThe added BN layer is of course used during meta-training as well. Hereby, we do not train the BN parameters of this layer, i.e. the scaling parameter (gamma) is fixed to 1 and the centring parameter (beta) is fixed to 0. We do so, to make sure that the network does not shift the standard distribution. The results are displayed in the Table 2 of the revised paper version. \\n\\nWe find that this simple modification substantially increases the performance of the other meta-learning algorithms, i.e. MAML, FOMAML, Reptile, OC-FOMAML and OC-Reptile, on all image datasets. However, OC-MAML without batch normalization still yields better results. We observe a higher increase in performance when K=10 than when only K=2 examples are available. This confirms our hypothesis that enforcing a mapping of the data to a latent space standardized only by examples from the normal class makes the detection of the anomalies easier. In fact, using more examples yields more accurate mean and std. Deviation measures, which enables a better approximation of the distribution of the normal class and therefore leads to an improved detection of the anomalies. \\n\\nWe also tested these algorithms on networks including a trainable batch normalization layer after each convolutional layer. Comparable results to just adding one non-trainable batch normalization layer before the output layer were yielded.\"}",
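A minimal, hypothetical PyTorch sketch of this modification — a non-trainable batch normalization layer inserted right before the output layer, i.e. with affine=False so that gamma stays fixed at 1 and beta at 0 — might look as follows; the encoder is a placeholder, not the paper's architecture.

```python
import torch.nn as nn

class OneClassHead(nn.Module):
    """Feature extractor followed by a BatchNorm layer with affine=False,
    so the network cannot rescale or shift the standardized latent space."""
    def __init__(self, encoder, feat_dim):
        super().__init__()
        self.encoder = encoder                            # placeholder module
        self.bn = nn.BatchNorm1d(feat_dim, affine=False)  # non-trainable BN
        self.out = nn.Linear(feat_dim, 1)

    def forward(self, x):
        z = self.bn(self.encoder(x))  # normal examples map near mean 0, std 1;
        return self.out(z)            # anomalies fall outside these statistics
```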
"{\"title\": \"Our answer regarding weakness 2\", \"comment\": \"Weakness 2: \\\"Regarding episodic training, in contrast to few-shot classification problems, support sets in episodes have similar positive examples. Thus, fine-tuning baseline method could work well, even without using MAML. Please compare it with the fine-tuning method.\\\"\\n\\n\\nWe are not sure, we understood the concern in the first sentence correctly. We clarify the episodic training of OC-MAML in the following. In each episode, a new support set is randomly sampled from the majority class data for each meta-training task that was sampled for this episode. Therefore, like in the few-shot class-balanced classification case, support sets differ from episode to episode. The only difference is that in the latter case the support sets include datapoints from all classes. We note that each meta-training task has a different normal class, which means that the support sets of the different tasks have different normal (non-anomalous) examples. Please elaborate on the first sentence if our clarification did not answer your concern. \\n\\nAs requested we compared to the \\\"Finetune\\\" method proposed in the concurrently submitted paper to ICLR 2020 \\\"Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples\\\" (paper 1019). The results are shown in the table 1 in the revised paper version, where we also compare to shallow OCC approaches. \\n\\nAs expected, the \\\"Finetune\\\" baseline performs well in adapting to few-shot class-balanced tasks, but it overfits to the majority class, when finetuned with examples belonging to only one class. As a result, it yields a test accuracy close to 50% on all datasets, like a predictor that always predicts just one outcome, in the one-class classification case.\"}",
"{\"title\": \"Our answer to your comments\", \"comment\": \"Comment 1: \\\"I assume that the query set in each episode include negative examples, while support sets have only positive examples. Right? What is the value of c (class imbalance rate) in the query set?\\\"\\n\\n\\nYes, support sets include only positive examples, i.e. belonging to the majority (non anomalous) class. Query sets are class-balanced (c=50%) in order to evaluate the model on both classes in the outer loop after adaptation, i.e. the inner loop updates. This way, we optimize the performance of the model on both classes equally after adaptation using only one class.\\n\\n----\", \"comment_2\": \"\\\"Wouldn't it be better to focus on experiments with c=0% since one-class classification requires the training with only positive examples?\\\"\\n\\n\\nWe showed the results of c=50% in order to give the reader some reference numbers from the class-balanced case. For example, these results answer the question \\\"How much accuracy gain would having data from both classes yield\\\". As for the experiments with c=1%, we conducted them to show that our approach is not only applicable in the extreme case of one-class classification, but also in the general class-imabalance case. We will focus more on the c=0% case in the revised version of the paper.\\n\\n----\", \"comment_3\": \"\\\"What was the baseline one-class classifier? One-class SVM?\\\"\\n\\n\\nAs we mentioned in the paper, we did not compare to the classical one-class classification approaches, since they require high amounts of data and are therefore not applicable in the few-shot regime that we address. It should be noted that OC-SVM and other shallow approaches sometimes completely fail in one-class classification even when high amounts of data are available. For example in [3], OC-SVM yields a AUC-ROC of 50% on some of the CIFAR-10 classes, when 5000 datapoints from this class are used for learning. \\n\\nUpon your request and the request of Reviewer 2, we conducted additional experiments using the classical OCC approaches OC-SVM and Isolation Forest (IF) on the few-shot one-class test tasks. Hereby, we apply PCA to the data where we choose the minimum number of eigenvectors sothat at least 95% of the variance is conserved, as done in [3]. For OC-SVM, we additionally tune the inverse length scale (gamma) by using 10% of the test set, as done in [3]. This gives OC-SVM a supervised advantage, compared to the other baselines.\\n\\nFor a fairer comparison, where these methods also benefit from the data available in the meta-training tasks, we additionally conducted experiments on the embeddings inferred by the feature extractors of both the \\\"Finetune\\\" baseline and the Multi-Task-Learning (MTL) baseline. The results can be seen in Table 1 of the the revised paper version. \\n\\nAs expected, the baselines fail to generalize to unseen examples when only few datapoints from the normal class are available. The only exception is the MT-MNIST dataset, where the shallow baselines trained on the extracted embeddings yield good performance K=10 examples of the normal class are available. We explain this by the fact that in the MT-MNIST dataset, the feature extractor models (MTL and \\\"Finetune\\\") are exposed to most of the anomalies of the test task (8 out of the 9 digit classes present in the test task) during training. Hence, useful embeddings are extracted. 
We note that, even on the MT-MNIST dataset, OC-MAML still outperforms all baselines by a significant margin.\\n\\n----\", \"comment_4\": \"\\\"It was mentioned that CLEAR was an earlier work. Then, the empirical comparison with CLEAR should be included when image data is considered.\\\"\\n\\n\\nCLEAR [4] uses a feature extractor trained on ImageNet. Comparing to it on the MiniImageNet dataset would not be fair, as the feature extractor was trained on the test classes. The other datasets that we tested our approach on, MNIST and Omniglot, are composed of grey-scale images. We will not be able to run experiments on the datasets that were tested in the CLEAR paper due to the short rebuttal time. We would like, however, to point out that OC-MAML is data-type-agnostic and was successfully validated on time-series data, to which CLEAR is not applicable.\\n\\nAs we have addressed most of your concerns, which clearly improved the quality of the paper, we would really appreciate it if you could spare the time to reevaluate our revised paper and adapt the current score accordingly. Please feel free to further comment on our additions and raise further concerns.\"}",
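The shallow-baseline protocol described in this exchange (PCA keeping 95% of the variance, then OC-SVM or Isolation Forest) can be reproduced with standard scikit-learn components. A minimal sketch with placeholder random data (our own illustration; the dimensions, K, and the gamma value are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(10, 64))   # K=10 normal-class examples
X_test = rng.normal(size=(200, 64))    # mixed normal/anomalous test set

# PCA keeping the minimum number of components that preserves at least
# 95% of the variance (a float n_components triggers this behavior).
pca = PCA(n_components=0.95).fit(X_normal)
Z_normal, Z_test = pca.transform(X_normal), pca.transform(X_test)

# One-class SVM; in the protocol above, gamma is tuned on a small
# labeled split, which gives this baseline a supervised advantage.
ocsvm = OneClassSVM(kernel="rbf", gamma=0.1).fit(Z_normal)
svm_scores = ocsvm.decision_function(Z_test)       # higher = more normal

iforest = IsolationForest(random_state=0).fit(Z_normal)
if_scores = iforest.decision_function(Z_test)      # higher = more normal
```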
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"One of promising approach to tackle the few-shot problems is to use meta-learning so that the learner can quickly generalize to an unseen task. One-class classification requires only a set of positive examples to discriminate negative examples from positive examples. The current paper addresses a method of meta-training one-class classifiers in the MAML framework when only a handful of positive examples are available.\", \"---Strength---\", \"Few-shot one-class classification is a timely subject, which has not be studied yet.\", \"Meta-training one-class classifiers in the MAML framework seems to be sound.\", \"---Weakness---\", \"MAML is quite a general meta-training framework, which can be used when parameterized base-learners are updated using gradient methods. Thus, when parameterized models for one-class classification are used, it is rather easy to meta-train one-class classifiers in the MAML framework.\", \"Regarding episodic training, in contrast to few-shot classification problems, support sets in episodes have similar positive examples. Thus, fine-tuning baseline method could work well, even without using MAML. Please compare it with the fine-tuning method.\", \"---Comments---\", \"I assume that the query set in each episode include negative examples, while support sets have only positive examples. Right? What is the value of c (class imbalance rate) in the query set?\", \"Wouldn't it be better to focus on experiments with c=0% since one-class classification requires the training with only positive examples?\", \"What was the baseline one-class classifier? One-class SVM?\", \"It was mentioned that CLEAR was an earlier work. Then, the empirical comparison with CLEAR should be included when image data is considered.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors have investigated the few shot one classification problem. They have presented a meta-learning approach that requires only few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm. I think the topic itself is interesting and I have the following concerns.\\n(1) The first is about the real requirement of this learning scenario. Although the authors have pointed out some real applications, I think they have been introduced separated. In other words, since this setting is the combination of two previous areas, i.e., one class classification and few-shot learning, I fell that the authors have introduced it by just a combination. What are the unique challenges of this problem? I think these problems should be clarified at first.\\n(2) The second one is the algorithms itself. Although I have not checked the details, I fell that the authors have prepared this paper in a rough way. The authors have only described the method, without deep analyses answering the question why. For example, the method seems heuristic, without theoretical analysis. In summary, I think this paper likes a technical report, not a research paper.\\n(3) Although I can catch the main meaning of this paper, it seems that the writing style is not so fluently. I suggested the authors to recognize the presentation.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper tackles an interesting problem, one-class classification or anomaly detection, using a meta-learning approach. The main contribution is to introduce a parameter such that the inner-loop of the meta-learning algorithm better reflects the imbalance which occurs during meta-testing. Results are shown comparing a few simple baselines to both MAML and the modified variant, on a few datasets such as image-based ones (MNIST, miniImageNet), a synthetic dataset, and a real-world time-series example from CNC milling machines.\", \"Overall, the paper presents an interesting problem and awareness that meta-learning might be general enough to solve it well, but provides no real novelty in the approach. The datasets and comparison to other state of art methods (including both other anomaly detection methods and out of distribution methods) is lacking. I suggest the authors perform more rigorous experimentation and focus the paper to be a paper about an understudied problem with rigorous experiments/findings, or improve their method beond the small modification made. Due to these weaknesses, I vote for rejection at this time. Detailed comments are below.\", \"Strengths\", \"The problem is interesting and under-studied in the context of deep learning and transferable methods from similar ML problems (e.g. few-shot learning)\", \"The method is simple and adapts a state of art in few-shot learning (meta-learning, and specifically MAML)\", \"Weaknesses\", \"While I enjoyed reading the paper since it tackles an under-explored problem, it is hard to justify publishing the method/approach at a top machine learning conference. Changing the balance in meta-learning is a relatively obvious modification that one would do to better reflect the problem; I don't think it results in general scientific/ML principles that can be used elsewhere.\", \"The relationship to out-of-distribution detection (which some of the experiments, e.g. Multi-task MNIST and miniImagenet essentially test) is not discussed or compared to. How are anomalies defined and is it really different than just being out-of-distribution?\", \"The datasets are limited. The MNIST dataset seems to choose a fixed two specific categories for meta-validation and meta-testing, as opposed to doing cross-validation. Results on just one meta-testing seems limited in this case with just one class. In terms of time-series, anomaly detection has been studied for a long time; is there a reason that the authors create a new synthetic dataset? For the milling example, how were anomalies provoked?\", \"The baselines do not represent any state of art anomaly detection (e.g. density based, isolation forests, etc.) nor out of distribution detection; the latter especially would likely do extremely well for the simple image examples.\", \"There is no analysis of what the difference is in representation (initialization) learning due to the differences between the OCC and FS setup. What are the characteristics of the improved initialization?\"], \"one_minor_comment_not_reflecting_the_decision\": [\"Exposition: Define the one-class classification problem; it's not common so it would be good to define in the abstract, or mention anomaly detection which is a better-known term.\"]}"
]
} |
SyeKGgStDB | Training a Constrained Natural Media Painting Agent using Reinforcement Learning | [
"Biao Jia",
"Jonathan Brandt",
"Radomir Mech",
"Ning Xu",
"Byungmoon Kim",
"Dinesh Manocha"
] | We present a novel approach to train a natural media painting agent using reinforcement learning. Given a reference image, our formulation is based on stroke-based rendering that imitates human drawing and can be learned from scratch without supervision. Our painting agent computes a sequence of actions that represent the primitive painting strokes. In order to ensure that the generated policy is predictable and controllable, we use a constrained learning method and train the painting agent using the environment model so that it follows the commands encoded in an observation. We have applied our approach on many benchmarks and our results demonstrate that our constrained agent can handle different painting media and different constraints in the action space to collaborate with humans or other agents.
| [
"agent",
"constrained natural media",
"reinforcement learning",
"reinforcement",
"novel",
"natural media",
"reference image",
"formulation",
"rendering",
"human"
] | Reject | https://openreview.net/pdf?id=SyeKGgStDB | https://openreview.net/forum?id=SyeKGgStDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LyXwrtrXM2",
"rylc8Z7QaB",
"HkehXzoP9H",
"Hyl4lLkRKr",
"Byes_E3hYr"
],
"note_type": [
"decision",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742575,
1575330130042,
1572479523764,
1571841516145,
1571763315236
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2181/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2181/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2181/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2181/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Paper is withdrawn by authors.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Submission Withdrawn by the Authors\", \"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes an RL agent for generating a painting from a photograph by optimizing a sequence of brush strokes to match the target image.\\n\\nI believe the paper should be rejected because it does not have significant technical novelty for a first-tier conference, and the results do not show much aesthetic or technical advance. In terms of technical novelty, the paper seems to be applying a standard RL agent to an existing problem space, to optimize existing losses. The addition of constraints is technically very simple. The paper itself fails to articulate a compelling statement of novelty, for example, in discussing Huang 2019, the paper just says that a limitation of that method is that it uses OpenCV and that it doesn\\u2019t provide a system for control. Removing OpenCV is not a publishable contribution and shouldn\\u2019t even be mentioned; the control mechanism is not particularly novel.\\n\\nSome of the results do look nice, but it\\u2019s hard to say that the method has improved over the past 20 years of stroke-based rendering, e.g., many of the results look worse than those in Hertzmann 1998. The paper doesn\\u2019t offer any meaningful comparisons to the previous work, such as fair side-by-side comparisons on the same images, comparing computation times and aesthetics. The paper doesn\\u2019t articulate any aesthetic goals or state any meaningful standard by which the images might have improved over previous work. The control mechanism is not tested in any meaningful way (e.g., user studies), and the changes it makes to the results seem fairly trivial (e.g., contrast can be added to an image just as well via a post-process).\\n\\nI\\u2019m not sure what ICLR\\u2019s policy on citing previous unrefereed work is, but one paper that seems to have much more interesting results in a similar context is: https://arxiv.org/abs/1904.08410 . Reiichiro Nakano, Neural Painters: A learned differentiable constraint for generating brushstroke paintings.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a reinforcement learning agent trained to interact with a painting environment in order to reproduce target images. The novel aspect of this work seems to be a version of the agent producing partial actions (i.e., portion of the action tuple is clamped to pre-specified values). This new agent receives the clamped components of the action as an additional conditioning input.\", \"pros\": [\"In some cases, images produced by the system look appealing.\"], \"cons\": [\"The writing of the manuscript could be significantly improved. I had a hard time understanding certain parts of the paper (e.g., what \\u201cconstrained\\u201d means in the context of the present work). I got an impression that there was an effort to make things look more complex than they really are.\", \"The proposed model lacks novelty \\u2013 there seems to be only one non-trivial contribution and I\\u2019m not entirely sure how useful it is. The authors never compare their system against a simple baseline when one just overrides the actions of the agent.\", \"In general, the evaluation section of the paper leaves a lot to be desired. The authors report some numbers (most of which are not even for the proposed model) but I\\u2019m struggling to make anything out of that information. Why should I care about them? What interesting conclusions can I draw from them? The paper never discusses this. On top of that, there are no baselines.\", \"Notes/questions:\", \"At test time, the policy receives renders from a real environment. Could that create problems since it has only been trained on images synthesized by a neural surrogate?\", \"Abstract: \\u201con many benchmarks\\u201d -> \\u201con several benchmarks\\u201d\", \"Section 1: \\u201cwe use a constraint representation along with \\u2026\\u201d \\u2013 this sentence needs to be rewritten.\", \"Section 2: Missing reference \\u2013 Neural Painters (Nakano, 19). Considers a neural surrogate of the libmypaint environment.\", \"Section 2, last paragraph: \\u201cboth methods can work on either a small action space or a small observation space\\u201d \\u2013 how is the present model different? The action space is very similar (albeit continuous) to the existing approaches (i.e., relies on Bezier curves much like in (Ganin et al., 18)).\", \"Section 2, last paragraph: The last sentence needs to be rewritten (\\u201cuncontrollable agent\\u201d looks a bit strange)\", \"Section 3, first paragraph: \\u201cWe highlight all \\u2026\\u201d -> \\u201cWe describe all \\u2026\\u201d\", \"Section 4.1, first paragraph: The first sentence needs to be rewritten (\\u201cthe corresponding canvas by the given action\\u201d).\", \"Section 4.1, first paragraph: \\u201cUnlike the previous \\u2026\\u201d \\u2013 not true. (Ganin et al., 18) proposes to use this environment and (Nakano, 19) trains a neural surrogate for it.\", \"Section 4.4: \\u201cWGAN loss (Huang et al., 19)\\u201d \\u2013 this WGAN loss was introduced in (Ganin et al., 18).\", \"Section 5: I feel like the paper could do a better job at explaining what \\u201cconstraining\\u201d really means and justifying why it\\u2019s an interesting problem to solve. In my opinion, Eq. 
(3), for example, obscures rather than clarifies the notion of \\u201cconstraints\\u201d.\", \"Section 5, paragraph 2: \\u201cFor each different\\u201d -> \\u201cFor each\\u201d\", \"Section 5.1, paragraph 4: \\u201cpi\\u201d -> \\u201c\\\\pi\\u201d\", \"Figure 5: The caption almost overlaps with the text below.\", \"Section 6.3, paragraph 2: \\u201cl2\\u201d -> \\u201cl_2\\u201d\", \"Section 7, paragraph 2: \\u201cdiffering\\u201d -> \\u201cdifferent\\u201d\", \"I feel like the authors should perform a major re-writing of the manuscript before it\\u2019s ready for publication. Moreover, I failed to see any significantly novel aspects of the proposed system (maybe due to the poor presentation) and therefore I wouldn\\u2019t recommend the paper for acceptance.\"]}",
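To make concrete the comparison Reviewer #2 asks for above — overriding the agent's actions post hoc versus conditioning the policy on the clamped action components — here is a schematic sketch (our own; the action layout, `policy`, and `constrained_policy` are hypothetical placeholders, not the paper's interfaces):

```python
import numpy as np

# Hypothetical action tuple: (x0, y0, x1, y1, pressure, width, r, g, b)
CLAMP = {6: 0.9, 7: 0.1, 8: 0.1}  # e.g., constrain the stroke color

def override_baseline(policy, obs):
    """Baseline: run the unconstrained agent, then clamp components."""
    action = policy(obs)
    for i, v in CLAMP.items():
        action[i] = v
    return action

def conditioned_agent(constrained_policy, obs):
    """Proposed scheme: the clamped values are part of the observation,
    so the agent can plan the remaining components around them."""
    constraint = np.zeros(9)
    for i, v in CLAMP.items():
        constraint[i] = v
    action = constrained_policy(obs, constraint)
    for i, v in CLAMP.items():
        action[i] = v  # enforce the constraint exactly
    return action
```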
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors present the results of training a natural media painting agent using reinforcement learning for different types of strokes. The agent seems to be capable of learning how to pain under different types of constraints and produce visually interesting images.\", \"comments\": [\"Given that the authors give an implementation of the constrained RL agent as one of the key contributions of the paper, there is a glaring absence of mentioning related work on constrained reinforcement learning and reviewing the existing approaches in literature, in order to compare and contrast what the authors propose in this paper. This makes it hard for the readers to assess the novelty of the contribution.\", \"Similarly, the authors should discuss in more detail the limitations on the types of constraints that are possible to easily express in this framework, given the simplicity of the constraints that are shown in the experiments\", \"The readability of the paper seems like it could be improved. Apart from typos, there seem to be many long enumerations of approaches that other researchers have taken in this space, but for a reader it is not immediately obvious how these come together and relate to the work that is being presented.\", \"With the above in mind, one thing that was conceptually unclear to me is that one of the main advantages of the proposed approach, according to the authors, when compared to some of the cited related work, is that this approach can generate intermediate representations and not just the final output with most resemblance to the reference picture. There is a mention of this opening up new artistic possibilities. Yet, this particular use case is not given central stage in evaluation. The authors should provide more examples of why this particular capability is relevant and how it leads to interesting outcomes.\", \"The authors say: \\u201cwe use 5 strokes to reproduce hand-written digits images, 20 strokes to reproduce character images, 100 strokes to reproduce face and object images\\u201d - which does seem reasonable, but the actual numbers (5, 20, 100) are not well motivated. After all, why not (10, 50, 200) or (5, 40, 100) or (5, 40, 200)? It would be good to show experimentally (for one of these) that the choice is justified. I\\u2019m not familiar with KanjiVG - but there do exist kanji characters with more than 50 strokes. I\\u2019m guessing they are not present in this dataset?\", \"The authors should provide more details on the specifics of the model architecture and the hyperparameters that have been used / explored.\", \"In the results/discussion of the paper, the authors should compare to prior work and highlight their novel contributions. Given that multiple papers out there that have had at first glance similar-looking results, it is hard to otherwise qualitatively judge whether what is proposed in this paper is better, without any form of side-by-side comparison.\"]}"
]
} |
SkgOzlrKvH | The Role of Embedding Complexity in Domain-invariant Representations | [
"Ching-Yao Chuang",
"Antonio Torralba",
"Stefanie Jegelka"
] | Unsupervised domain adaptation aims to generalize the hypothesis trained in a source domain to an unlabeled target domain. One popular approach to this problem is to learn domain-invariant embeddings for both domains. In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain. In particular, this complexity affects an upper bound on the target risk; this is reflected in experiments, too. Next, we specify our theoretical framework to multilayer neural networks. As a result, we develop a strategy that mitigates sensitivity to the embedding complexity, and empirically achieves performance on par with or better than the best layer-dependent complexity tradeoff. | [
"domain adaptation",
"domain-invariant representations",
"model complexity",
"theory",
"deep learning"
] | Reject | https://openreview.net/pdf?id=SkgOzlrKvH | https://openreview.net/forum?id=SkgOzlrKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"sq4LX4xL1",
"B1xoBL8DiH",
"rkljWLLPoS",
"H1eK_SIPsS",
"Byg2pWhqqr",
"SkxxRsGe9H",
"BJlNYB0RFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742545,
1573508675284,
1573508610656,
1573508465051,
1572680131760,
1571986375808,
1571902844491
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2178/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2178/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2178/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2178/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2178/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2178/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": [\"This paper studies the impact of embedding complexity on domain-invariant representations by incorporating embedding complexity into the previous upper bound explicitly.\", \"The idea of embedding complexity is interesting, the exploration has some useful insight, and the paper is well-written. However, Reviewers and AC generally agree that the current version can be significantly improved in several ways:\", \"The proposed upper bound has several limitations such as looser than existing ones.\", \"The embedding complexity is only addressed implicitly, which shares similar idea with previous works.\", \"The claim of implicit regularization has not been explored in-depth.\", \"The proposed MDM method seems to be incremental and related closely with the embedding complexity.\", \"There is no analysis about the generalization when estimating this upper bound from finite samples.\", \"There are important details requiring further elaboration. So I recommend rejection.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your constructive comments. We would like to address your concerns as follows:\\n\\n(a) Our bound is tighter in some conditions. As we point out in definition 3, theFG\\\\DeltaG-divergence is smaller than the FG\\\\DeltaFG-divergence. Therefore, comparing to (4), if the lambda in (4) and (6) are small enough and the latent divergence is sufficiently minimized, our bound can be smaller than the original bound from Ben-David. \\n\\n(b) Thank you for pointing this out. Quantifying the sample complexity of the upper bound is indeed an interesting question, and it would be interesting to add a discussion about it.\\n\\n(c) If the inequalities are strict, minimizing domain-invariant loss in different layers might achieves similar performance. Although we did not theoretically prove it, the experiments reveal that the monotonicity could be strict in practice.\\n\\n(d) The encoder is restricted in the sense that restricting the encoder to align the distributions in each layer implicitly restricts the set of feasible encodings. While we do not theoretically quantify this effect, we do validate the effect empirically, in the sense that we observe good empirical results. We will use careful wording and explain this in more detail. \\n\\n(e) As stated above, this approach implicitly restricts the feasible set of embeddings to those that align the distributions well in all layers. Empirically we find an effect on the performance that is well visible.\\n\\nThank you for the pointer to [A]. We will cite and discuss this paper. As opposed to that work, our goal here is to find the most simple method to solve the layer-selection issue. Though MDM is simple, it resolves the layer selection problem we propose. We also provide a theoretical motivation.\\n\\n(f) We do use CNNs which is stated in the digit classification and object classification paragraph in section 6. \\n\\nThank you again for your suggestions.\\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your helpful suggestions. We would like to address your concerns as follows:\\n\\n1. Comparison to Previous Works\\n\\nThey and we are addressing a common issue, label consistency in domain-invariant representations, hence the seeming similarities. But all take a different perspective to focus on.\\nIn [1], they propose a lower bound on the target error. Again, they do not explicitly consider the embedding complexity. We point out that restricting the encoder is a necessary condition to achieve good performance in unsupervised domain adaptation.\\nIn [2], they provide an upper bound by leveraging the \\u201cconnectedness\\u201d in input space and the Lipschitzness of the encoder. However, the input space distance between source and target domains has to be small to make the bound tighter, which might not hold in the common domain adaptation benchmarks (e.g. MNIST->MNIST-M).\\n\\nIn comparison, the embedding complexity is not discussed in [1,2] and we believe that the insights from our theory and experiments are unique. With respect to the example, in contrast to [1-2], our example is motivated by the \\u201cembedding complexity\\u201d (main difference). For instance, in Figure 1, we compare two encoders with different complexity which is not shown in previous works.\\n\\n\\n2. About the upper bound\\n\\nIn most of the domain adaptation bounds (including [1,2,3]), the goal is to use the source error to bound the target error. Therefore, without assuming the source error is small, not only our bound, the bounds in [1,2,3] are also large since the first term of the bound is the source error itself. As a consequence, assuming the additional two terms R_S(f^{\\\\prime}g) are small is reasonable. Also, in most of the papers [1,2,3], the proposed bound can neither be computed or approximated. \\n\\nComparing to [3], our bound is tighter if the source error and the latent divergence is sufficiently minimized. As we point out in Definition 3, the FG\\\\DeltaG-divergence is smaller than the FG\\\\DeltaFG-divergence. Therefore, comparing to (4) (the bound of [3]), if the lambda in (4) and (6) are small enough and the latent divergence is sufficiently minimized in (6), our bound (6) is smaller than the original bound (4) in [3] . \\n\\n\\n3. Layer-wise Tradeoff\\n\\nIn [4, section 6.4], the performance is actually increasing along with the layer number and only decreases in the last layer. These results by themselves may be misleading for the questions we address, since it may look like increasing the number of layers will always improve performance. In our bounds and experiments, we observe that this is not the case, and increasing the layer number in the encoder can make the performance significant worse than the optimal case (Figure 4, (c)). As opposed to [4], we connect the results to theoretical results. We also include a different set of experiments. We will discuss this in more detail. \\n\\n\\nThank you again for your suggestions.\\n\\nThanks,\\nAuthors\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your constructive comments. Below we would like to address your concerns.\\n\\n1. The upper bound\\n\\nOur bound is not looser. As we point out in the paper, the lambda in equation (3) can be arbitrarily large when the label is not consistent. It is not clear to us that why eqn (3) is \\u201cmore reasonable\\u201d? We propose a term that includes the embedding complexity explicitly. We admit that complexity bounds for neural networks can be high, theoretically. Yet, our experiments show that the trends indicated in our bounds indeed appear to occur in practice.\\n\\nComparing to [1][2][3], we analyze the problem from the perspective of embedding complexity.\\nThe bound in [1] does not explain the effect of the encoder, while our bound expresses it explicitly. In addition, [1] relies on common support, which does not hold for popular domain adaptation benchmarks (MNIST -> SVHN, Office31). We do not need such assumptions. \\nIn [2], they propose a lower bound on the target error. Again, they do not explicitly consider the embedding complexity. We point out that restricting the encoder is a necessary condition to achieve good performance in unsupervised domain adaptation.\\nIn [3], they address the label consistent problem with generates examples to fill in the gap between the source and target domains.\\nIt seems that in [1,2,3], the embedding complexity is not discussed and we believe that the insights from our theory and experiments are unique. We will add more discussion of related work to the paper. With regard to proposition 5, we admit that it is a generalization of it, and we will add that to the paper. However, we use the result to explain the layer-wise trade-off, which is different from [1].\\n\\n\\n2. Implicit regularization\\n\\nThe encoder is restricted in the sense that it has to learn aligned embeddings in all layers, instead of being free to choose arbitrary ones. While we do not theoretically quantify this effect, we do validate the effect empirically, in the sense that we observe good empirical results. We will use careful wording and explain this in more detail. \\n\\n\\n3. The proposed method\\n\\nIndeed, similar approaches can be seen in other papers. However, our goal here is to find the simplest method to solve the layer-selection issue. Though MDM is simple, it resolves this problem, with a theoretical motivation. We will add the references and discuss related work in greater detail.\\n\\nThank you again for your suggestions.\\n\\nThanks,\\nAuthors\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies the impact of embedding complexity on domain-invariant representations. By incorporating embedding complexity into the previous upper bound explicitly, the authors demonstrate the limitations of previous theories and algorithms. Based on their theoretical findings, the authors propose to control the embedding complexity with implicit regularization. Specifically, aligning source and target feature distributions in multiple layers controls both embedding complexity and domain discrepancy. The proposed algorithm can achieve similar performance as DANN with manual selection of embedding depth.\\n\\nBy noting that the hypothesis space can be decomposed in to the feature extractor and the classifier, the authors propose to address the domain discrepancy separately. D_H\\\\DeltaH is termed latent divergence, which the algorithm attempts to minimize. D_G\\\\DeltaG is treated as embedding complexity, which is the intrinsic property of the feature extractor. Thus, domain-invariant representations should seek a proper tradeoff between those two terms. \\n\\nThe paper is well-written and the contributions are stated clearly. The exploration on the layer division is really insightful.\\n\\nHowever, I have several concerns: \\n1.\\tThe proposed upper bound is insightful, but it has several limitations. Compared to the version applied to the feature space in equation (3), the proposed upper bound is looser. The embedding complexity terms includes two encoders, which are deep neural networks in practice, thus it can be excessively large. As the authors point out, in equation (3), the embedding complexity is not addressed explicitly, but it is implicit in the adaptability \\\\lambda in a more reasonable way. Previous works [1], [2], [3] have already taken them into consideration. Proposition 5 is a direct application of proposition 1 in [1].\\n2.\\tOn the claim of implicit regularization. By applying domain adversarial training to multiple layers, the authors claim that the encoder in higher layers is implicitly restricted. However, they do not validate this regularization effect. Is the embedding complexity controlled? Theoretical analysis or experimental results would be helpful.\\n3.\\tThe proposed MDM method seems to be incremental. [4] has probed into the effect of multi-layer adaptation strategy. Besides, applying domain adversarial training to many layers leads to more computational cost and may slow down training significantly. \\n\\n\\n[1]Fredrik D Johansson, Rajesh Ranganath, and David Sontag. Support and invertibility in domain- invariant representations. arXiv preprint arXiv:1903.03448, 2019.\\n[2]Han Zhao, Remi Tachet des Combes, Kun Zhang, and Geoffrey J Gordon. On learning invariant representation for domain adaptation. arXiv preprint arXiv:1901.09453, 2019.\\n[3] Hong Liu, Mingsheng Long, Jianmin Wang, and Michael Jordan. Transferable adversarial training: A general approach to adapting deep classifiers. In International Conference on Machine Learning, pp. 4013\\u20134022, 2019.\\n[4] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. 
In Proceedings of the 32nd International Conference on International Conference on Machine Learning, volume 37, pp. 97\\u2013105, 2015.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the problem of domain adaptation via learning invariant representations. The main argument here is that when the total depth of layers in a neural network is fixed, tradeoffs exist between feature alignment and prediction power. Furthermore, the authors argue that richer feature extractor can sometimes significantly overfit the source domain, leading to a large risk on the target domain.\\n\\nOverall the paper is well-written and easy to follow. My major concern is that the paper, including the motivation and illustrative example, are too similar to previous work [1-2]. More detailed discussions are needed to highlight the difference of this work compared with [1-2]. The main contribution lies in Theorem 4. However, the upper bound is both loose and misleading. Compared with the original generalization upper bound [3], the one proposed in this paper contains a constant $\\\\lambda$ that contains FOUR optimal error terms. Note that the original one in [3] only contains two such terms. In fact even a bound containing 2 such terms could potentially be very loose, since it's perfectly fine that a hypothesis can have large risk on the source domain while still attaining a small risk on the target domain. The bound is misleading in the sense that this $\\\\lambda$ term cannot be computed or approximated, hence only the first two terms in (6) could be minimized in practice. However, this again can potentially lead to large target risk when the label distributions of source and target domains differ. \\n\\nThe experiments on using different number of layers of the network as feature extractors are quite interesting. The main message here is that general tradeoff exists with richer encoding function class. However, similar phenomenons have already been observed [4, Section 6.4], and it's not clear to me what's new here. \\n\\n[1]. On Learning Invariant Representations for Domain Adaptation, ICML 2019.\\n[2]. Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment, ICML 2019.\\n[3]. Analysis of representations for domain adaptation, NIPS 2007.\\n[4]. A DIRT-T APPROACH TO UNSUPERVISED DOMAIN ADAPTATION, ICLR 2018.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new theory for domain adaptation considering the complexity of representation extractors. This paper gives a new bound for target error in domain adaptation, which contains the classic distribution distance related to the hypothesis space of high-level classifiers and a new distribution distance defined on the embedding space. This paper also proposes Multilayer Divergence Minimization algorithm based on the theory and evaluates it on real-world dataset.\", \"positive_points\": \"(a) This paper proposes an interesting insight that the complexity of embeddings is also important in domain adaptation.\\n(b) This paper defines a new distribution divergence and build an interesting theory based on it.\\n(c) The proposed algorithm could automatically reach the best result of trying DANN on each layer.\", \"negative_points\": \"(a) There is no proof that this new bound is better than classic domain adaptation theory (Ben-David et al., 2010). Although this bound involves new insight, the novelty is limited if it is looser than existing upper bound. Furthermore, there are no creative tools in the mathematical proof part, which is a direct extension of the classic theory. \\n(b) There is no analysis about the generalization when estimating this upper bound from finite samples. It could be easily seen that the sample complexity of embedding complexity is at least of the same order than classic \\\\mathcal{H}\\\\Delta\\\\mathcal{H}-divergence (Ben-David et al., 2010).\\n(c) The analysis on the monotonicity of the divergences across the layers is very limited. It will be better if there is a discussion about when the monotonicity is strict.\\n(d) What is the role of embedding complexity in the algorithm? It seems that only high-level classifier divergence is minimized.\\n(e) Why minimizing the sum of divergences computed on all layers can control the proposed upper bound? It seems that if the embedding complexity of each layer is a constant, minimize divergence of a single layer can further minimize the minimum. Furthermore, there are previous method that minimizes divergences on all layers [A]. Please give a discussion on this method.\\n(f)The empirical evaluation is relatively weak. There is no experiment based on convolutional networks, which are widely used on the Digit and Office-31 datasets.\\n\\nAlthough the insight is interesting, the novelty of this paper is not enough for being accepted by ICLR. So I vote for rejecting this submission. \\n\\n[A] Zhang, Weichen, et al. \\\"Collaborative and adversarial network for unsupervised domain adaptation.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\"}"
]
} |
SklwGlHFvH | Learning Curves for Deep Neural Networks: A field theory perspective | [
"Omry Cohen",
"Or Malka",
"Zohar Ringel"
] | A series of recent works established a rigorous correspondence between very wide deep neural networks (DNNs), trained in a particular manner, and noiseless Bayesian Inference with a certain Gaussian Process (GP) known as the Neural Tangent Kernel (NTK). Here we extend a known field-theory formalism for GP inference to get a detailed understanding of learning curves in DNNs trained in the regime of this correspondence (NTK regime). In particular, a renormalization-group approach is used to show that noiseless GP inference using NTK, which lacks a good analytical handle, can be well approximated by noisy GP inference on a related kernel we call the renormalized NTK. Following this, a perturbation-theory analysis is carried out in one over the dataset size, yielding analytical expressions for the (fixed-teacher/fixed-target) leading and sub-leading asymptotics of the learning curves. At least for uniform datasets, a coherent picture emerges wherein fully-connected DNNs have a strong implicit bias towards functions which are low order polynomials of the input. | [
"Gaussian Processes",
"Neural Tangent Kernel",
"Learning Curves",
"Field Theory",
"Statistical Mechanics",
"Generalization",
"Deep neural networks"
] | Reject | https://openreview.net/pdf?id=SklwGlHFvH | https://openreview.net/forum?id=SklwGlHFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CCZJxNqrtG",
"HylTavYKor",
"Bkx25PKFsr",
"H1lEP8ttiS",
"HyeI7HFKiS",
"ByxzbYLB5r",
"Ske_7mrk5S",
"r1eXnS1RKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742514,
1573652421244,
1573652372357,
1573652060354,
1573651742237,
1572329722484,
1571930911777,
1571841451116
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2177/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2177/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2177/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2177/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2177/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2177/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2177/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies deep neural network (DNN) learning curves by leveraging recent connections of (wide) DNNs to kernel methods such as Gaussian processes.\\n\\nThe bulk of the arguments contained in this paper are, thus, for the \\\"kernel regime\\\" rather than \\\"the problem of non-linearity in DNNs\\\", as one reviewer puts it. \\nWhen it comes to scoring this paper, it has been controversial. However a lot of discussion has taken place. On the positive side, it seems that there is a lot of novel perspectives included in this paper. On the other hand, even after the revision, it seems that this paper is still very difficult to follow for non-physicists. \\n\\nOverall, it would be beneficial to perform a more careful revision of the paper such that it can be better appreciated by the targeted scientific community.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review Response\", \"comment\": \"We appreciate the time spend by the referee on reviewing our work. We are happy to he or she found it interesting. Indeed the presentation of the previous version has been less tailored for a machine learning audience. Accordingly we have made major changes to the presentation to make it more widely appealing. Specifically: We emphasize the results over the tools whenever appropriate and clarify all definitions. We also compartmentalized the field-theory section to a particular sub-section which can be skipped without comprising the message and tried show the derivation road map in smaller increments. We also included a clearer explanation of the main results, the experiment, and the intuition regarding the renormalized NTK.\\n\\nFollowing these changes, the basic results of our work should stand-out more clearly. We re-state them now to stress their wider-ML-audience appeal \\n\\n1. On uniform datasets, deep fully connected networks are not a black-box. One can understand what they do very accurately with pen and paper. \\n2. Noiseless GP inference can be mapped onto noisy GP inference by trimming the Taylor expansion, in what can be thought-of as a form of renormalization where one coarse grains the angular resolution. The NTK kernels are effective in creating a lot of noise following this normalization. \\n3. As now explained more clearly in Sec. 6, deep fully connected networks trained in the NTK regime (with infinite number of parameters) do not suffer from over fitting due to an implicit bias to low order polynomials. \\n4. Field theory tools can provide an accurate formalism to analyze DNNs. \\n\\nWe believe this work and other physics-style computations submitted to ML conference, can safely find their place more specialized physics journals. However we feel that it is important to try and prevent an \\\"eco chamber\\\" of physicists talking about ML and an ML community disconnected from physics tools and methodologies. Given the other reviews, the referee is in a unique position to make this call.\"}",
"{\"title\": \"Review Response Part II\", \"comment\": \"R. In appendix D, it is mentioned that you need averaging over about 10 samples to have a decent average. For a single realization of a N-sized training set, there is additional variation (Adding or subtracting to the error epsilon). Given actual experiments are typically performed for a single realization of the data, I think this point should be mentioned in the main text more explicitly. Ideally, you could add error bars to the data, accounting for the dispersion inherent to a single-realization case.\\n\\nA. The error bars would be of the order of 0.001\\\\% and completely invisible in our graph. The reason is that while we only take 10~ samples per dataset size (and consequently the dataset averaged generalization error does have a noticeable 5\\\\% relative error), we also perform the Poisson averaging over many datasets of similar size. This greatly suppresses the errors. We now address this point explicitly in Sec 6 (in addition to demonstrating it in appendix A).\\n\\nR. In early section 3, a short definition of a GP should be provided (there, or before).\\n\\nA. Done. See Sec. 3.1\\n\\nR. Around Eq. 2, you should specify the interpretation of $P_0[f]$\\n\\nA. Done. See Sec. 3.2\\n\\nR. After Eq. 3, \\u2018where F is some functional of f.\\u2019 I would add: \\u2018[where F is some functional of f,] for instance Eq. 2.\\u2019\\nThe derivation of eq 6 is not obvious. You do detail it in the appendix, but forgot to cite the appendix !\\n\\nA. Done. See Sec. 3.2.\\n\\nR. \\u2018Notably none of these cited results apply in any straightforward manner in the NTK-regime.\\u2019 : could you quickly explain why (no matched priors ? Noise ?) \\n\\nA. When preparing the original manuscript we tediously went through all these cited results as explained in the prior works section. The main issue is that very few treat the noiseless case beyond one dimension and two dimensional settings. The one work we know which makes predictions at higher dimension (Sollich (2001)) works in the teacher-averaged predictions (rather than fixed teacher), makes assumptions on some level of matching between the teacher-prior and the GP-prior, and doesn't lead to explicit expression. \\n\\nR. \\u2018\\u2018The $d^{-l}$ scaling of eigenvalues\\u2019\\u2019 : at this point, the variable \\u2018l\\u2019 had not been defined.\\n\\nA. Fixed. \\n\\nR. \\u2018\\u2018notably cross-talk between features has been eliminated\\u2019\\u2019 : has it been eliminated or does it simply become constant ? \\n\\nA. It has been eliminated since only $g_i$ affects the coefficient of $\\\\phi_i(x)$ in the predictions. Learning becomes diagonal in feature space. \\n\\nR. \\u20183% accuracy\\u2019 [relating to figure 1]\\nI understand the idea but accuracy seems misleading. I would replace everywhere with something like \\u2018relative mismatch\\u2019. OR explain better why you call this accuracy: usually a high accuracy is preferred, and here you are proud with this very low imprecision of 3%\\n\\nA. We thank for the referee for this comment. This has been changed throughout the text. \\n\\nR. \\u2018\\u2019Taking the leading order term one obtains the aforementioned EK result with N\\u2019\\u2019 : maybe (just a suggestion here) you could recall it here, given it was in page 1 (and in-line).\\n\\nA. Aiming to make it less dense, the new introduction doesn't include the explicit expression for the EK result. \\n\\nR. 
Appendix B: could you explain why this difference increases with N ? I would have expected this kind of quantity to decrease with N.\\n\\nA. As is explained more clearly in Sec. 5, while the off-diagonal terms are small, there are more and more of them as the dataset size increases (and hence the size of the matrix K(D)). Collecting all these omitted contributions to the prediction, they would appear, roughly, as a random matrix multiplying the target vector in Eq. (1). Therefore they would sum together (incoherently/with-alternating-signs) and give an error which increases with $N$. A different viewpoint is that the larger one takes $N$, the finer the features one can resolve, and hence neglecting the high angular momentum spherical harmonics becomes less and less adequate. \\n\\nR. All later comments (typos and related mistakes)\\n\\nA. We thank the referee for paying such detailed attention to our appendices. Most of these points have been addressed in the revised version.\"}",
"{\"title\": \"Review Response Part I\", \"comment\": \"We appreciate the time spent by the referee on reviewing our work. We are happy to he or she found it interesting. The referee, along with the other referees, made several important comments regarding presentation which we took very seriously. We believe the revised manuscript delivers the message much more effectively and for a wider machine learning audience. Specifically: We emphasize the results over the tools whenever appropriate and clarify all definitions. We also compartmentalized the field-theory section to a particular sub-section which can be skipped without comprising the overall message. We also included clearer explanation of the experiment and extended the manuscript by one page, to 9 pages.\\n\\nWe next address the referee's specific question/comments below. \\n\\nR. I would like the paper to present more explicitly how the regression target labels g(x) are generated. Maybe it is said but I couldn't easily understand, for sure, how they are generated.\\n\\nA. A far more detail explanation of the experiment now appears in Sec. 6. \\n\\nR. Also, please explain early enough what is meant by uniform dataset (I understood it simply means the data x is drawn uniformly at random over a manifold, here this manifold is often the d-dimensional hypersphere).\\n\\nA. This is correct and now appears in the introduction. \\n\\nR. Claim II states that \\u2018...lead to clear relations between deep fully-connected networks and polynomial\\nregression\\u2019\\u2019. This is, I believe, supported by theoretical proof and numerical double-check, however it is not discussed enough for the abstract\\u2019s promise to be fulfilled \\u2018a coherent picture emerges wherein fully-connected DNNs ...\\u2019.\\nI think this point deserves a more ample discussion in section 7.\\n\\nA. Section 6. includes an additional paragraph which explains this point by combining the ideas of previous section and the bound we obtain on the eigenvalues of any renormalized NTK. \\n\\nR. More generally, the claims in the introduction or at the end of section 3 are stated rather explicitly, but very densely, and the careful reader can get the hypothesis of each result from the text. \\nHowever for the sake of ease of read of less patient readers, I think it would be appropriate to have, somewhere, a more self-contained description of the results\\u2019 list. This paper is technical and some readers will be interested of simply knowing the hypothesis made and type of results obtained.\\nFor instance, the sentence \\u2018\\u2018They [results] hold without any limitations on the dataset or the kernel and yield a variant of the EK result along with its sub-leading correction.\\u2019\\u2019 is misleading: as stated in the previous sentence in the text, this is for the fixed-teacher learning curve, etc.\\n\\nA. We have improved our results list, disentangled the derivations from the results (see for instance sub-section 3.3). Regarding fix-teacher versus average, we agree with the referee on the technical point however note that fix-teacher learning curves are typically harder to obtain compared to ones derived from say a different Gaussian prior. More specifically it is very easy now to take our expressions for the MSE error and average them over any teacher prior for which $\\\\langle g_n g_m \\\\rangle_{Teachers}$ in known. This is because $g$ appears linearly in all of our predictions. R. 
Please try to explain a bit more the intuition behind renormalization / trimming terms q>r (r integer fixed, the higher the less approximated). More specifically, it is not very clear to me how it can be interpreted in terms of how we look at the data. You mention $(x\\\\cdot x\\u2019)^r$ being negligible or not depending on r, d etc., but I wonder if there is some kind of simple \\u2018geometrical\\u2019 interpretation (is it a coarse graining of the data in angular space, the \\u2018high energy\\u2019 eigenvalues corresponding to the high frequency, high resolution distinction between very close angles?). On that point I am a bit lost and it\\u2019s a pity because your results are strong and rely on few, rather simple/elegant assumptions (which call for some intuitive understanding).\\n\\nA. We perfectly agree with the referee's geometric interpretation. Section 5 now contains a paragraph discussing the intuition behind our renormalization group approach. \\n\\nR. Could you explain intuitively, to the inexperienced reader, why the noiseless case is harder to deal with than the finite-noise one?\\n\\nA. Such an intuition now appears in the text, below Eq. (4). It relies on the notion that hard constraints are typically less tractable than soft constraints, and similarly that averaging makes problems easier as it reduces the amount of information.\"}",
"{\"title\": \"Review Response\", \"comment\": \"We appreciate the time spent by the referee on reviewing our work. It is unfortunate that we couldn't communicate it better. However the reviewer may get a sense from the other referees' views, that there is a unique result \\\"hidden\\\" in this text. This result required much analytical effort and several novel ideas (such as the renormalized NTK). Indeed it is often said that deep neural networks are a black-box whereas, in this fairly complicated setting, we understand almost exactly what they are doing using only pen and paper. The bigger promise here is that field-theory can deliver a concrete, detailed, and accurate formalism for reasoning about DNNs. The notion of adding noise to a GP by trimming its Taylor expansion may also resonate with a wider audience.\\n\\nRegarding presentation, we concede that the delivery of many parts was sub-optimal and emphasized techniques over results. In addition more attention should have been placed on making the definitions clearer. We have therefore made various changes to the presentation: We emphasize the results over the tools whenever appropriate and clarify all definitions. We also compartmentalized the field-theory section to a particular sub-section which can be skipped without compromising the overall message. \\nA more persistent hurdle in effectively communicating our results, is that we use physics tools and methodologies: We make reasonable assumptions which lead to experimentally verifiable/falsifiable theoretical predictions which we then proceed to test. We also use tools like field-theory, which have no axiomatic basis, because past experience in particle physics convinced us that they perform well and similarly important - that they are insightful. Although this puts strains on a reader coming from a different background, we believe it would at the long run, benefit the machine learning community as a whole. We hope that the referee accepts this as reasonable. \\n\\nAssuming we haven't used up all the referee's patience, we hope she or he would be willing to re-review the revised version. \\n\\nRegarding the referee's specific question \\n\\n\\\"It seems that the authors claimed that both EK and SL give approximation error O($1/N^3$). Then why SL is the \\\"sub-leading asymptotics\\\". \\n\\nThe expression we refer to as EK and SL have now been clearly defined in the text. To focus the discussion let's assume a target function which has a finite number of non-zero $g_n$'s. One can see that the ($g^{\\\\star}_{EK,\\\\eta}-g$) (the error in the EK results) has a leading power of $O(1/N)$, whereas the $SL$ term has a leading power of $O(1/N^2)$.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper used the field-theory formalism to derive two approximation formulas to the expected generalization error of kernel methods with n samples. Experiments showed that the sub-leading approximation formula approximates the generalization error well when $n$ is large.\\n\\n\\nThis paper is poorly written. Many mathematic notations and terminologies are not well-defined. The setup of the experiments are not given clearly. Here I gave some examples: \\n1) The authors claimed that they derived two approximation formulas, EK and SL. I didn't find a clear statement in the main text saying which formula is the EK approximation and which formula is the SL approximation. My conjecture is that, Eq. (10) gives the EK formula, and SL formula is not given in the main text. In addition, Eq. (10) is confusing because the authors wrote that Eq. (10) is a simplification of Eq. (8). However, Eq. (8) was an approximate equality, then Eq. (10) turned into an equality. \\n2) Figure 1 gives experimental results. However, the description of the experimental setup is completely vague. The authors described the kernel and the target function as \\\"the NTK kernel implied by a fully connected network of depth 4 with $\\u03c3_w^2 = \\u03c3_b^2 = 1$ and ReLU activations\\\" and \\\"a target function with equal spectral weights at $l = 1, 2$\\\", without other explanations. I don't think readers can figure out what is exactly the kernel and the target function from this description. \\n3) I am concerned about the writing style of this paper. I am OK with the physics jargon the authors used in the paper, as well as the non-rigorous of the result. But I think the authors should write equations in a clear way. For example, the definition of renormalized NTK should better be defined in equations such as $K_r(x, x') = \\\\sum_{k = 0}^r b_k <x, x'>^k$, rather than be described in words like \\\"trim after the r\\u2019th power\\\".\", \"here_is_a_technical_question\": \"- It seems that the authors claimed that both EK and SL give approximation error O(1/N^3). Then why SL is the \\\"sub-leading asymptotics\\\"? \\n\\n\\nI feel the content of the paper is somewhat interesting. However, the paper is poorly written. The authors failed to deliver effective scientific communication to the readers. The results cannot be reproduced after reading this paper. Therefore, I would give a clear reject. \\n\\n\\n---------\", \"after_reading_the_response_and_the_revised_paper\": \"I found that the authors modified and improved their manuscript a lot. They made much effort to address the issues I raised. This is why I think I can potentially raise my score to a weak rejection. \\n\\nHowever, the modifications made by the authors are still not sufficient. For example, I asked the authors in my review to clarify what is the target function for the experiments. The authors now write in the paper \\\"We consider input data in dimension d = 50 and a scalar target function $g(x) = \\\\sum_{l=1,2;m} g_{lm}Y_{lm}(x)$ such that $\\\\sum_{l=1, m} g_{lm}^2 = \\\\sum_{l=2, m} g_{lm}^2 = 1/2$, but otherwise iid $g_{lm}$\\u2019s.\\\" I believe that a (random) target function that satisfies all these conditions doesn't exists. 
I guess what the authors want to say is something like \\\"taking $g(x) = \\\\sum_{l=1, 2} \\\\sum_{m = 1}^{M_l} g_{lm}Y_{lm}(x)$, $(g_{11}, ..., g_{1 M_1}) \\\\sim Unif(S^{M_1 - 1}(1/\\\\sqrt 2))$, and $(g_{21}, ..., g_{2 M_2}) \\\\sim Unif(S^{M_2 - 1}(1/\\\\sqrt 2))$\\\". The problem with the authors' statement is that, if $\\\\sum_{m=1}^{M_1} g_{1m}^2 = \\\\sum_{m = 1}^{M_2} g_{2m}^2 = 1/2$, $(g_{lm})_{l = 1, 2; m \\\\in \\\\{1, \\\\ldots, M_l \\\\}}$ cannot be i.i.d. (one choice is to make $g_{11} = ... = g_{1M_1} = \\\\sqrt{1/(2 M_1)}$ and $g_{21} = ... = g_{2 M_2} = \\\\sqrt{1/(2 M_2)}$ be deterministic, but they are unequal). This is just one example of the writing problems of the paper. There are many other issues. \\n\\nI doubt this paper could be easily accepted even at a physics venue. I have used physics tools like replica methods, and I know some physicists who have published in machine learning conferences. The papers these physicists wrote deliver clear scientific communication, though they also use jargon and non-rigorous tools. There are many papers using physics tools to study machine learning problems which were published at ML conferences like ICLR, ICML, and NeurIPS. This paper is far less accessible than those papers. \\n\\nFinally, I want to point out that the generalization of kernel methods has been intensively studied in the machine learning literature, for example using the RKHS theory. It would be nice to cite related literature and compare the results. It is my fault that I didn't bring this point up in my review. \\n\\nI agree that there could potentially be great ideas in this paper. The conference is a venue with quality control. I encourage the authors to submit this paper again after they make more effort to improve its accessibility.\"}",
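The construction the reviewer suggests is easy to realize numerically; a minimal numpy sketch (ours; the block sizes M1, M2 are illustrative, e.g. the counts of l=1 and l=2 spherical harmonics for d=50):

```python
import numpy as np

# Sample (g_{l1}, ..., g_{lM_l}) ~ Unif(S^{M_l - 1}(1/sqrt(2))) for l = 1, 2:
# a Gaussian vector rescaled to squared norm 1/2. This enforces
# sum_m g_{1m}^2 = sum_m g_{2m}^2 = 1/2 exactly, at the cost of the entries
# not being independent -- precisely the point made in the review above.
rng = np.random.default_rng(0)

def sample_block(M):
    v = rng.standard_normal(M)
    return v / np.linalg.norm(v) / np.sqrt(2.0)

M1, M2 = 50, 1274          # assumed harmonic counts for l = 1, 2 when d = 50
g1, g2 = sample_block(M1), sample_block(M2)
print(np.sum(g1**2), np.sum(g2**2))           # both 0.5 up to float error
```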
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This theoretical paper exploits a recent rigorous correspondence between very wide DNNs (trained in a certain way) and Neural Tangent Kernel (a case of Gaussian Process-based noiseless Bayesian Inference). \\nA field-theory formalism was developed for Gaussian Processes (2001). Here it is thus extended to the NTK case. There are 3 important theoretical results which are both proven and backed by numerical confirmation. These results, in particular the first, provide a very accurate prediction for the learning curve of some models. The paper is well situated within this literature. I am not very knowledgeable about NTK or even Gps, however I understand the challenges of understanding DNNs and I am familiar with field theory and renormalization group.\\nGiven the importance and quality of the results, and the overall quality and clarity of this (dense) paper, I recommend acceptation without hestiation.\\n\\nThere are a couple of points however that could be improved, that would make the paper more useful for the community.\\nGiven the density of results in the paper, I would relax the length constraint, allowing up to 9 or 10 pages if possible, to add more explanations (not computations).\\n\\n\\nI would like the paper to present more explicitly how the regression target labels g(x) are generated. Maybe it is said but I couldn\\u2019t easily understand, for sure, how they are generated.\\n\\nAlso, please explain early enough what is meant by uniform dataset (I understood it simply means the data x is drawn uniformly at random over a manifold, here this manifold is often the d-dimensional hypersphere).\\n\\nClaim II states that \\u2018...lead to clear relations between deep fully-connected networks and polynomial\\nregression\\u2019\\u2019. This is, I believe, supported by theoretical proof and numerical double-check, however it is not discussed enough for the abstract\\u2019s promise to be fulflled \\u2018a coherent picture emerges wherein fully-connected DNNs ...\\u2019.\\nI think this point deserves a more ample discussion in section 7.\\n\\nMore generally, the claims in the introduction or at the end of section 3 are stated rather explicitly, but very densely, and the careful reader can get the hypothesis of each result from the text. \\nHowever for the sake of ease of read of less patient readers, I think it would be appropriate to have, somewhere, a more self-contained description of the results\\u2019 list. This paper is technical and some readers will be interested of simply knowing the hypothesis made and type of results obtained.\\nFor instance, the sentence \\u2018\\u2018They [results] hold without any limitations on the dataset or the kernel and yield a variant of the EK result along with its sub-leading correction.\\u2019\\u2019 is misleading: as stated in the previous sentence in the text, this is for the fixed-teacher learning curve, etc.\\n\\nPlease try to explain a bit more the intuition behind renormalization / trimming terms q>r (r integer fixed, the higher the less approximated). More specifically, it is not very clear to me how it can be interpreted in terms of how we look at the data. 
You mention $(x\\\\cdot x\\u2019)^r$ being negligible or not depending on r, d etc., but I wonder if there is some kind of simple \\u2018geometrical\\u2019 interpretation (is it a coarse graining of the data in angular space, the \\u2018high energy\\u2019 eigenvalues corresponding to the high frequency, high resolution distinction between very close angles?). On that point I am a bit lost and it\\u2019s a pity because your results are strong and rely on few, rather simple/elegant assumptions (which call for some intuitive understanding).\\n\\nCould you explain intuitively, to the inexperienced reader, why the noiseless case is harder to deal with than the finite-noise one?\\n\\nIn appendix D, it is mentioned that you need averaging over about 10 samples to have a decent average. For a single realization of an N-sized training set, there is additional variation (adding or subtracting to the error epsilon). Given actual experiments are typically performed for a single realization of the data, I think this point should be mentioned in the main text more explicitly. Ideally, you could add error bars to the data, accounting for the dispersion inherent to the single-realization case.\\n\\nIn early section 3, a short definition of a GP should be provided (there, or before).\\n\\nAround Eq. 2, you should specify the interpretation of P_0[f]\\n\\nAfter Eq. 3, \\u2018where F is some functional of f.\\u2019 I would add: \\u2018[where F is some functional of f,] for instance Eq. 2.\\u2019\\nThe derivation of Eq. 6 is not obvious. You do detail it in the appendix, but forgot to cite the appendix!\\n\\n\\u2018Notably none of these cited results apply in any straightforward manner in the NTK-regime.\\u2019: could you quickly explain why (no matched priors? Noise?) \\n\\n\\u2018\\u2018The d^-l scaling of eigenvalues\\u2019\\u2019: at this point, the variable \\u2018l\\u2019 had not been defined.\\n\\n\\u2018\\u2018notably cross-talk between features has been eliminated\\u2019\\u2019: has it been eliminated or does it simply become constant? \\n\\n\\u20183% accuracy\\u2019 [relating to figure 1]\\nI understand the idea but accuracy seems misleading. I would replace it everywhere with something like \\u2018relative mismatch\\u2019. OR explain better why you call this accuracy: usually a high accuracy is preferred, and here you are proud of this very low imprecision of 3%\\n\\n\\u2018\\u2019Taking the leading order term one obtains the aforementioned EK result with N\\u2019\\u2019: maybe (just a suggestion here) you could recall it here, given it was on page 1 (and in-line).\", \"appendix_b\": \"could you explain why this difference increases with N? I would have expected this kind of quantity to decrease with N.\", \"appendix_f\": \"there are typos in the r.h.s. in the first line.\\n\\\\sum_j f_j \\\\phi_j (I think).\", \"appendix_g\": \"you denote \\\\partial / \\\\partial \\\\alpha for the functional derivative. I would replace it with \\\\delta to stress that it is a functional and not a regular derivative.\\n\\nBeyond appendix G.1: I confess I didn\\u2019t have time to read it.\\n\\nDespite the overall quality of the text, there are a number of wrong singular/plural matchings, which can easily be corrected. Here are some of them, with other typos as well: \\n\\u2018Furthermore since our aim was to predict what the DNNs would predict rather [THAN?] reach SOTA predictions\\u2019\\n\\n\\u2018a factor of a factor\\nof about 3.\\u2019\\n\\nas do for \\u2013 > as we do for\\n\\nuniformally - > uniformly\"}",
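Both reviews probe the "trimmed" kernel; using the explicit notation suggested in Review #4, K_r(x, x') = sum_{k=0}^r b_k <x, x'>^k, a toy numpy sketch of the coarse-graining reads (the coefficients b_k are an assumption, chosen here as those of an exponential kernel):

```python
import numpy as np
from math import factorial

# Toy "renormalized" kernel in Review #4's suggested notation: keep only the
# first r powers of <x, x'>. High powers resolve fine angular structure, so
# trimming acts as a coarse-graining in angle, matching the geometric picture
# asked about in Review #3.
def trimmed_kernel(X, Y, b, r):
    G = X @ Y.T                                # Gram matrix of inner products
    return sum(b[k] * G**k for k in range(r + 1))

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # data on the unit sphere
b = [1.0 / factorial(k) for k in range(6)]     # e.g. exp(<x, x'>) coefficients
K2 = trimmed_kernel(X, X, b, r=2)              # low angular resolution
K5 = trimmed_kernel(X, X, b, r=5)              # closer to the full kernel
print(np.max(np.abs(K5 - K2)))                 # contribution of powers 3..5
```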
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper explores how tools from perturbative field theory can be used to shed light on properties of the generalization error of Gaussian process/kernel regression, particularly on how the error depends on the number of samples N. For uniform data on the sphere, a controlled expansion is obtained in terms of the eigendecomposition of the kernel. Although the expansion breaks when the noise term goes to zero, a renormalized kernel is introduced for which an accurate perturbative expansion is possible. A variety of empirical results confirm the theoretical analysis.\\n\\nThe results presented here are interesting and I particularly liked the introduction of the renormalized kernel to study the noiseless case. The agreement in Fig 1 is quite impressive, and the improvements relative to naive 1/sqrt(N) scaling as highlighted in App. C shows the power of the approach. The topic is salient and will interest most theoretically-minded researchers, and I think there is an abundance of new ideas and novel content. My only real concern is with the presentation.\\n\\nThis is a fairly technical paper that utilizes a substantial amount of physics jargon and many loose, hand-wavy arguments that rely on significant amount of prior field theory knowledge. I suspect that only a small fraction of the community will have the adequate background to get much out of this paper. For publication in a machine learning conference, I think more effort should be devoted to speaking to the machine learning audience. Some ways to achieve this might include reorganizing the technical points into bite-size chunks, laying out a roadmap for the main calculations and results, highlighting the important takeaways, including more figures, and concretely emphasizing the connections to practice and prior work.\\n\\nOverall, I am a bit on the fence, but leaning towards rejection for the above reasons. I could be convinced to increase my score if I am reassured that non-physicists are able to follow the arguments and find this paper interesting.\"}"
]
} |
HklPzxHFwB | Zero-Shot Policy Transfer with Disentangled Attention | [
"Josh Roy",
"George Konidaris"
] | Domain adaptation is an open problem in deep reinforcement learning (RL). Often, agents are asked to perform in environments where data is difficult to obtain. In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment. The gap between visual observations of the source and target environments often causes the agent to fail in the target environment. We present a new RL agent, SADALA (Soft Attention DisentAngled representation Learning Agent). SADALA first learns a compressed state representation. It then jointly learns to ignore distracting features and solve the task presented. SADALA's separation of important and unimportant visual features leads to robust domain transfer. SADALA outperforms both prior disentangled-representation based RL and domain randomization approaches across RL environments (Visual Cartpole and DeepMind Lab). | [
"Transfer Learning",
"Reinforcement Learning",
"Attention",
"Domain Adaptation",
"Representation Learning",
"Feature Extraction"
] | Reject | https://openreview.net/pdf?id=HklPzxHFwB | https://openreview.net/forum?id=HklPzxHFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"NEBj3SHKM0",
"HJgQJGREsH",
"SyxgXjaVir",
"BJxFF8pEjS",
"S1xiYtYZ9S",
"H1gu5-h6Yr",
"HyxtTsDjKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742475,
1573343706919,
1573341976275,
1573340800792,
1572080003331,
1571828111937,
1571679169084
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2176/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2176/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2176/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2176/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2176/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2176/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new method for zero-shot policy transfer in RL. The authors propose learning the policy over a disentangled representation that is augmented with attention. Hence, the paper is a simple modification of an existing approach (DARLA). The reviewers agreed that the novelty of the proposed approach and the experimental evaluation are limited. For this reason I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Responses to your comments & Revision uploaded\", \"comment\": \"Thank you for your review.\\n\\nI have uploaded a revision which addresses your comments and also respond to them here.\", \"limited_applicability_of_the_proposed_methods\": [\"I have now made my focus on this setting clear in the introduction. Other work has shown success in the problem of transferring to domains with differing dynamics. Thus, I focus on the orthogonal (and yet unsolved) problem of visual domain transfer.\", \"In the revised introduction and related work section, I show the importance of learning a dis-entangled representation for images. Not only has much work (now mentioned in related work section) focused on learning a dis-entangled representation, doing so would allow a (visual) Reinforcement Learning agent to learn a sufficient and transferable state representation. The question of how to best learn a state representation (often referred to as state abstraction) is an open problem in Reinforcement Learning\"], \"limited_technical_novelty\": [\"In the related work section, I discuss the problems with DARLA (preserving domain shift, including irrelevant information in the state representation). The attention mechanism directly addresses these problems by ignoring irrelevant information and preserving only information needed to solve the RL task (reducing domain shift).\", \"In the SADALA Training section (4.4), I now discuss the tradeoff between training in separate stages vs end-to-end and the importance of weight freezing.\"], \"insufficient_experiments\": [\"I am currently re-running the experiments to verify their outputs. Shortly, I will upload a revision with the updated results and discussion.\"], \"quantitative_results\": [\"Model Agnostic Meta Learning (MAML) and related model-agnostic methods do not solve the problem posed in my paper (zero shot transfer). They focus on meta-learning such that a model is able to learn a new task with few samples. This is few-shot learning, not zero-shot transfer.\", \"I am not sure which different methods and different parameters refer to. I have compared against DARLA with the parameters used in its paper. Additionally, the parameters for my RL algorithms are fixed across different approaches (DARLA, Domain Randomization, SADALA).\"], \"reproducibility\": [\"The original code was open sourced and a link was included in submission to openreview along with the paper. Once the experiments are re-run, I will update the link to the open source code.\"]}",
"{\"title\": \"Responses to your comments and Revision uploaded\", \"comment\": \"Thank you for your review.\\n\\nI have revised and re-uploaded the paper to address your comments.\\n\\n1) To my best knowledge, this paper is the first time in domain adaptation for RL that has explicitly learned a state representation that ignores irrelevant visual features, attention mechanism or not. I have made this clear in the introduction of the re-uploaded paper.\\n\\n2) \\n- In the (re-uploaded) related work section, I have made the problems with the original approach clear: DARLA preserves domain shift due to the encoding of its state representation. Since its beta-VAE is incentivized to reconstruct the image, it preserves differences between the source and target domains, making transfer difficult. Our approach eliminates this domain shift by learning to attend to only the features relevant to solving the RL task and ignoring all others.\\n- I am re-running the experimental results to verify their output (reconstruction and transfer performance). I will upload the results shortly.\\n\\n3) \\n- I have made clear which of the citations are related work and which are baselines in the (new) related work section. Many of the approaches cited require samples from both the source and target domains and can only transfer to that target domain. \\n- The only related works that do not have this problem are DARLA and Domain Randomization, both of which are compared against as baselines.\"}",
"{\"title\": \"Related work discussion has been added\", \"comment\": [\"Thank you for your review.\", \"I have re-uploaded the paper and addressed your concerns. I have added a related work section, placing my work in the context of exiting literature, adding other relevant work.\", \"-- DANNs, ADDA, PixelIDA/SimGAN, and CycleGAN are discussed in this section. They focus on visual domain adaptation where samples from the target domain are present during training time and transfer is only to that target domain.\", \"-- As mentioned in this section, I am solving a different problem, where samples from the target domain are not present during training time, similar to DARLA. Thus, it is good to discuss these approaches, but not necessary to empirically compare against them.\", \"I have added additional experimental details in the appendix. Specifically, methods requiring multiple source domains to train are evaluated (in fig 6) on one domain randomly sampled from the set of source domains. This sample is the same for the evaluation of all algorithms.\", \"I am currently running domain randomization for deepmind lab and will upload results shortly.\", \"I am re-running experiments (particularly for the reconstruction of figure 5) to verify their output and will upload results shortly.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summarize what the paper claims to do/contribute.\\n- The paper proposes a new method for zero-shot visual transfer for RL, SADALA. The method first learns a feature extractor with attention (to focus on realted features only) and then learns a policy in the source task and is able to transfer zero-shot int he target domain. The method is evaluated on two tasks: Cartpole-v1 (Gym) and \\\"Collect Good Objects\\\" (Deepmind Lab). It is compared against DARLA for both tasks and against Domain Randomization only for Cartpole. \\n\\nClearly state your decision (accept or reject) with one or two key reasons for this choice.\\nReject.\\n- The experiments of the paper were particularly weak. \\n--More standard visual adaptation techniques like DANNs,ADDA, PixelDA/SimGAN, CycleGAN were not considered. \\n--The results on domain randomization were not convincing: more details are necessary to determine what the experimental protocol was. One major question: what is the source domain in the case of domain randomization (for Fig. 6) In any case, I find it very hard to believe that simple domain randomization considered here can not fully solve this task for all visual pertrubations considered. \\n-- In Fig. 5 the reconstruction is not correct.\\n-- Domain randomization was not tried on the DeepMind Lab example because of compute. However, I'd encourage the authors to try this. Converging will surely not be linear to the number of perturbations considered as it seems to be implied. Also the OpenAI paper cited as an example where domain randomization took 100 years of simulation required for transfer is a problem of rather different scale: the domain gap there is between simulation and reality for an anthropomorfic robotic hand, and not a simple visual gap where the color of an identical environment are changed. \\n\\n-Related work discussion was insufficient\\n-- Related work section is missing and work is not adequately placed in the context of existing literature in the Introduction where some related work is indeed discussed.\\n-- Related work at the last sentence of the introduction is not discussed correctly. It is implied that all these works are on domain randomization which is not true. Also one work (Chebotar et al) is not relevant as from what I recall there was no visual gap. Finally most of these works deal with much more complex visual gaps so sample complexity is hard to be compared.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes adding an attention mechanism to the DARLA beta-VAE approach to transfer learning. The beta-VAE, soft attention and policy are trained on appropriate source tasks and evaluated zero-shot on target tasks, using two more difficult continuous control domains with RGB observations. Results indicate some improvements to compared to the immediate relevant baseline which may be statistically significant, but it is not clear whether over 10% in practice.\", \"i_cannot_at_this_stage_recommend_acceptance_for_the_following_reasons\": \"1) The paper augments an existing method with a well understood attention mechanism, so the novelty of the approach is relatively low.\\n2) The experimental results are interesting, but I don't find them compelling enough to recommend acceptance based on the results alone. The paper does not solve a major problem with the approach it is based on. In fact, the improvement seems to be smaller when the environment is more complex.\\n3) Several baselines which are cited in the paper are actually missing in the experiments, so it is hard to determine how important is that roughly 10% improvement compared to SOTA.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Pros:\\nThis paper proposed a new method for zero-shot transfer learning under the reinforcement learning setting. The use of attention weights to regularize the latent states was fairly interesting.\", \"cons\": [\"Limited applicability of the proposed methods\", \"The paper was restricted in a setting where rewards, actions, and true states were identical between source and target environments, and only the observed states differed due to differing renderers. Working under such a restricted setting was interesting in its own right, but it might also lead to limited applicability of the proposed method in the real-world setting.\", \"The proposed method focused on solving a very specific problem: learning a dis-entangled latent representation for images. As a result, the potential impact of the proposed methods could be minimal.\", \"Limited technical novelty\", \"The proposed method, SADALA, was built on top of Higgins et al., 2017 (DARLA). The only difference was an added attention layer to the learning of latent states. As a result, the novelty of the proposed method was very incremental and limited from a technology perspective.\", \"Even with additional attention layer, the paper could have performed a more thorough study to help the readers understand and appreciate the idea. For example, this paper didn\\u2019t discuss the tradeoff between training SADALA over separate stages, versus training it from end to end. For example, why the weights of the pre-trained beta-VAE had to be frozen and used as weights in the state representation stage.\", \"Insufficient experiments\", \"-More thorough discussion of the qualitative results should be helpful to understand whether the attention weights helped the model to focus on the right thing. For example, this paper did study the quality of reconstruction in Figure 3-5 of the proposed method. When comparing Figure 3 and Figure 5, it appeared to me that the reconstructed the angle of the pole was different from the original one. And it seemed like attention weights did successfully ignored the color of the cart and pole, but it ignored the angle of the pole, which should be important to the learning task. Unfortunately, the paper didn't further explain the implication of such misrepresentation.\", \"-Quantitative results\", \"It would be interesting to all compare the proposed methods against model-agonistic methods like MAML\", \"It would be useful to include confidence intervals over different tasks.\", \"It would be useful to compare different methods with different parameter settings\", \"The authors mentioned \\u201cVisual Pendulum tasks\\u201d but didn\\u2019t include them in the paper\", \"Reproducibility\", \"It's unclear to me how reproducible the research conducted in this paper was, and it would be useful to open source the code used to conduct the experiments.\"]}"
]
} |
BylUMxSFwS | Disentangled Cumulants Help Successor Representations Transfer to New Tasks | [
"Chris Grimm",
"Irina Higgins",
"Andre Barreto",
"Denis Teplyashin",
"Markus Wulfmeier",
"Tim Hertweck",
"Raia Hadsell",
"Satinder Singh"
] | Biological intelligence can learn to solve many diverse tasks in a data-efficient manner by re-using basic knowledge and skills from one task to another. Furthermore, many such skills are acquired through something called latent learning, where no explicit supervision for skill acquisition is provided. This is in contrast to state-of-the-art reinforcement learning agents, which typically start learning each new task from scratch and struggle with knowledge transfer. In this paper we propose a principled way to learn and recombine a basis set of policies, which comes with certain guarantees on the coverage of the final task space. In particular, we construct a learning pipeline where an agent invests time to learn to perform intrinsically generated, goal-based tasks, and subsequently leverages this experience to quickly achieve a high level of performance on externally specified, often significantly more complex tasks through generalised policy improvement. We demonstrate both theoretically and empirically that such goal-based intrinsic tasks produce more transferable policies when the goals are specified in a space that exhibits a form of disentanglement. | [
"reinforcement learning",
"representation learning",
"intrinsic reward",
"intrinsic control",
"endogenous",
"generalized policy improvement",
"successor features",
"variational",
"monet",
"disentangled"
] | Reject | https://openreview.net/pdf?id=BylUMxSFwS | https://openreview.net/forum?id=BylUMxSFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1cjSIaigOJ",
"ByenD63fqH",
"B1xqEuK0Kr",
"H1x4DQURYS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742445,
1572158819561,
1571883058344,
1571869532349
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2175/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2175/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2175/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The author propose a method to first learn policies for intrinsically generated goal-based tasks, and then leverage the learned representations to improve the learning of a new task in a generalized policy iteration framework. The reviewers had significant issues about clarity of writing that were largely addressed in the rebuttal. However, there were also concerns about the magnitude of the contribution (especially if it was added anything significant to the existing literature on GPI, successor features, etc), and the simplicity (and small number of) test domains. These concerns persisted after the rebuttal and discussion. Thus, I recommend rejection at this time.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to pre-train policies on some goal-reaching tasks, and then leverage the associated successor features to improve the learning of a new task. The method heavily draws from the Generalized Policy Evaluation/Improvement framework without adding much to it. The only relevant point would be showing (as the title indicates) how to obtain disentangled cumulants, and whether they help transfer to new tasks. Nevertheless, both the definition, the full method, and the claimed benefits are quite ambiguous.\\n\\nAmong other concerns showing that the theory needs more formal treatment, the pillar definition of \\u201cOptimal independent controllability\\u201d is very confusing because it seems to depend on \\u201ca trajectory generated by following \\\\pi_i^*\\u201d. But what If the environment is stochastic? Then following that policy might give different trajectories! This definition needs to be revisited. More concerning examples are given at the end of this review.\\n\\nOn the experimental side, Fig. 4 is the only reported result, and it has an x-axis that is not clearly explained. What are the \\u201csteps (min)\\u201d?\\nIt is also not clear what they mean by the \\u201coff-diagonal trick\\u201d, which seems so important for the good performance.\\nFurthermore, it seems that their method doesn\\u2019t really learn anything new in most of the tasks, it just stays at the same performance that is started with after the whole pre-training steps. It is not clearly stated how much computation effort is required to obtain the desired cumulants, and this invalidates quite strongly any result they report. Even if there\\u2019s no \\u201creward\\u201d needed during the pre-training, which arguably is not even true because you do need the rewards related to whether you have achieved a specific change in a feature!\\nIn fact, it would be greatly appreciated if the \\u201cfinal\\u201d tasks could be expressed in a similar notation than the rest of the pre-training tasks, or vice-versa. As far as I understand, the pre-training tasks consist of making a certain feature fall into a certain subset of its possible values. Can\\u2019t the final tasks, like \\u201cmove the agent to the top right\\u201d be also expressed in that form. The link between the two kinds of tasks needs to be much more explicit to be able to assess the relevance of this work.\\n\\nFinally, they only test their algorithm on Spritworld, which is a small discrete state-action space environment. Even if they try different kinds of tasks in this environment, more detailed analysis or more extensive experiments are needed to assess the benefits of the proposed approach.\\nThis is particularly timely because their method relies on a discretization of some given features that represent the state, which will probably not be very practical in higher dimensional environments.\\nFinally, I would like a comment on how this method interacts with discrete versus continuous action-state spaces.\", \"misc_comments\": [\"Why do the authors introduce the terminology \\u201cEndogenous RL\\u201d, and then say it\\u2019s the same as doing RL with intrinsic motivation? 
This seems like introducing a new name for the same concept, which is pointless and confusing.\", \"The connection with \\u201clatent learning\\u201d of Tolman 1984 is very unclear.\", \"There\\u2019s a \\u201cRepresentation Learning\\u201d section, but it\\u2019s not clear at all whether any features are actually learned, or whether the features are actually hand-defined. Is the number of features n also hand-defined?\", \"There might be a typo in the first sentence after equation (9): \\u201cWhile \\\\phi_w is not guaranteed to be optimal with respect to \\\\phi_w\\u201d.\", \"Because of all these concerns, I suggest the paper be weakly rejected.\"]}",
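For context on the GPE/GPI machinery this review refers to: given successor features psi_i(s, a) for each pre-trained policy and a downstream task whose reward is linear in the features, r = phi . w, generalized policy improvement evaluates every pre-trained policy on the new task and acts greedily over the maximum. A minimal numpy sketch (shapes and names are ours):

```python
import numpy as np

# Generalized policy improvement over pre-trained policies:
# Q^{pi_i}(s, a) = psi_i(s, a) . w for a task with reward weights w,
# then act greedily with respect to max_i Q^{pi_i}.
def gpi_action(psi, w):
    # psi: (n_policies, n_actions, d) successor features at the current state
    q = psi @ w                                # (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))       # best action under best policy

rng = np.random.default_rng(0)
psi = rng.standard_normal((4, 5, 8))   # 4 pre-trained policies, 5 actions, d=8
w = rng.standard_normal(8)             # weights defining the downstream task
print(gpi_action(psi, w))
```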
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper tackles the challenging problem of transfer learning and few shot learning in RL setting and provides some theoretical guarantees for the downstream task coverage.\\n\\nThe paper structure can be further improved by adding a background subsection on successor representation (SR) in RL; SR is not a very well known representation in RL and a brief subsection on that can help the reader in understanding the motivation behind using it. In terms of related work ,another work which can also be mentioned (although not directly related) is \\u201cDARLA: Improving Zero-Shot Transfer in Reinforcement Learning\\u201d which also uses disentangled representations for zero-shot transfer learning. The paper also needs to be more clear in terms of contributions; it seems that there is a significant overlap between this work and (Barreto et al., 2017, 2018); some clarification would be helpful here. \\n\\nIn terms of empirical results; the authors can also compare with other transfer learning methods in deep RL such as Hansen 2019 or Nair 2018 or explain why these are not reasonable baselines. Also the results for DIAYN are a bit surprising to me since in all the experiments the performance of the method is underwhelming; this is especially surprising because in the original DIAYN paper the method performed well in reasonably complex tasks. Can you provide an intuition on why DIAYN performs poorly even in the agent tasks.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper addresses the problem of policy transfer in reinforcement learning, which is an extremely relevant open problem in RL, and is being actively studied by the community.\\nThe authors propose a framework for discovering a set of policies without external supervision which can then be used to produce reasonable performance on extrinsic tasks. \\nThe work exhibits originality in that it shows that disentangled representations, learned by intrinsic rewards, can lead to learn behaviours that are transferable to novel situations. \\n\\n\\nAlthough the problem talked here is of high relevance and the approach proposed is original and supported by theoretical results, I am leaning to reject for the following reasons:\\n\\n- Missing connection to some existing works in the literature. In particular, it seems that there is a link to previous works that focus on discovering reward agnostic options (such as [1]).\\n- Clarity. The method description is somehow difficult to read, mainly because some variables are introduced without explanation/definition. On page 2, please define n and explain the choices of m and k. The font of the axes and legends in Figure 4 is too small, not readable when printed.\\n- Experiments: My first concern is that I do not think I would be able to reproduce the results solely given the paper and supplementary material. It would be necessary to either have access to the code, or a very detailed implementation report.\\n- I would have loved to see either another experiment or at least an intuition on how the framework extends to a very different domain. If not, it should be made clearer that this framework works on 2d domains, where the tasks are navigation tasks. (I am specifically referring to the representation learning phase).\", \"minor_comments\": \"Page 1, first paragraph \\u201c(controlling the position of fruits and nuts)\\u201d\\npage 5, below (9): shouldn\\u2019t it be \\u201cWhile \\\\pi_w is not \\u2026\\u201d?\\nPage 8, last sentence of Sec 5, \\u201ccould only learn to perform\\u201d.\\n\\n1. Machado et al, EIGENOPTION DISCOVERY THROUGH THE DEEP SUCCESSOR REPRESENTATION, ICLR 2018\"}"
]
} |
SyeLGlHtPS | Learning vector representation of local content and matrix representation of local motion, with implications for V1 | [
"Ruiqi Gao",
"Jianwen Xie",
"Siyuan Huang",
"Yufan Ren",
"Song-Chun Zhu",
"Ying Nian Wu"
] | This paper proposes a representational model for image pairs such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1). The model couples the following two components. (1) The vector representations of local contents of images. (2) The matrix representations of local pixel displacements caused by the relative motions between the agent and the objects in the 3D scene. When the image frame undergoes changes due to local pixel displacements, the vectors are multiplied by the matrices that represent the local displacements. Our experiments show that our model can learn to infer local motions. Moreover, the model can learn Gabor-like filter pairs of quadrature phases. | [
"Representation learning",
"V1",
"neuroscience"
] | Reject | https://openreview.net/pdf?id=SyeLGlHtPS | https://openreview.net/forum?id=SyeLGlHtPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-D6vixOfr-",
"ByxsYwZhoH",
"r1ep7v-noB",
"H1xdTLbnsB",
"HJxNLIZ2oH",
"S1e5NBZhiB",
"HyghDmnsqH",
"r1gWl4Ksqr",
"HJecGU-oKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742418,
1573816194548,
1573816101223,
1573815999540,
1573815884205,
1573815601530,
1572746084041,
1572733929179,
1571653137858
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2174/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2174/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2174/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2174/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2174/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2174/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2174/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2174/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper received mixed reviews. On one hand, there is interesting novelty in relation to biological vision systems. On the other hand, there are some serious experimental issues with the machine learning model. While reviewers initially raised concerns about the motivation of the work, the rebuttal addressed those concerns. However, concerns about experiments remained.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1 (Part 2)\", \"comment\": \"\", \"q4\": \"\\u201cthe motivation of the proposed method\\u201d, \\u201cOr the authors simply take some ideas form V1 model and add a module to \\u201cexplain\\u201d motion? \\u201c\", \"a4\": \"One motivation is based on Fourier analysis as mentioned above. Please see our answer to Q2. Another motivation is from previous papers that use matrices to represent camera motion or self-motion. Specifically, in [1], the authors study the change of images when the camera undergoes motion. Each image frame is represented by a vector. The camera motion is represented by a matrix. This idea was also alluded to in [2]. In [3], the authors study the grid cells as forming a high-dimensional vector representation of the 2D position of the agent. The self-motion of the agent is represented by a matrix.\\n\\n[1] Jayaraman, Dinesh, and Kristen Grauman. \\\"Learning image representations tied to ego-motion.\\\" Proceedings of the IEEE International Conference on Computer Vision. 2015.\\n\\n[2] Paccanaro, Alberto, and Geoffrey E. Hinton. \\\"Learning distributed representations of concepts using linear relational embedding.\\\" IEEE Transactions on Knowledge and Data Engineering 13.2 (2001): 232-244.\\n\\n[3] Gao, Ruiqi, et al. \\\"Learning grid cells as vector representation of self-position coupled with matrix representation of self- motion.\\\" Seventh International Conference on Learning Representations (2019).\\n\\nAdding a motion module on top of existing V1 models was not a motivation of our work. But indeed our model relates the linear representations of consecutive image frames, so that our model complements existing models based on linear representations. Moreover, our motion model makes the concept of sub-vectors explicit, in the sense that the sub-vectors are what are rotated by the matrices.\", \"q5\": \"\\u201cthe experimental results are not sufficient to demonstrate the effectiveness of the proposed model.\\u201d\", \"a5\": \"To strengthen the experiments, we have added experiments on two more datasets. Please see Subsections 5.1, 5.3 and Appendices E, F for details. These experiments show that our method achieves competitive performances on optical flow estimation.\", \"q6\": \"About minor issues\", \"a6\": \"Thanks for your careful reading.\\n\\n(1) Thanks for pointing out the issue with the notation of Eq. 2. We now use the more generic notation I = {\\\\bf W}^T V in Eq. 2. Otherwise, we should have used I = \\\\sum_x W^T(x) v(x), where each column of W^T(x) is of the same dimension as I, where we translate the filters W to pixel x and zero-padding the pixels outside the filters. \\n\\n(2) We have changed the notation in Subsection 3.1 following your suggestion. \\n\\n(3) We have tried to replace the reconstruction loss with |I \\u2013 W\\u2019W|^2. However, the learned filters have no obvious pattern. W\\u2019W = I is a stricter constraint than the reconstruction loss, since it requires the reconstruction to hold for any I, whereas in the reconstruction loss, we only want the reconstruction to work for natural images, that is, the learned W captures statistical properties of natural images.\"}",
"{\"title\": \"Response to Reviewer #1 (Part 1)\", \"comment\": \"Thanks for your valuable comments and suggestions.\", \"q1\": \"\\u201cit is not clear why this approach sheds light on our understanding of motion perception. Is there any psychological evidence to support the proposed model?\\u201d\", \"a1\": \"In this paper, we seek to explain two important features of the simple cells in V1. One is that they can be approximated by Gabor filters. The other is that adjacent cells have quadrature phase relation. Our motion model gives simple explanations to the above two features.\\n\\nAbout motion perception, we did consult experts on the neuroscience and psychophysics of motion perception in V1. Existing neuroscience models are usually based on the spatial-temporal filters, such as the motion energy model of [1]. In Subsection 4.4, we connect our work to this model to explain the emergence of spatial-temporal filters. Moreover, we present a recurrent implementation of the spatial-temporal filtering. This recurrent implementation is more efficient and more biologically plausible than plain implementation of spatial-temporal filters which requires memorizing the past frames. \\n\\n[1] Edward H Adelson and James R Bergen. Spatiotemporal energy models for the perception of motion. Josa a, 2(2):284\\u2013299, 1985.\\n\\nIn our paper, we also follow the protocol of the neuroscience papers [2,3] to evaluate the learned filters. \\n\\n[2] Dario L Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of neurophysiology, 88(1):455\\u2013463, 2002.\\n\\n[3] Martin Rehn and Friedrich T Sommer. A network that uses few active neurons to code visual input predicts the diverse shapes of cortical receptive fields. Journal of computational neuroscience, 22(2):135\\u2013146, 2007.\", \"q2\": \"\\u201cwhy the motion between patches I_t[x] and I_{t+1}[x] can be approximated with linear transformation\\u201d\", \"a2\": \"Thanks for the insightful question. One motivation of our model is based on Fourier analysis: An image patch I can be expressed as I(x) = sum_k c_k e^{i<\\\\omega_k, x>} in a Fourier decomposition. If we shift it by dx, the shifted image patch J(x) = I(x - dx) = sum_k c_k e^{-i<\\\\omega_k, dx>} e^{i<\\\\omega_k, x>}. The change from the complex number c_k to c_k e^{-i<\\\\omega_k, dx>} corresponds to rotating a 2D vector by a 2 x 2 matrix. This is a simple example that the shift can be represented by a linear transformation in the frequency domain, as a change in phase.\\n\\nWe want to emphasize that our model does not assume Fourier basis or its localized version such as Gabor filters. Our model figures it out with generic vector and matrix representations.\\n\\nWe have added the above motivation to the introduction.\", \"q3\": \"\\u201cwhy the transformation M only depends on the displacement of the center pixel whereas different pixels in a patch I_t[x] could have different displacements\\u201d\", \"a3\": \"Thanks for the good question. We assume that the motion is smooth, so that within a relative small local patch, the motion is constant. Of course, the patch size or the filter size should be related to image resolution. For images with higher resolution, we may want to use smaller filter size to make this assumption hold. We have added a comment on this point in the introduction when discussing the Fourier analysis motivation.\\n\\nTo be continued in the next message.\"}",
"{\"title\": \"Response to Reviewer #5 (Part 2)\", \"comment\": \"\", \"q5\": \"\\u201cwhat guarantees are there that the model is learning to capture realistic motion behavior? Why not use adjacent video frames?\\u201d\", \"a5\": \"Following your comment, we create another dataset called V1FlyingObjects, which separately applies affine transformations to the background scenes and foreground objects. For each training image pair, we jointly simulate the camera motion and the motion of the objects. This simulates more realistic motion behavior as you suggested.\\n\\nThe V1FlyingObjects dataset consists of 14,411 image pairs. It is similar to the public dataset of FlyingChairs. The difference is that we use smaller motions and more types of objects. As mentioned above, the motions in FlyingChairs tend to be large and abrupt, which does not reflect typical motion behaviors. We shall release the V1FlyingObjects dataset to public. \\n\\nIn addition, we have also tested the learned model on a public dataset, the MPI-Sintel. This is also a simulated dataset, but with special attention to realistic motions. \\n\\nPlease see Subsections 5.1, 5.3 and Appendices E, F of the revised version for details of the new experiments. \\n\\nIn supervised learning, our method, similar to other methods on optical flow estimation, requires ground truth displacements. Adjacent real video frames such as those in MUG usually do not have such ground truth information. We add description of MUG dataset in Subsection 5.1 to clarify the issue.\", \"q6\": \"About comparison with existing pre-trained model.\", \"a6\": \"As we pointed out in Subsection 5.3, we did train state of the art models such as FlowNet2 on our dataset. But the trained model does not perform as well as the pre-trained model, possibly because our dataset is small. We thus reported the performance of the pre-trained model in order to be fair.\", \"q7\": \"About qualitative results on frame animation and frame interpolation.\", \"a7\": \"Existing methods on optical flow are discriminative or predictive in nature, i.e., they take image pairs as input and output the optical flow estimation. Thus they cannot be used for frame animation and frame interpolation. Our model is a representational model or a generative model in some sense, in that we can generate the image frame given its vector representation. We use these qualitative experiments to illustrate this fact. We have moved the two Subsections to Appendices E and F.\"}",
"{\"title\": \"Response to Reviewer #5 (Part 1)\", \"comment\": \"Thank you for your valuable comments and suggestions.\", \"q1\": \"\\u201c\\u2018The representation theory underlies much of modern mathematics and holds the key to the quantum theory (Zee, 2016).\\u2019 Can the relevance of this claim be elaborated on?\\u201d\", \"a1\": \"Yes. In representation theory in mathematics, for a group G, each element g is represented by a matrix M(g) acting on the vector v in a vector space. For two elements g1 and g2 in G, g1*g2 is represented by M(g1) M(g2). In our work, the displacements dx form a 2D Euclidean group. Each dx is represented by a matrix M(dx) acting on the vector v(x) that represents the local image content.\\n\\nIn quantum physics, a particle at position x is represented by a vector v(x) in a Hilbert space. If the particle undergoes a displacement dx, the vector is transformed by a displacement matrix (or operator) M(dx), so that v(x+dx) = M(dx) v(x). In our work, v(x) represents the local image content, and M(dx) represents pixel displacement. \\nMore generally, a particle of a certain momentum with a certain spin (as well as other properties) is represented by a vector in a Hilbert space. When the particle undergoes a Lorentz transformation (a more general notion of displacement in space-time), the vector is multiplied by a matrix (or operator) representing the Lorentz transformation. Different types of particles correspond to different schemes of representing the Lorentz transformations. \\nWe adopt such mathematical language in our work. More generally, we may use vectors to represent various objects in the image, and use matrices to represent the motions of these objects. \\nSuch a mathematical language was adopted by earlier papers before.\\nIn [1], the authors study the change of images when the camera undergoes motion. Each image frame is represented by a vector. The camera motion is represented by a matrix. This idea was also alluded to in [2].\\nIn [3], the authors study the grid cells as forming a high-dimensional vector representation of the 2D position of the agent. The self-motion of the agent is represented by a matrix. \\nUnlike vector representation that is common in deep learning models, the matrix representation is relatively rare. Our work is an example along this theme.\\n\\n[1] Jayaraman, Dinesh, and Kristen Grauman. \\\"Learning image representations tied to ego-motion.\\\" Proceedings of the IEEE International Conference on Computer Vision. 2015.\\n\\n[2] Paccanaro, Alberto, and Geoffrey E. Hinton. \\\"Learning distributed representations of concepts using linear relational embedding.\\\" IEEE Transactions on Knowledge and Data Engineering 13.2 (2001): 232-244.\\n\\n[3] Gao, Ruiqi, et al. \\\"Learning grid cells as vector representation of self-position coupled with matrix representation of self- motion.\\\" Seventh International Conference on Learning Representations (2019).\", \"q2\": \"About the motivation. \\u201cWhy do we care if the approach captures aspects of V1 for the tasks presented?\\u201d \\u201cWhy do we need this over other methods that can better capture larger motions.\\u201d\", \"a2\": \"This is an important question.\\n\\nFor tasks like optical flow estimation, current state of the art methods such as FlowNet2 use very complex deep neural networks, which are black box models. Our model is much simpler and is based on explicit vector and matrix representations. It is worthwhile to explore such models. 
Our new experiments also show that our method can achieve performances that are comparable to existing methods. \\n\\nFollowing your suggestion, we have strengthened the motivation of our work in the introduction. \\n\\nYour comment on evaluation is well taken. We have added evaluations on two more datasets in revision. One dataset is created in a similar manner as the public dataset of FlyingChairs. The other is the public MPI-Sintel dataset. See Subsections 5.1, 5.3 and Appendices E, F for details. \\n\\nAbout larger motions, the motions in the FlyingChairs dataset tend to be very big and abrupt, which does not really reflect typical motion behaviors observed in daily life. On the other hand, our model can be modified to a multi-resolution scheme to deal with larger motions. Currently we are exploring this direction.\", \"q3\": \"\\u201c\\u2018Figure 1 illustrates the scheme of representation.\\u2019 Please provide more detail here on what is happening in the figure. The caption and reference here are not informative to what the figure is representing.\\u201d\", \"a3\": \"Thanks for the suggestion. In the introduction of the revised version, we have included detailed explanation of Figure 1.\", \"q4\": \"\\u201c\\u2018We obtain the training data by collecting static images for (It) and simulate the displacement field\\u2019. This is not self-supervised learning.\\u201d\", \"a4\": \"Following your suggestion, we have removed the wording \\u201cself-supervised learning\\u201d, and changed the wording to \\u201clearning from image pairs with synthetic motions\\u201d.\\n\\nTo be continued in the next message.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"We are very grateful for your positive review and insightful comments.\", \"q1\": \"\\u201cI tend to conclude the V1-like receptive fields come from the implicit independence constraint.\\u201d\", \"a1\": \"This is a deep insight that we agree. We have added a comment that this constraint is necessary for the emergence of V1-like receptive fields in Subsection 3.3.\\n\\nIn Appendix G (Appendix D of the original version), we include an ablation study of subspace assumption. V1-like patterns also emerge when the dimensionality of subspace is higher (e.g., 4 or 6). \\n\\nFollowing your suggestion, we have added a result in Appendix E of the revised version, where we totally remove the assumption of sub-vectors. In this case, more blob-like patterns are learned. \\n\\nThe sub-vectors may correspond to columns or modules of neurons, or capsules, i.e., neurons that form sub-groups.\", \"q2\": \"\\u201cconnection between the suggested method and the Lie group approach.\\u201d\", \"a2\": \"Thanks for the insightful suggestion. We have added a comment on this connection in Section 2.\\n\\nIn our work, the displacements dx form a 2D Euclidean group. Our modeling of local motion dx is similar to the treatment of Lie group via Lie algebra by analyzing infinitesimal changes. \\n\\nThe objects in the image may undergo more complex motions which form more complex Lie groups (e.g., rotations and translations). We can again represent the objects (e.g., their poses) by vectors, and represent the motions of the objects by matrices. \\n\\n\\nWe have followed your suggestions to correct those minor errors in the revision. Thank you for careful reading.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The hypothesis in this paper is that the primary purpose of the cells of the V1 cortex is to perceive motions and predict changes in the local image contents. The authors show that by learning from image pairs from a continuous sequence, both V1-like features and motion operators can be learned. I found the hypothesis and formulation reasonable, the numerical results are supportive, it's actually interesting to see that the proposed model's motion prediction outperforms the other dedicated models. Further, the authors used inference to infer the motion during learning, I think this is quite a novel topic to work on. Overall, this makes a good submission.\", \"here_are_some_issues_could_be_addressed_further\": \"1. Section 3.3 introduces subvectors. This implicitly introduces an independence assumption when combined with a motion operator. Then in section 5, the authors studied the dimensionality of subvectors. If the subspaces are assumed to be 2, then this independence regularization is quite strong. This may not support the authors' claim that the prediction of motion is enough to achieve V1-like features and I tend to conclude the V1-like receptive fields come from the implicit independence constraint. I'd suggest an additional ablation experiment to verify the impact of the subspace assumption. \\n\\n2. To model the motion, we can directly use lie operators, the authors may want to discuss the connection between the suggested method and the Lie group approach. \\n\\n3. I found some minor issues, e.g.:\\n 3.1 In Section 3.2 it's normalized tight frame (Parseval frame).\\n 3.2 In Equation 2 I understand it's a deconvolution, however, the notation is still not ideal.\\n 3.3 In the section paragraph of Section 3.2, 'the representation has the isometry property' and 'the vector representation also preserves the angle' should be switched?\\n 3.4 small typos like 'mortar cortex' -> 'motor cortex'.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #5\", \"review\": \"The authors propose a model for learning local pixel motions between pairs of frames using local image representations and relative pixel displacements between agents and objects. The model learned is compared to the ability of the primary visual cortex where adjacent simple cells share quadrature relationships and capture local motion.\\n\\n\\\"The representation theory underlies much of modern mathematics and holds the key to the quantum\\ntheory (Zee, 2016).\\\"\\nCan the relevance of this claim be elaborated on?\\n\\n\\\"Figure 1 illustrates the scheme of representation.\\\"\\nPlease provide more detail here on what is happening in the figure. The caption and reference here are not informative to what the figure is representing.\\n\\n\\\"We obtain the training data by collecting static images for (It) and simulate the\\ndisplacement field ... We refer to this method as self-supervised learning\\\"\\nThis is not self-supervised learning. In self-supervised learning the training label/signal is generated by the system. In this case artificial data is being generated as the displacement between images is sampled.\\n\\nSince the motion between images is artificially generated what guarantees are there that the model is learning to capture realistic motion behavior? Why not use adjacent video frames?\\n\\n\\\"Note that those methods train deep and complicated neural networks with large scale datasets to\\npredict optical flows in supervised manners, while our model can be treated as a simple one-layer\\nnetwork, accompanied by weight matrices representing motions.\\\"\\nIs there a comparison on execution times of the different approaches?\\n\\n\\\"by obtaining the pre-trained models and testing on V1Deform testing data\\\"\\nIs this a fair comparison if the proposed approach was trained on V1Deform training data and the comparison methods were not. A more appropriate comparison would be to apply all the methods to infer the displacement fields between video frames which is also a more natural application. This can be controlled to contain small motions if needed. Why nt use the MUG dataset here?\\n\\n\\\"Displacements at image border are leaved out\\\" -> left out\\n\\nSections 5.4, 5.5 and 5.6 show only qualitative results with no comparison methods. Can the authors provide reasons that other methods could not be used for evaluation?\\n\\nI am not sure I understand the motivation for the approach. Why do we need this over other methods that can better capture larger motions. This needs to be more clear from the introduction. Why do we care if the approach captures aspects of V1 for the tasks presented?\\n\\nThe work is sensible and the approach is clear but I found the evaluation and motivation lacking in key areas that I mention above. The authors should revise and make it clear to the reader why we should care about this problem. Aligning with V1 is interesting but it does not come into play in the applications of the approach or the analysis so I am not sure why I should care. The evaluation also needs to be much more convincing before I could recommend acceptance.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThis paper proposes a representation model for describing local pixel displacement. The proposed model uses matrix multiplication for optical flow estimation, where an image is transformed into a vector and the local motion is modeled by a matrix. \\nThe recommendation of this work is based on the following reasons. First, the motivation of the proposed method is not convincing. While the proposed ideas are interesting, it is not clear why this approach sheds light on our understanding of motion perception. Is there any psychological evidence to support the proposed model? Or the authors simply take some ideas form V1 model and add a module to \\u201cexplain\\u201d motion? Second, the experimental results are not sufficient to demonstrate the effectiveness of the proposed model.\", \"major_issues\": \"First, while it is interesting to use matrix multiplication to model motion, it is not clear why the motion between patches I_t[x] and I_{t+1}[x] can be approximated with linear transformation (Section 3.4). Furthermore, it is not clear why the transformation M only depends on the displacement of the center pixel whereas different pixels in a patch I_t[x] could have different displacements. \\nSecond, the proposed model for optical flow estimation is only evaluated on the proposed V1Deform dataset. If the authors position this paper \\u201cmay shed light on our motion perception in primary visual cortex\\u201d, the authors certainly need to carry out sufficient psychophysical experiments.\", \"minor_issues\": \"First, Eq. 2 does not seem correct to me. The left and right sides of Eq. 2 have different dimensions.\\nSecond, the authors may consider using {} instead of () to define a set of pixels or vectors in Section 3.1.\\nThird, while the reconstruction loss (Eq. 7) is used in this paper, I wonder what the results would be like if the authors simply enforce W\\u2019W=I instead.\"}"
]
} |
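The Fourier-shift argument in A2 of the Reviewer #1 response above can be restated compactly. The following LaTeX sketch assumes, as the response does, a band-limited patch and a displacement dx that is constant over the patch; it illustrates the stated motivation and is not an excerpt from the paper under review.

```latex
% Sketch: a shift acts linearly on Fourier coefficients.
\[
I(x) = \sum_k c_k\, e^{i\langle \omega_k, x\rangle},
\qquad
J(x) = I(x - dx) = \sum_k c_k\, e^{-i\langle \omega_k, dx\rangle}\, e^{i\langle \omega_k, x\rangle}.
\]
% Writing c_k = a_k + i b_k, multiplication by e^{-i<omega_k, dx>} rotates
% the coefficient pair by the angle <omega_k, dx>, i.e. a 2 x 2 matrix:
\[
\begin{pmatrix} a_k' \\ b_k' \end{pmatrix}
=
\begin{pmatrix}
\cos\langle \omega_k, dx\rangle & \sin\langle \omega_k, dx\rangle \\
-\sin\langle \omega_k, dx\rangle & \cos\langle \omega_k, dx\rangle
\end{pmatrix}
\begin{pmatrix} a_k \\ b_k \end{pmatrix}.
\]
```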
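The "vector for content, matrix for motion" picture in the Reviewer #5 response can also be checked numerically: a block-diagonal rotation matrix M(dx) acting on stacked coefficient pairs satisfies the group property M(dx1)M(dx2) = M(dx1 + dx2). The numpy sketch below uses a 1-D slice with made-up frequencies; these are illustrative assumptions, not values from the paper.

```python
# Block-diagonal matrix representation of a displacement dx acting on
# stacked (a_k, b_k) coefficient pairs; frequencies are illustrative.
import numpy as np

omegas = np.array([0.5, 1.0, 2.0])  # assumed 1-D frequencies, one per 2-D sub-vector

def M(dx):
    """Matrix representing displacement dx; satisfies M(dx1) @ M(dx2) == M(dx1 + dx2)."""
    out = np.zeros((2 * len(omegas), 2 * len(omegas)))
    for i, w in enumerate(omegas):
        t = w * dx
        out[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[np.cos(t), np.sin(t)],
                                                 [-np.sin(t), np.cos(t)]]
    return out

dx1, dx2 = 0.3, -0.7
assert np.allclose(M(dx1) @ M(dx2), M(dx1 + dx2))  # group property of displacements
```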
S1xHfxHtPr | Online Learned Continual Compression with Stacked Quantization Modules | [
"Lucas Caccia",
"Eugene Belilovsky",
"Massimo Caccia",
"Joelle Pineau"
] | We introduce and study the problem of Online Continual Compression, where one attempts to learn to compress and store a representative dataset from a non i.i.d data stream, while only observing each sample once. This problem is highly relevant for downstream online continual learning tasks, as well as standard learning methods under resource constrained data collection. We propose a new architecture which stacks Quantization Modules (SQM), consisting of a series of discrete autoencoders, each equipped with their own memory. Every added module is trained to reconstruct the latent space of the previous module using fewer bits, allowing the learned representation to become more compact as training progresses. This modularity has several advantages: 1) moderate compressions are quickly available early in training, which is crucial for remembering the early tasks, 2) as more data needs to be stored, earlier data becomes more compressed, freeing memory, 3) unlike previous methods, our approach does not require pretraining, even on challenging datasets. We show several potential applications of this method. We first replace the episodic memory used in Experience Replay with SQM, leading to significant gains on standard continual learning benchmarks using a fixed memory budget. We then apply our method to compressing larger images like those from Imagenet, and show that it is also effective with other modalities, such as LiDAR data. | [
"continual learning",
"lifelong learning"
] | Reject | https://openreview.net/pdf?id=S1xHfxHtPr | https://openreview.net/forum?id=S1xHfxHtPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"v0ccgYOJZR",
"rclwaBjUar",
"Byl22PIRnr",
"Hygao_4niB",
"S1pLOVhjB",
"SkelgPwqsB",
"H1xKiHDqiB",
"H1g1GUrr9B",
"SkgU6JAW5r",
"rJlpS_iRYH",
"BJlQThKDKH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1576798742387,
1575560638846,
1575016372411,
1573828772530,
1573828692517,
1573709544130,
1573709216782,
1572324870519,
1572097981832,
1571891269258,
1571425466798
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2173/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2173/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2173/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2173/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2173/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2173/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2173/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2173/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2173/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2173/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a new problem setup as \\\"online continual compression\\\". The proposed idea is a combination of existing techniques and very simple, though interesting. Parts of the algorithm are not clear, and the hierarchy is not well-motivated. Experimental results seem promising but not convincing enough, since it is on a very special setting, the LiDAR experiment is missing quantitative evaluation, and different tasks might introduce different difficulties in this online learning setting. The ablation study is well designed but not discussed enough.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": [\"I am not familiar with the generative model and continual learning. Thus, I can only give my review based on the authors writing and other reviewers' comments.\", \"The paper proposes a new problem setup as \\\"online continual compression\\\".\", \"The paper gives a combination of many existing techniques to address the new problem. (I agree with Review #4)\", \"I think the presentation and the organization of this paper should be improved in order to properly place their contributions in the literature.\", \"Since the authors try to promote a new problem set up with a solution containing little technical breakthroughs, I suggest the authors put more space on motivating the application and showing its impotence. Currently, their presentation focuses too much on methodology parts.\", \"Thus, it is better to organize the paper as an application from LiDAR and convince reviewers why their method is good for such an application (Review #1 also thinks the presentation is poor).\", \"If the authors insist on keeping their paper as a methodology one, at least one more experiment (not the synthetic one on ImageNet) from real applications are needed (same as Review #2).\", \"Overall, I think the paper is not ready for being published.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This work contributes to introducing a problem called Online Continual Compression. This problem requires to avoid catastrophic forgetting and learn in an online way. Generative methods should be one of the popular ways to do continual learning. This work\\u2019s model can be categorized into this clue since it also aims to save samples from old tasks by learning a generative model. In this way, the generator plays a similar role Experience Replay (ER) (here is called Generative Replay). The main core of this work should be the stacked quantization modules (SQM) which can be regarded as a hierarchical variant of the VQ-VAE model. In their SQM, hidden encodings z_q^i will be encoded and its input is z_q^{i-1} which is from previous layer.\\n\\nThis works covers related works very well. However, there are some questions I am really concerned:\\n1)\\tAbout the studied problem \\u201cOnline Continual Compression\\u201d, what\\u2019s the difference between \\u201conline\\u201d and \\u201ccontinual\\u201d? In continual learning, tasks will be learned sequentially, right? If so, continual learning should run in an online learning way. \\n2)\\tThe motivation of the hierarchy in this work is unclear. What I mean is that the hierarchical model should be expected to capture higher-level semantic features. But in this work, the index outputs z_q^{i-1} is encoded by its subsequent layer. It seems a bit weird since the z_q^{i-1} is not an image and its elements are index values. So what is the higher-level semantic information? By the way, it seems that there is an error in the model figure 1. The last MSE from Block 1 should be connected to the block before decoder 1 in Block 1, rather than the reconstructed one from decoder 1. Therefore, I strongly suggest authors give more insights and clarify the motivation of hierarchy. Writings in the METHODOLOGY part is unclear. More details about the SQM model should be described in a mathematical way. \\n3)\\tAnother question about the details of generative replay. How do you do the replay? Details about this can\\u2019t be found in this work? In Alg.1, what is the \\\\theta? Is the \\\\theta_{ae} at line 14 of Alg.1 wrong? It should be \\\\theta_{gen}, right? \\n4)\\tYou use the data-stream technique reservoir sampling to add and update the memory buffer (alg. 4). Will it lead to some information loss? Can we just update memory without reservoir sampling? Please give more insights about this.\\n5)\\tHow to find the distortion threshold d_th in Alg.2? \\n6) the part of ablation studies is good. But I suggest authors should consider a baseline with the same proposed framework but using a single-layer VQVAE with the same memory capacity as the hierarchical models.\"}",
"{\"title\": \"Manuscript Revisions\", \"comment\": [\"Dear Reviewers, we thank you for your reviews that have helped us to revise the paper. We respond to each of your comments individually. Here we would like to highlight the new material in the paper and also note the main changes we have made to improve the clarity in the manuscript:\", \"-As noticed by all the reviewers, Algorithm 1 was misreferenced as Algorithm 4, this has been now corrected. Our latex file unfortunately had an error which caused this.\", \"-We have added an additional section to further clarify the relation between all the algorithms (sec 3.6\", \"Through communication with the authors of \\u201cScalable Recollections for Continual Lifelong Learning\\u201d we have been able to reproduce their results on the Split-CIFAR100 task and include it in the paper a direct comparison (Parag 5 of sec 4.1), obtaining far better results than this related work.\", \"We have revised the images for LIDAR in Fig 3 to display further images with highlighting of the key parts of the reconstruction. We have also added additional quantitative analysis of the LIDAR compression in Fig 2 and end of Sec 4.\", \"We have added additional analysis to illustrate how the distribution of samples stored at the different levels\", \"We have made all minor grammatical/spelling corrections noted by the reviewers.\"]}",
"{\"title\": \"Response to Reviewer\", \"comment\": \"Thanks for your time reviewing and help improving our paper!\\n\\nWe agree that further experiments in settings besides the standard continual image classification settings would be valuable. We have thus now expanded the evaluation of the LIDAR adding experiments in Fig 2 and the last paragraph of Sec 4. We would like to note also that the offline imagenet evaluations performed in Sec 4.2 are a distinct application from those typically considered in the literature (e.g. those in Sec 4.1). Indeed the approach shows that non-iid data can be collected and compressed online and used in subsequent downstream applications at a later point. \\n\\nWe believe this approach can also be very useful in applications in reinforcement learning, particularly ones that already rely on replay memory particularly ones with changing environments (e.g. Rolnick 2019). This however is beyond the scope of the current work\\n\\nRegarding the ablations we have extend the paragraph discussing this to give more insight. We have also corrected the typo you mentioned along with generally revising the text in the manuscript (see General comments for more details). Note the issue regarding Algorithm 1 reference: it was referenced as Algo 4. This has been corrected, see general comments.\"}",
"{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for your review. We respond to each point in turn.\", \"re\": \"Imagenet class selection, we refer to the mini-imagenet dataset. We use the same setup as in the Chaudry et al paper. This is the most complex dataset considered in previous work for continual classification over long sequences. We also note for online continual classification in the shared-head setting (task id not available at test time) most existing methods fail (see e.g. https://arxiv.org/abs/1908.04742). We do not see any reason our results would not extend to larger number of classes and longer task sequences. Indeed for continual classification SQM combined with ER or ER-MIR should scale better than other method for longer sequences, as the representations and encoder/decoder learned online become more stable with longer data streams, and thus representational drift due to changing decoder becomes less of an issue.\\n\\nWe have corrected many typos and generally revised the manuscript. Note that Algo 1 was mis-referenced as Algo 4, due to a typo in latex.\"}",
"{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for your review. We respond to each concern in turn below:\", \"re\": \"For the previous Fig 3, we note that while the edges of the reconstructions as not as smooth, the key components, such as cars and other obstacles, are fully visible and placed correctly.\\n\\nWe added the definition of BITS \\n\\nPlease let us know if there is further clarifications that can be made.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The study tackled the problem of limited storage for ever-growing data for a long-term learning scenario. The authors proposed to stack Quantization Modules while separating them during training to obtain an online compression system that has multiple resolutions, different memory horizons, and reduced catastrophic forgetting. They also proposed a modified reservoir sampling to accommodate this architecture.\\n\\nThe idea is very simple yet interesting, the paper is a good read, and the results seem promising. The ablation study is well designed but not discussed enough. Additionally, the experiments cannot support the idea well since it is on a very special setting, the LiDAR experiment is missing quantitative evaluation, and different tasks (such as text classification, or visual tracking with only one labeled sample) might introduce different difficulties in this online learning setting. I recommend a weak accept for this paper to encourage the idea. \\n\\nTherefore, I would recommend the authors to explore other tasks and see if their idea applies to different domains and tasks. Also, a quantitative evaluation for the LiDAR experiment with enough details and some explanation of the inner dynamics of the system during learning seems essential. \\n\\nThe paper could enjoy a pass of proofreading and typesetting (especially please pay attention to the correct use of \\\\cite{} and \\\\citep{}). Algorithm 1 is not mentioned in the body of the manuscript.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper presented a Stacked Quantization Modules (SQM) for the problem of Online Continual Compression, based on the VQ-VAE framework by van den Oord et al. (2017). Experiments were conducted on online continual image classification benchmarks to show the effectiveness of the proposed SQM. In general, the novelty of the paper is a little bit limited and the writing of the paper is not very easy to follow.\", \"The SQM was constructed by stacking the known VQ-VAE. It is unclear why the stacking works for online continual compression. How many stacks should be used? What are the yellow rectangle parts in Figure 1?\", \"What are the relationship between Alg 1-4 ? More explanations or discussions are necessary.\", \"In Section 3.1, \\\"The high level training of the online learned compression is described in Alg. 4.\\\". It is very confused. I can't see the related content in Alg.4.\", \"In Section 4.1, \\\"In short, we apply Algorithm 4, with an additional online classifier being updated at line 13.\\\" I don't understand it. I cannot see line 13 in Algorithm 4, because there is only 10 lines in Algorithm 4.\", \"In Section 3.3, BITS(.) needs definition.\", \"In Section 4.1, \\\"Here we consider the more challenging shared-head setting, where the model is not informed of the task (and thereby the subset of classes) at test time. This is in contrast to other (less realistic) CL classification scenarios where the task, and therefore subset of classes, is provided explicitly to the learner Farquhar & Gal (2018).\\\" It is very difficult to understand what the above experimental settings are.\", \"For Figure 3, the textures or lines of the bottom reconstructed one are not so smoothed or straight as the top one.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper focuses on the problem of continual learning with limited memory storage. Specifically, the training data is arrived sequentially (might not be i.i.d.) for a model to exploit and there is not enough storage capacity to keep all the data without compression. This problem is important in many real-world applications with massive amount of data collected. The authors propose an approach named Stacked Quantization Modules to compress the data so that they can be stored efficiently. Each module is an auto-encoder with quantized latent representations. Several aspects including the communication between these stacked modules, and which level will a specific sample be compressed at, are taken into account in the algorithm design. In the experiments, the authors show some quantitative evaluations on CIFAR10 and ImageNet that the proposed method surpass several baseline methods. A qualitative visualization of LiDAR data reconstruction is also demonstrated. Overall I think the paper is tackling an interesting problem with an effective and novel solution.\\n\\nI have a few concerns that I wish the authors could help to clarify. First, in the VQ-VAE, each image is quantized to be H*W*D, where each D-dimensional vector is represented by the index of the nearest neighbor in the embedding table of each module. I checked the paper but could not find a place that discuss how this embedding table comes from. It is pre-defined with some pattern or is it learnt somehow? \\n\\nWhat is the latent space size of each module when trained on CIFAR10 and ImageNet? \\n\\nThe experiments on ImageNet only select 100 classes out of the 1000 classes. Would this method extends to large-scale datasets? How would the form of the tasks (in case of number of classes per task) affect the results? \\n\\nThere seems to be some typos. For example, the end of the first paragraph of Sec. 4.1 mentioned \\\"line 13\\\" of Alg. 4, which is not referred correctly as Alg. 4 only has 10 lines.\"}",
"{\"comment\": \"Hi,\", \"you_can_find_the_anonymized_code_to_replicate_our_experiments_here\": \"https://github.com/StackedQuantizationModules/stacked-quantization-modules\", \"title\": \"Code Release\"}"
]
} |
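For the SQM record above, Review #4 asks what the stacked modules actually encode. A minimal sketch of the nearest-neighbour quantization step in a single VQ-VAE-style module may help; the stacked variant described in the abstract would feed the resulting index map (or its embedding) to the next, lower-bitrate module. All shapes, the codebook size, and the random inputs are assumptions for illustration, not the paper's configuration.

```python
# Nearest-neighbour quantization step of one VQ-VAE-style module (sketch).
import numpy as np

rng = np.random.default_rng(0)
K, D, H, W = 8, 4, 2, 2           # codebook size, code dim, latent height/width (assumed)
codebook = rng.normal(size=(K, D))
z_e = rng.normal(size=(H, W, D))  # stand-in for an encoder output

# For each spatial position, pick the nearest codebook entry.
d2 = ((z_e[..., None, :] - codebook) ** 2).sum(-1)  # squared distances, shape (H, W, K)
idx = d2.argmin(-1)                                 # integer indices, log2(K) bits each
z_q = codebook[idx]                                 # quantized latent fed to the decoder

print(idx)  # what an SQM-style memory would store at this level
```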
S1lHfxBFDH | Gumbel-Matrix Routing for Flexible Multi-task Learning | [
"Krzysztof Maziarz",
"Efi Kokiopoulou",
"Andrea Gesmundo",
"Luciano Sbaiz",
"Gabor Bartok",
"Jesse Berent"
] | This paper proposes a novel per-task routing method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, routing networks can be applied to learn to share each group of parameters with a different subset of tasks to better leverage tasks relatedness. However, this use of routing methods requires to address the challenge of learning the routing jointly with the parameters of a modular multi-task neural network. We propose the Gumbel-Matrix routing, a novel multi-task routing method based on the Gumbel-Softmax, that is designed to learn fine-grained parameter sharing. When applied to the Omniglot benchmark, the proposed method improves the state-of-the-art error rate by 17%. | [
"routing",
"parameters",
"flexible",
"novel",
"learning",
"applications",
"neural networks",
"knowledge",
"different tasks",
"parameter sharing"
] | Reject | https://openreview.net/pdf?id=S1lHfxBFDH | https://openreview.net/forum?id=S1lHfxBFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Uuy-JaHvi",
"B1ltq287sH",
"B1lHxjIXir",
"rJlVz_rXor",
"rker7VmRKr",
"SJlvY-bRYS",
"Syl8rm7atB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742360,
1573248145506,
1573247725484,
1573242891590,
1571857436588,
1571848574616,
1571791678456
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2172/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2172/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2172/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2172/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2172/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2172/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposed to use Gumbel softmax to optimize the routing matrix in routing network for multitask learning. All reviewers have a consensus on rejecting this paper. The paper did not clearly explain how and why this method works, and the experiments are not sufficient.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review response\", \"comment\": \"We thank the reviewer for valuable comments and suggestions. Our responses to specific points are provided below.\\n\\n1) Extensiveness of experiments\\n\\nWhile our method is compared with the SotA only on Omniglot, we also included several other experiments (MNIST, synthetic data), which were aimed at better understanding the behavior of our approach. We believe that these three lines of experiments together paint a relatively broad and convincing picture.\\n\\n2) Sparsity of different routing methods\\n\\nWe can control the sparsity level learned by our method by using the budget penalty, similarly to how it can be controlled in the sparse Mixture-of-Experts paper (P. Ramachandran et al, ICLR 2019) by varying the value of 'k'.\\n\\nFor Omniglot, we do not use the budget penalty, so the learned solution is indeed not very sparse. The previous SotA based on a Mixture-of-Experts imposed a sparsity level of activating approximately 60% of connections, which is again not very sparse, although on average a little more than the solutions found by our method. However, note that this prior work did not mean to trade-off sparsity for accuracy, and neither did we; the other results reported for Omniglot are not sparse and not even based on routing. Therefore, results that we list in our paper (Table 2) include a variety of methods, none of which tried to trade-off accuracy for anything. We believe this constitutes a fair comparison.\\n\\n3) Definition of different static sharing patterns\\n\\n\\u201cFull sharing\\u201d essentially means the shared bottom pattern i.e., all tasks share the same bottom layers and the latter are followed by task-specific heads. In our allocation matrix view, this corresponds to setting the matrix to be all ones (i.e. every task uses every component).\\n\\n\\u201cNo sharing\\u201d means that the network is divided, with each task getting to train a separate set of parameters. In particular, the reviewer asked about the single-task training for MNIST: the non-sharing pattern is essentially that, since each task gets to independently train 1/4 of the network.\"}",
"{\"title\": \"Review response\", \"comment\": \"We thank the reviewer for the valuable comments. Our responses to specific comments are provided below.\\n\\n1) Novelty of our method\\n\\nWe agree that the Gumbel trick and the Gumbel-Softmax routing method is not new. In this work, we propose a new method for multi-task learning and not a new routing method.\\n\\nWhile Gumbel-based routing has been already applied to multi-task learning, we claim that our formulation (in its full form) is novel for the following reasons:\\n- We learn flexible parameter sharing among tasks by learning binary allocation matrices indicating how each component is allocated to each task. This is in contrast to previous works, which typically consider routing with a sequence of decisions \\u201cwhere to route\\u201d. We argue that our formulation is more natural for multi-task learning, as it provides an explicit way to control parameter sharing between tasks (depending on their relatedness). Right now we condition on the task id, but in the future we envision conditioning on task embeddings, which can better capture the relatedness of the tasks.\\n- Moreover, we also introduced ways to regularize our method such as the budget penalty (see Section 4.4) that promotes sparsity of the allocation solution.\\n\\nSince the proposed method is a new method for multi-task learning (and not a new routing method), we argue that evaluating different routing solvers (such as REINFORCE or RELAX) goes beyond the scope of this work. However, it is a very interesting direction and it will definitely be the focus of our future work. As per reviewer\\u2019s suggestion, this may further improve the results.\\n\\n2) Hard vs soft routing decisions\\n\\nPlease note that our method does use hard decisions, since we use the Straight-Through variant of the Gumbel-Softmax trick (the original Gumbel-Softmax paper introduces both the soft variant, and the Straight-Through variant). If a connection is sampled to be inactive, the corresponding component will not contribute to the output and therefore will not get gradients. It will only be used to compute the gradient for the per-connection routing probability.\\n\\n3) Comparing apples to apples\\n\\nWe believe that the Omniglot experiment is an apples-to-apples comparison, since we re-used the same architecture that achieved the previous SotA (\\u201cDiversity and Depth in Per-Example Routing Models\\u201d, ICLR 2019). We made sure that we reproduced all the details by contacting the authors of that prior work; we also used the same regularization strategies and the same optimizer. The only difference is the routing method.\\n\\n4) Scalability\\n\\nIt is indeed the case that due to the use of Gumbel-Softmax, the backward pass needs to activate all of the components of the model. Hence, the training phase of our method is more expensive than for other sparse baselines (such as the sparsely-gated mixture-of-experts). \\nHowever, it is important to note that inference phase of our method is pretty scalable, since it uses hard decisions (and the budget penalty promotes even sparser solutions). Therefore, we argue that our method is practical for many multi-task learning applications.\"}",
"{\"title\": \"Review response\", \"comment\": \"We thank the reviewer for the valuable comments. We are open to any further suggestions for improving our paper. Our responses to specific comments are provided below.\\n\\n1) Routing patterns for Omniglot\\n\\nFor the Omniglot experiment, it was indeed the case that discarding unwanted pooling layers was one of the clearest trends learned by our method. However, as pointed out in the paper, there were still important differences in allocation patterns corresponding to different tasks. Specifically, we grouped the tasks based on the pattern (i.e. the concatenation of all binary routing matrices), and we found around 10 groups on average, while the number of tasks was 20. Notice that differences in patterns may result in arbitrarily large differences in outputs.\\n\\n2) Routing patterns for MNIST\\n\\nIn the case of no budget penalty, the routing commonly converged to the pattern of the following form: one pair of MNIST tasks would use all components (12 components, since there were 3 layers of 4 components each), while the other pair would use all but one component (11 components). Since MNIST and MNIST-rot are still highly related, this shows that the model preferred almost full sharing, except for dropping a single connection to allow for processing the first pair of tasks differently than the second.\\n\\nWith budget penalty enabled, each pair would usually use three out of four components in each layer, exactly matching the budget of 75% active connections. Note that the resulting accuracy was the same with and without the budget penalty.\\n\\n3) Magnitude of gains on Omniglot over the full-sharing baseline\\n\\nEven though the improvement on top of full sharing for Omniglot is not very large, full sharing is actually a pretty strong baseline; even stronger than previous SotA based on a sparse Mixture-of-Experts (P. Ramachandran et al, ICLR 2019). Our interpretation of this result is that in the case of limited data (Omniglot has very few samples per class), it is hard to learn task-specific routing without incurring an accuracy drop due to optimization difficulties. Since our routing method managed to learn task-conditioned routing and improve the accuracy, while the methods from previous works did not, we consider our Omniglot result to be a strong one.\\n\\n4) Other comments\\n\\nWe are happy to move the result from Appendix A.3 to Section 2, if that helps the paper.\\n\\nAlso, the reviewer proposed considering the case of limited data and generalization. However, note that Omniglot might already be seen as such a case, and our experiments show that Gumbel-Matrix routing does produce solutions that generalize well.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper applies the Gumbel-softmax to optimizing task-specific routing in deep multi-task learning. Experiments demonstrate improvements of the method over no sharing or full sharing, and it is used to achieve s-o-t-a results in the Omniglot MTL benchmark.\\n\\nAlthough the end results are good, and the approach is well-motivated, I am leaning to reject, because the experiments have not made clear when the method works and how it behaves. The improvements over the full-sharing baselines appear fairly small, and in the analysis it appears the model is mainly discarding unneeded pooling layers. Is there some real task-specific routing that the method is able to take advantage of? Maybe an experiment where full-sharing is detrimental, i.e., because there are some highly unrelated tasks, would help to highlight how the approach selects appropriate module subsets for each task. E.g., what are the routing patterns in Section 6.1 that are the same within each pair of MNIST tasks, but different across task pairs? Is there a way to visualize differences between routing of different Omniglot tasks?\\n\\nSimilarly, the experiment in Section 2 is interesting, but the conclusion that negative transfer exists is not novel. Is there a way to include the Gumbel approach in these synthetic experiments to show that it addresses this issue? E.g., something like the result in A.3 could be promoted to Section 2. More compelling synthetic datasets could be generated by the method in A.1. for the case where tasks are somewhat related, in which case we can actually see if how the sharing occurs. Could Gumbel see a bigger boost in these synthetic experiments if training data were limited and generalization was tested instead of training loss?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In many ways this work is well presented. However, I have major concerns regarding the novelty of the proposed method and the theoretical rationale for the key design choices. Although the authors do cite and discuss (Rosenbaum et al., 2019), what is very much not clear to me is how the Gumbel-Matrix Routing proposed in this work differs from past work using the Gumbel Softmax within routing networks. It seems like past work even focused on using only the task for routing, so it is not clear to me how the approach here is really novel in comparison. Even if there is some distinction I am missing, the high level idea is clearly not that new. Additionally, there is not much theoretical discussion about what the Gumbel Softmax adds to routing networks.\\n\\nThe bias/variance tradeoff of Gumbel Softmax / RELAX / REINFORCE was already highlighted in (Rosenbaum et al., 2019). Can the performance of the model on the settings tested be attributed to this tradeoff? If so, would a RELAX model perform even better? Moreover, there is not much discussion of important implications of using the Gumbel Softmax trick in the context of routing. First, as the authors acknowledge, but don't really elaborate on, using the Gumbel Softmax means we must backprop through every possible routing choice in each layer. As a result, the Gumbel approach results in a large scaling of computation with the number of modules, limiting the applicability to more ambitious settings. Moreover, while a clear motivation of this work is eliminating interference between tasks, it is not really explained how Gumbel Softmax does this and how it compares to hard routing decisions in this respect. During backprop, the computation it very similar to mixtures of experts models, and should contain more interference than hard routing. Can you explicitly show that the shape of the Gumbel distribution results in less interference between modules during learning than the standard mixtures of experts softmax approach? \\n\\nFurthermore, (Rosenbaum et al., 2019) found that a number of RL based models outperform Gumbel Softmax when routing on multi-task settings of CIFAR-100 and the Stanford Corpus of Implicatives. The authors do not provide any explanation for why this approach did not succeed in their settings. This also leads me to doubt how impressive the results presented here are as there is really not any apples to apples comparison with the same architecture and different routing decisions. In Tables 1 and 2 the best baseline is full sharing. This indicates to me that the performance difference with other cited baselines has to do with different architecture choices and not changes in the routing policy itself. The experiments can be much improved by discussing why past approaches to Gumbel based routing have failed and by thoroughly comparing to other methods for just the routing decisions with the same base architecture as done in prior work. Unfortunately, in its current form, there is not enough context provided for the community to understand the implications of the proposed approach in the submitted draft even though it achieves good performance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes to learn the routing matrix in routing networks for multi-task learning (MTL) using the gumbel softmax trick for binary random variables. It makes the model amenable for training the network and the routing matrix simultaneously, which is a relatively easier and unified training procedure compared to the original routing networks. The gumbel softmax trick technique is pretty standard. The proposed method is evaluated on two MTL datasets with comparisons to baselines on one of them.\\n\\nIn terms of methodology, using gumbel trick for learning routing matrix seems new to my knowledge. Although the trick has been applied to other problems and is used in a standard way. I like the idea of using this trick to make the learning of routing network unified under optimization compared to the learning in the original routing network. \\n\\nHowever, the experiments seem not extensive enough to demonstrate its superiority and efficiency. The method is only compared with other state of the art methods on one dataset. More experiments on various datasets and neural network architectures will be more convincing to me. I am also interested in how does the sparsity of the different routing models compare to each other? It would be unfair if some models trade performance for sparsity compared to the method proposed in this paper. Also it would be interesting to see how the learned routing matrix pattern could say something about the relatedness of different tasks.\\nRegarding \\\"full sharing\\\", is it different tasks trained together with the same network? \\nAnd another minor question for the experiments on MNIST, what are the accuracies for single task learning using same architecture?\\n\\nOverall, I find the idea of using gumbel trick for learning routing networks interesting. However, I feel the experiments are not sufficient and I would encourage the authors to conduct more experiments and comparisons.\"}"
]
} |
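For the Gumbel-Matrix record above, the sampling step the author responses describe (a Straight-Through Gumbel-Softmax over a binary task-by-component allocation matrix) can be sketched as follows. The sizes, temperature, and use of plain numpy are illustrative assumptions; the straight-through gradient is only described in comments, since numpy has no autograd.

```python
# Sampling a binary task-by-component allocation matrix with the
# (binary) Gumbel-Softmax trick (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
T, C, tau = 4, 3, 1.0                # tasks, components per layer, temperature (assumed)
logits = rng.normal(size=(T, C, 2))  # per-connection logits for {off, on}

g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
soft = np.exp((logits + g) / tau)
soft = soft / soft.sum(-1, keepdims=True)             # relaxed Bernoulli per connection
hard = (soft.argmax(-1) == 1).astype(float)           # hard 0/1 allocation, shape (T, C)

# Straight-through variant: the forward pass uses `hard`, while the backward
# pass would use the gradient of soft[..., 1], i.e. hard + soft_on - stop_grad(soft_on).
print(hard)
```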
SJgSflHKDr | The Frechet Distance of training and test distribution predicts the generalization gap | [
"Julian Zilly",
"Hannes Zilly",
"Oliver Richter",
"Roger Wattenhofer",
"Andrea Censi",
"Emilio Frazzoli"
] | Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when training and test distribution differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution. We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is. Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set. Complementary to the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes. | [
"Generalization",
"Transfer learning",
"Frechet distance",
"Optimal transport",
"Domain adaptation",
"Distribution shift",
"Invariance"
] | Reject | https://openreview.net/pdf?id=SJgSflHKDr | https://openreview.net/forum?id=SJgSflHKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Mrprk-YZZ6",
"B1goCQ9J9B",
"ryx1DuaTKr",
"H1euOzraKS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742331,
1571951571384,
1571833942564,
1571799664291
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2171/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2171/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2171/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors discuss how to predict generalization gaps. Reviews are mixed, putting the submission in the lower half of this year's submissions. I also would have liked to see a comparison with other divergence metrics, for example, L1, MMD, H-distance, discrepancy distance, and learned representations (e.g., BERT, Laser, etc., for language). Without this, the empirical evaluation of FD is a bit weak. Also, the obvious next step would be trying to minimize FD in the context of domain adaptation, and the question is if this shouldn't already be part of your paper? Suggestions: The Amazon reviews are time-stamped, enabling you to run experiments with drift over time. See [0] for an example.\\n\\n[0] https://www.aclweb.org/anthology/W18-6210/\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers the problem of how the mismatch between distributions of training data and test data would affect the generalization gap in machine learning tasks. This phenomenon has been observed many times in previous literature and has gathered significant attention in the machine learning community.\\n\\nThe paper took a step in relating the change in the performance of the learned function to the Frechet distance (FD), also known as 2-Wasserstein distance, between the input and output distributions and proved that the former is lower bounded by the latter multiplied by a term related to the sensitivity of learning algorithm to distribution shift. The paper also provides empirical evidence that the testing error is correlated with the FD between input and output distributions based on tasks including text classification, image classification, and speech separation.\\n\\nI find the idea of the paper interesting but the content not convincing enough. The theory proved in the paper does not provide additional quantitive insight beyond intuition. Specifically, the term about the sensitivity of the algorithm is not justified enough in the paper. The experiments provide some evidence but not convincing, especially for the part about image classification.\\n\\nI also find the statement about the generalization gap a bit misleading. Generally, the generalization gap refers to the gap between the expected error and the empirical error. But the experiments are mostly presenting the performance on the test data. \\n\\nOverall, I don't think the paper meets the standard for publication at ICLR.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose to relate the performance of a classifier under distribution shift using a quantity called Frechet distance. It is common belief that the further apart the training and test distributions are, the more difficult it is to transfer a learned classifier. They give simple bounds via gradient norm/Lipschitz constants and distribution distance in Theorem 1. The authors try to capture it with Frechet distance, but I struggle to understand what is new in this work.\\n\\nFirst, there are a lot of assumptions in the computation of the Frechet distance: \\n 1. The authors use the embeddings given by the neural networks instead of the raw data since density estimation is hard. This makes the distance model-dependent \\n 2. The authors assume the embeddings are normally distributed in their computation, which have not been justified. \\n\\nMost importantly, they do not relate the Frechet distance to the lower bound in Theorem 1. There is no estimation on how the learned changes across distributions in the gradient norm term. This makes the evaluation nothing more than a confirmation of the general idea that the closer the distribution, the better the transfer. The lower bound is not used in any quantitative manner. \\n\\nThe authors should make the connection of the bound and its computation clear, with proper connections to the experiments. The current paper looks like separate theoretical and experimental results that do not tie together.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors consider the relation between Frechet distance of training and test distribution and the generalization gap. The authors derive the lower bound for the difference of loss function w.r.t. training and test set by the Wasserstein distance between embedding training and test set distribution. Empirically, the authors illustrate a strong correlation between test performance and the distance in distributions between training and test set.\\n\\nThe motivation to find the relation between generalization gap and the Frechet distance of training and test distribution is sound. However, I am not sure that the lower bound as in Equation (1) is enough. I am curious that one can derive the upper bound for the relation or not. The finding about choosing a training data distribution should be close to the test data distribution seems quite trivial in some sense. I am not clear about its important since it is quite popular that the distribution shift affects the performance and many learning approach assumes same distribution for training and test data. Overall I feel that the contribution may be quite weak, and I lean on the negative side.\", \"below_are_some_of_my_concerns\": \"1) About the lower-bound in Equation (1), it seems unclear to me that when the W_2(p1, p2) = 0, we can inference any information about the test performance (It seems quite trivial for this case, the left hand side time is greater than or equal 0?) In my opinion, the upper-bound is more important which one can inference much information about the difference of generalization gap.\\n\\n2) In the proof of Theorem 1, it is quite hard to follow with the current notation, for the integral in (i), (ii) as well as in the proof using the intermediate value theorem, which variables are used? I am confused which one is variable, which one is constants in those integrals.\\n\\n3) In page 5, at the interpretation (1), for W2(p1, p2) = 0, the learned function fits training distribution perfectly and is not ill-conditioned ==> why one can deduce that the test distribution is fit perfectly? What we have in Theorem 1 is the lower-bound only?\"}"
]
} |
SJxNzgSKvH | Selective sampling for accelerating training of deep neural networks | [
"Berry Weinstein",
"Shai Fine",
"Yacov Hel-Or"
] | We present a selective sampling method designed to accelerate the training of deep neural networks. To this end, we introduce a novel measurement, the {\it minimal margin score} (MMS), which measures the minimal amount of displacement an input should take until its predicted classification is switched. For multi-class linear classification, the MMS measure is a natural generalization of the margin-based selection criterion, which was thoroughly studied in the binary classification setting. In addition, the MMS measure provides an interesting insight into the progress of the training process and can be useful for designing and monitoring new training regimes. Empirically we demonstrate a substantial acceleration when training commonly used deep neural network architectures for popular image classification tasks. The efficiency of our method is compared against the standard training procedures, and against commonly used selective sampling alternatives: Hard negative mining selection, and Entropy-based selection.
Finally, we demonstrate an additional speedup when we adopt a more aggressive learning-drop regime while using the MMS selective sampling method. | [
"training",
"deep neural networks",
"selective sampling",
"mms measure",
"end",
"novel measurement",
"minimal margin score",
"mms",
"minimal amount",
"displacement"
] | Reject | https://openreview.net/pdf?id=SJxNzgSKvH | https://openreview.net/forum?id=SJxNzgSKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"XcWSXnGZZ",
"B1lvRyksiB",
"H1lCXxqqsB",
"HkgDJXI-5B",
"BJeWmNn3KS",
"rJglmTn5tS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742302,
1573740495220,
1573720102393,
1572066014769,
1571763225134,
1571634455602
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2169/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2169/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2169/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2169/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2169/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a method to speed up training of deep nets by re-weighting samples based on their distance to the decision boundary. However, they paper seems hastily written and the method is not backed by sufficient experimental evidence.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer\", \"comment\": \"Dear reviewer #3:\", \"we_would_like_to_thank_you_for_the_feedback_and_will_answer_the_questions_raised\": \"1) We used CIFAR10 and CIFAR100 to prove our main concept which aims to reduce the number of training steps. We didn't have enough time to include other datasets for the deadline but we plan to add ImageNet to the paper.\\n\\n2) Figure 5 and Table 1 show that for CIFAR10 we reached the final accuracy (with a minor drop of 0.25%) after 28% of required training steps using our method. For CIFAR100 we reached the final accuracy (with a minor drop of 0.07%) after 51% of required training steps using our method. This is a very substantial speed up with a very minor drop in final accuracy.\"}",
"{\"title\": \"Training vs. inference and other comments\", \"comment\": \"Dear reviewer #2:\\n\\nWe would like to thank you for the feedback and the effort involved in running the performance example you presented. We will answer the questions and reply to the comments raised:\\n\\n1) As for the answer regarding our central premise. In order to select the samples using our MMS scheme, we leverage inference concepts that are entirely different from training. Some of the prominent ideas are low precision arithmetic operations when applying quantization, layers fusion like Convolution-BatchNorm (applying the BN running statistics into the convolutions and eliminating the need of performing BN) and weight compression. These concepts are not theoretical as they are being used when building specialized hardware accelerators as T4 (by Nvidia), Goya (by Habana) and TPU (by Google), allowing these devices to be ~10X faster at inference than running training step on a modern GPU. A detailed explanation and performance charts can be seen in Google\\u2019s TPU paper \\u201cIn-Datacenter Performance Analysis of a Tensor Processing Unit\\u201d.\\n\\nAdditionally, for distributed training on large-scale hardware, the advantage of the inference devices is even greater. As the training instances must wait to a gradient reduction across all instances, the inference devices can perform forward passes on multiple instances in full parallelism (i.e. inference is embarrassingly parallel), without the need to wait to any other instance in the system. Thus, it can potentially select more offline examples for our MMS scheme. \\n\\nFinally, more performance benchmarks can be found when referring to Habana\\u2019s site (https://habana.ai/inference/ and https://habana.ai/training/) as the training vs. inference throughput on their hardware is 1650 vs. 15453 images/sec.\\n\\n2) As for \\\"In Figure 2 all methods seem to have similar final performance.\\\". Our main goal is not to improve final accuracy but rather to train less steps than the vanilla training regime. For that purpose, we introduced the early LR drop regime (as seen in Figure 5). We presented the plots in Figure 2 in order to show the relatively large deviation in the validation error as well as the training error. The validation error decrease using our MMS method implies a faster convergence and that a faster regime can be used. We further accept the comment and will move these plots to the appendix as well as mention their purpose in the paper.\\n\\n3) As for cutting CIFAR100 validation error for the early LR drop regime. We decided to end this experiment when the error reaches a sufficient performance w.r.t the baseline training (red). Moreover, we explicitly stated the final accuracy of the baseline and our MMS method in Table 1, showing a drop of 0.07% with almost halving the baseline required steps (from 156K to 80K steps).\\n\\n4) We kindly accept your comment regarding the entropy experiment and will include it with the other methods plot.\\n\\n5) We couldn't run many experiments due to time limitations, but we will make the effort to add STD bars.\\n6) We will expand Table 1 as suggested with the entropy experiment and will add more relevant step information for clarity.\\n7) We will add the ImageNet experiment.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"A new approach is proposed to speed up training in deep models.\\n\\nThe idea is to select sample batches when back propagating the error based on the distance of the prediction foe the sample from the decision boundary. Specifically, we pick points closer to the boundary, i.e., ones that we are less confident about for backpropagation.\\n\\nExperiments are performed comparing the method with Hard negative sampling (HNM) , entropy-based sample selection as well as regular training. Experiments are performed on Cifar10 and Cifar100 datasets. \\nWhy only two datasets, the method is general so there should be more datasets to verify its performance.\\n\\nThe results on Cifar100 in Fig 5 c seems to show that we cannot reach the training accuracy using the proposed method as compared to the other methods. What is the intuition here as to why it happens? In general though since the main goal is to speed up training I do not see very convincing evidence of this in the limited evaluation which seems to be the main weakness here.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a minimal margin score (MMS) criterion to speed up the training of the deep networks.\\n\\nI would vote for a clear rejection of this paper. This submission is a clearly unfinished one. The two biggest problems are as follows\\n\\n1. Lack of a comprehensive discussion on rules for sampling section, please see \\\"Automated Curriculum Learning for Neural Networks\\\". Why previous methods are worse than the proposed one is not clear.\\n\\n2. All experiments are only compared with baseline approaches. In some experiments, the improvements are really marginal (e.g., Figure 2). In these cases, the STD of these curves is not shown, it is not clear whether the improvements are significant or not.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"### Summary of contributions\\n\\nThis paper aims to accelerate the training of deep networks using a selective sampling. \\nThey adapt ideas from active learning (which use some form of uncertainty estimation about the class of the label) to selectively choose samples on which to perform the backward pass. Specifically, they use the minimal margin score (MMS). \\nTheir algorithm works by computing the forward pass over a batch of size B (which is much larger than the regular batch of size b), compute the uncertainty measure for each sample, and only perform the backward pass over the b samples with the highest uncertainty. The motivation is that the backward pass is more expensive than the forward pass, and that by only performing this pass on a subset of samples, computations are saved. \\n\\n\\n### Recommendation\\n\\nReject. The central premise of the paper is unclear, the writing/presentation needs improvement, and the experiments are not convincing. \\n\\n\\n### Detailed comments/improvements: \\n\\n\\nThere is a central premise of the paper that I don't understand: that the forward pass is much cheaper than the backward pass. \\nThis is claimed in the intro by referring to charts that hardware manufacturers publish (but there are no references included), but I don't see why this should be the case. \\nFor a linear network with weights W, the forward pass is given by the matrix-matrix product (rows of X are minibatch samples):\\nY = XW^T\", \"and_the_backward_pass_is_given_by_the_two_matrix_matrix_products\": \"dL/dX = dL/dY*dY/dX = dL/dY*W\\ndL/dW = dL/dY*dY/dW = dL/dY*X^T\\n\\nSimilarly the two operations in the backward pass for convolutional layers are given by a convolution of the output gradients with the transposed weigtht kernels and the input image respectively. \\n\\nPoint being, I don't see why the backward pass should be more than 3x more expensive than the forward pass. A simple experiment in PyTorch confirms this: the code snippet pasted at the bottom shows that the backward pass takes only around 2.6x longer than the forward pass.\", \"fprop\": \"0.009286s\", \"bprop\": \"0.0240s\\nbprop/fprop: 2.5893x\\n\\nIn algorithm 1, it is assumed that b << B. For this to be effective the forward pass would have to be *much* faster than the backward pass for this method to yield an improvement in computation. Can the authors comment on where this justification comes from?\\n\\nI am unclear on what the purpose of Section 4.1 is. This shows that the MMS of the proposed method is lower than the other two, but this should be completely expected since that is exactly the quantity being minimized.\", \"there_are_also_several_unsubstantiated_claims\": \"\\\"Lower MMS scores resemble a better...batch of samples\\\", \\\"the batches selected by our method provide a higher value for the training procedure vs. the HNM samples.\\\", \\\"Evidently, the mean MMS provides a clearer perspective...and usefulness of the selected samples\\\". What does higher value, usefulness, clearer perspective mean?\\n\\nMore generally, it is unclear if there is really any improvement in the final performance from using the proposed method.\\nIn Figure 2, all methods seem to have similar final performance. 
\\nIn Figure 5, is there a reason why the curve for MMS is cut off? How does its final performance compare to that of the baseline method in red? It looks like the baseline might be better, but it's hard to tell from the figure. \\n\\nWhy are the experiments with the entropy measure in a seperate section? Please include them along with the other methods in the same plot, i.e merge Figure 2 and Figure 4.\", \"my_suggestions_for_improving_the_experimental_section_are_as_follows\": \"- include all methods together in all the plots/tables\\n- repeat experiments multiple times with different seeds to get error bars. Include these both in the learning curves and in the tables. \\n- It's hard to see small differences in the learning curves, so including tables as well is important. Include best performance for all the methods in the tables. \\n\\nFinally, in 2019 CIFAR alone is not longer a sufficient dataset to report experiments on. Please report results on ImageNet as well. \\n\\nOne of the central premises of the paper is acceleration in terms of compute/time. To make this point, there should also be results in terms of walltime and floating-point operations. Please include these results in the paper. \\n \\n\\n\\n\\n### Code snippet timing forward/backward passes\\n\\n\\nimport torch, torch.nn as nn, time\\n\\nmodel =\\tnn.Sequential(nn.Linear(784, 1000),\\n nn.ReLU(),\\n nn.Linear(1000, 1000),\\n nn.ReLU(),\\n nn.Linear(1000, 10),\\n nn.LogSoftmax())\\n\\ndata = torch.randn(128, 784)\\nlabels = torch.ones(128).long()\\nt = time.time()\\npred = model.forward(data)\\nloss = nn.functional.nll_loss(pred, labels)\\nfprop_time = time.time() - t\\nt = time.time()\\nloss.backward()\\nbprop_time = time.time() - t\\nprint('fprop: {:.4}s'.format(fprop_time))\\nprint('bprop: {:.4f}s'.format(bprop_time))\\nprint('bprop/fprop: {:.4f}x'.format(bprop_time / fprop_time))\"}"
]
} |
SJxmfgSYDB | Representing Unordered Data Using Multiset Automata and Complex Numbers | [
"Justin DeBenedetto",
"David Chiang"
] | Unordered, variable-sized inputs arise in many settings across multiple fields. The ability for set- and multiset- oriented neural networks to handle this type of input has been the focus of much work in recent years. We propose to represent multisets using complex-weighted multiset automata and show how the multiset representations of certain existing neural architectures can be viewed as special cases of ours. Namely, (1) we provide a new theoretical and intuitive justification for the Transformer model's representation of positions using sinusoidal functions, and (2) we extend the DeepSets model to use complex numbers, enabling it to outperform the existing model on an extension of one of their tasks.
| [
"sets",
"multisets",
"automata",
"complex numbers",
"position encodings"
] | Reject | https://openreview.net/pdf?id=SJxmfgSYDB | https://openreview.net/forum?id=SJxmfgSYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jpCj9mth2O",
"rJg1UyAZoS",
"SyekMJR-jB",
"r1e1JyAWsH",
"BJlEUApWoS",
"BJepYfMAYB",
"BJlFL4VpFS",
"BkxOG9FhKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742273,
1573146439401,
1573146374575,
1573146327145,
1573146188102,
1571852933392,
1571796049394,
1571752463754
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2168/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2168/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2168/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2168/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2168/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2168/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2168/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Main summary: Paper is about generating feature representations for set elements using weighted multiset automata\", \"discussion\": \"\", \"reviewer_1\": \"paper is well written but experimental results are not convincing\", \"reviewer_2\": \"well written but weak motivation\", \"reviewer_3\": \"well written but reviewer has some questions around the motivation of weighted automata machinery.\", \"recommendation\": \"all the reviewers agree its well written but the paper could be stronger with motivation and experiments, all reviewers agree. I vote Reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for taking the time to read and review our paper. We appreciate your feedback and below address your major concern regarding connections to signal processing.\\n\\n> My major concern however is that the authors do not provide any reasoning as to why do we need the weighted automata machinery behind their approach? \\n\\nThere are some connections with signal processing as you mention, however in our system the input and output are not necessarily periodic in our setting. For example, the digit summation problem (task 1) is not periodic in input or output and our system is able to handle this task as effectively as the original DeepSets network. In task 2, the output has a periodic nature to it but the input again is not periodic. \\nThe signal processing connection is perhaps stronger in the Transformer case in which sines and cosines are used in the original encodings. Here the original encoding is a specific form of weighted multiset automaton and it seemed natural in viewing it this way. With this view in mind, we tried further generalizations that this suggests. There may be other ways to view this encoding in a more traditional signal processing manner, but we do not explore that here.\"}",
"{\"title\": \"Review response continued\", \"comment\": \"> Moreover, for the second experiment, the size of LSTM, GRU, deepset seems to be smaller comparing to the complex product layer in the authors\\u2019 architecture (about half of the size). It is true that the authors mentioned that with non-unary automata, the size of the automata is significantly larger, but is this set-up fair for the baselines, e.g. if you use 150 size of LSTM, will it perform equally well to the multiset automata?\\n\\nThe number of parameters for the LSTM and GRU networks is 31,351 and 44,621 respectively whereas in our method the number of parameters is 1,801. This is due to the fact that complex multiplication does not require any learned parameters, so only the embeddings and the final dense layer need to be learned. By contrast, the LSTM and GRU layers themselves have many learned parameters. Overall this results in more learned parameters required by the LSTM and GRU networks compared to ours.\\n\\n> In addition, I have a bit of difficulty understanding why three embedding layers are needed to learn the complex number, is it possible to learn r, a, b jointly with one embedding?\\n\\nSince each complex number is represented as $e^r(a+bi)$, there are three parts learned for each complex number. This could be implemented as a single embedding of size $3n$ for some $n$, but this is effectively the same as concatenating the three smaller embeddings we used in our implementation. We found it easier to keep the embeddings separate prior to the complex multiplication, but there is no difference in computational power.\\n\\n> Is there some real data experiment done for the second application (the extension to deepset), to further showcase the significance of using complex domain?\\n\\nThe two tasks that were used were chosen to demonstrate the effectiveness of our method on handling the type of problem that DeepSets is well suited for (task 1) and a related type of problem that it struggles with (task 2). This seemed like a good way to demonstrate, on a simple problem, one type of behavior that complex numbers are well suited to handle. We do plan to utilize this type of network for more complicated, real world applications in the future.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for taking the time to read and review our paper. We appreciate your feedback and below address questions you expressed.\\n\\n> How is the multiset automata learnt?\\n\\nWe learn the automaton by gradient descent as part of training the larger network. It would certainly be interesting to think about spectral methods as well.\\n\\n> For the first experiment...Is there any potential explanation here? Moreover, what is the advantage of using multiset automata in transformers instead of the original position encoding? \\n\\nWe think the explanation is simply that the \\\"per position, learned\\\" setting has more parameters and is more prone to overfitting. Our representation provides ways of parameterizing position encodings that generalize better. However, as long as overfitting is avoided, the Transformer seems fairly insensitive to the particular choice of position encoding.\\n\\nWe consider this result primarily of conceptual interest, especially for people (like ourselves) who are initially baffled by the use of sinusoidal functions in the original position encoding.\\n\\nIn terms of practical advantages, it's possible that in a situation where learned position encodings are needed, our parameterization might generalize better. For example, BERT uses learned, per-position encodings and some care is required to train them properly (train on sequences of length 128 for 90% of the steps, then sequences of length 512 for 10% of the steps). It's pure speculation, but maybe our parameterization would make this detail unnecessary.\\n\\n> For example, what happens if you don\\u2019t restrict it to be sinusoidal functions? e.g. set the transitions to be diagonal and directly optimize with gradient descent? \\n\\nIf we set the transition matrix to be diagonal and complex, we have to choose a representation for complex numbers. If we use polar coordinates, the result is similar to the line in Table 1 labeled \\\"learned angles.\\\" (This result was not shown, but could be included in a future version of the paper.)\\nWe did not try using rectangular coordinates, as for the units-digit experiment. That is certainly something that could be added to the paper, but we would not expect the results to be very different.\\n\\n> For the second experiment...First how do the authors incorporate multiset automata into the deepset models? In Figure 1, every neural architectures have this embedding layers with diagonal transition matrices, does this mean the multiset automata is applied to encode the input for every architecture? If so is it possible that the use of multiset automata in LSTM, GRU and deepset, is actually hurting the performance? Maybe a comparison with vanilla LSTM, GRU and deepset is also needed. \\n\\nFigure 1 is meant to show the corresponding similar structures in each architecture but does not indicate any change to the three baselines, I apologize if that was unclear. The input to each network is a multiset since addition itself is commutative, but as noted in the description from task 1, they are fed to each network in the order in which they were generated. This imposes an ordering for the GRU and LSTM whereas the input ordering information is discarded by DeepSets and our method. The experiments are run on vanilla LSTM, GRU, and DeepSets networks, with figure 1 meant to show where there are similarities across the systems.\\n\\n(please see next reply for more)\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for taking the time to read and review our paper. We appreciate your feedback and below address questions you expressed.\\n\\n> What is the general recipe for applying this technique to get representations of a multiset?\\n\\nSorry that this wasn't clear. The automaton is nondeterministic, so at each time step it could be in any state. The vector fw(w) of forward weights could be thought of as like a distribution over the state that the machine is in after reading w, except that the values don't have to be probabilities. This vector fw(w) is what we propose as the \\\"general recipe\\\" to represent multiset w.\\n\\n> It is not entirely clear what theoretical results are novel and which proofs are restatements of existing proofs.\\n\\nAgain, sorry for not making this clear. Proposition 1 and Lemma 4 are not novel. To our knowledge, Propositions 2 and 3 are novel, as are the results in the appendices.\\n\\n> In what sense is the diagonal with alternating complex conjugate entries fully general?\\n\\nTo be fully general, we should have put $r_j$ in front of each $\\\\exp i\\\\theta_j$ and $s_j$ in front of each $\\\\exp i\\\\phi_j$; apologies for this error. With that correction in place, the form at the top of page 5 is general in the sense that any real-weighted multiset automaton is close to a complex-weighted diagonal automaton (Prop 3), and because the original automaton had real weights, the diagonal entries that are complex must come in conjugate pairs. Those that are real can be duplicated to form conjugate pairs. Thus, any real-weighted multiset automaton is close to one that can be put into the form shown. We will try to make this clearer in a future version of the paper.\\n\\n> Since there are no confidence intervals it is impossible to draw conclusions from table 1.\\n\\nWe've run bootstrap resampling to compare the other lines against the first line (the original position encodings). Roughly, significance is at about 0.4 BLEU. In the last line (learned per-position), all differences are significant except for Urdu-English. This confirms our conclusion that learned per-position encodings are worse, but the rest are all about the same.\\n\\n> The \\\"units digit of a sum\\\" task seems slightly artificially constructed to be suitable for a network which uses complex numbers. Although this is not a bad thing, it doesn't necessarily validate that complex weighted automata have better representational power. If that was the case, wouldn't we expect better results for other tasks that don't explicitly have a cyclic nature?\\n\\nThe sum task demonstrates that DeepSets and our method are able to outperform LSTM and GRU models on multiset structured input, specifically being able to generalize results to multisets which are larger than were seen at training time. The units-digit-of-sum task is meant to be a simple extension of the sum task to demonstrate that our method can not only represent the same types of data as the DeepSets method, but also represent other behavior such as cycles. We have not run other tasks which don't explicitly have a cyclic nature for which DeepSets obtains less than 100% accuracy.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes generating feature representations for set elements using weighted multiset automata. Experiments show that this leads to better generalization performance in some tasks.\\n\\nI am leaning to reject this paper. The proposed algorithm for generating features seems relevant and correct, but there are shortcomings in the presentation and the experiments are not entirely convincing.\\n\\nIn particular, the paper begins by introducing weighted multiset automata quite clearly, but it fails to explain how exactly these automata would be used to generate set representations. I assumed that the set would be represented as the state of the automaton after processing a string (where each element of the set is a symbol from the alphabet in the string) but in section 4 the different states of the automaton while processing a string are used instead. If this paper proposes a new way of learning representations for sets, I would like to see a general recipe for the application of this idea.\\n\\nReading the paper it is not entirely clear what theoretical results are novel and which proofs are restatements of existing proofs. It would be useful to guide the reader a bit more clearly here.\\n\\nThe second statement in section 4.1 is not clear to me: In what sense is the diagonal with alternating complex conjugate entries fully general?\\n\\nThe experimental results are difficult to interpret. Since there are no confidence intervals it is impossible to draw conclusions from table 1. I am also not entirely convinced by figure 2. The \\\"unit digit of a sum\\\" task seems slightly artificially constructed to be suitable for a network which uses complex numbers. Although this is not a bad thing, it doesn't necessarily validate that complex weighted automata have better representational power. If that was the case, wouldn't we expect better results for other tasks that don't explicitly have a cyclic nature?\\n\\nThe main questions I would like to see answered (and adjusted in the paper) for me to accept this paper would be:\\n\\n* What is the general recipe for applying this technique to get representations of a multiset?\\n* How do the experimental results validate the increased representational power of complex-weighted diagonal automata?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed a complex weights based multiset automata designed to represent unordered data. The main idea of multiset automata is that the transition matrices of the automata is pairwise commutative. To achieve this property, the authors proposed to restrict the transition matrices to be diagonal and shows that the latter is a close approximation of the former. The authors proceed to give two practical applications of the multiset automata: position encoding of the transformer and deepset networks. For the former, the authors showed that the position encodings from Vaswani et al. can be written as a weighted unary automaton and therefore it is a generalization of the original position encodings. For the latter, the authors extended the classical deepset networks into its complex domain, allowing more efficient representation of the data.\\n\\nI think this paper overall did a good job, and I really like the construction of the multiset automata and the theoretical guarantees the authors derived and the two applications are straight-forward to see. However I do find the motivation of this paper is a bit weak, and I\\u2019m having a hard time finding the highlight of the paper. Therefore, I\\u2019m giving this paper a weak accept.\", \"here_are_some_general_comments\": \"How is the multiset automata learnt? For weighted automata, one classical way is to use spectral learning algorithm (see Balle et. al. 2014). In this paper, the learning aspect of the multiset automata was not mentioned. I assume that the authors use some kind of gradient descent to optimize the weights w.r.t the whole networks. However, I do think it\\u2019s important to let the readers know this key step.\\n\\nFor the first experiment on the position encoding. I really like the derivation here and it seems that theoretically, multiset automata should be a generalization of the position encodings. However, the experiments didn\\u2019t show much difference. Is there any potential explanation here? Moreover, what is the advantage of using multiset automata in transformers instead of the original position encoding? Is it the runtime is faster? Cause you only need to compute the diagonal parameters and thus drastically reduce the number of parameters? If so, a comparison of runtime might be useful here to further showcase the advantage of the multiset automata. \\n\\nFor the first application, it is great that the authors shows the connection, but it seems that it just stops at the level of showing the position encodings can be viewed as a multiset automata. For example, what happens if you don\\u2019t restrict it to be sinusoidal functions? e.g. set the transitions to be diagonal and directly optimize with gradient descent? \\nFor the second experiment, I\\u2019m a little confused about the experiment setup and the baselines. First how do the authors incorporate multiset automata into the deepset models? In Figure 1, every neural architectures have this embedding layers with diagonal transition matrices, does this mean the multiset automata is applied to encode the input for every architecture? If so is it possible that the use of multiset automata in LSTM, GRU and deepset, is actually hurting the performance? 
Maybe a comparison with vanilla LSTM, GRU and deepset is also needed. \\n\\nMoreover, for the second experiment, the size of LSTM, GRU, deepset seems to be smaller comparing to the complex product layer in the authors\\u2019 architecture (about half of the size). It is true that the authors mentioned that with non-unary automata, the size of the automata is significantly larger, but is this set-up fair for the baselines, e.g. if you use 150 size of LSTM, will it perform equally well to the multiset automata?\\n\\nIn addition, I have a bit of difficulty understanding why three embedding layers are needed to learn the complex number, is it possible to learn r, a, b jointly with one embedding?\\n\\nIs there some real data experiment done for the second application (the extension to deepset), to further showcase the significance of using complex domain?\\n\\nHere are some writing comments (did not affect the decision):\\nIn page 6, the bullet point \\u201cDiagonal polar\\u201d, what does (#1 above) mean? Same goes for (#2 above) in the latter point. \\nIn \\u201cFull matrix\\u201d, the sentence \\u201c\\u2026, and transition matrix using orthogonal initialization\\u2026\\u201d does not have a verb and is a bit confusing to read. \\n\\nPage 7, second paragraph, line 4, in the brackets. There are two sentences in the brackets, and it feels a bit heavy and it actually says something important. Maybe either put it out or leave it as a footnote?\\n\\nOverall, I like the concept of the multiset automata, but I feel there is a lack of highlights to further showcase this paper. I feel maybe a further investigation of either of these application could make a great paper.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work presents an encoding approach for unordered set input to neural networks. The authors base their approach on weighted finite automata, where in order to absorb unordered sets, they enforce multiplicative commutativity on transition matrices by approximating them as complex diagonal matrices. The authors furthermore provide mathematical references and results to derive bounds for their approximation. They show that positional encoding in Transformer network can be seen as a special case of their multiset encoding scheme, which also generalizes DeepSets encoding from real to complex numbers.\\n\\nThe paper is well-written and easy to follow. The work tries to unify positional encoding in Transformers and Deepsets by establishing a different view to multiset encoding. My major concern however is that the authors do not provide any reasoning as to why do we need the weighted automata machinery behind their approach? Effectively what they do can simply be seen as embedding of inherently periodic multiset elements using periodic functions, which are parameterized by non-linear transformations of data. Such encoding schemes have long been used in signal processing. \\n\\nI might have missed something, but in my opinion the theoretical contribution of the work is rather tangential to the empirical analysis and results presented in the paper.\"}"
]
} |
HkxQzlHFPr | Robust Natural Language Representation Learning for Natural Language Inference by Projecting Superficial Words out | [
"Wanyun Cui",
"Guangyu Zheng",
"Wei Wang"
] | In natural language inference, the semantics of some words do not affect the inference. Such information is considered superficial and brings overfitting. How can we represent and discard such superficial information? In this paper, we use first order logic (FOL) - a classic technique from meaning representation language – to explain what information is superficial for a given sentence pair. Such explanation also suggests two inductive biases according to its properties. We proposed a neural network-based approach that utilizes the two inductive biases. We obtain substantial improvements over extensive experiments. | [
"natural language inference",
"first order logic"
] | Reject | https://openreview.net/pdf?id=HkxQzlHFPr | https://openreview.net/forum?id=HkxQzlHFPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mQm0s8ccJo",
"Byx4K0PJqH",
"rke9JLOAYS",
"r1lxoVPRKB"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742242,
1571942012100,
1571878370186,
1571873943586
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2167/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2167/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2167/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes using first order logic to rule out superficial information for improved natural language inference. While the topic is of interest, reviewers find that the paper misses much of the previous literature on semantics which is highly relevant.\\n\\nI thank the authors for submitting this paper to ICLR. Please take the reviewers' comments, especially recommended references, to improve the paper for future submission.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an approach to treat natural language inference using first-order logic, and to infuse neural NLI models with logical information to be more robust at inference. However, the paper does not contain a single reference to the computational semantics literature, where logical approaches towards semantics were the dominant trend for many years (see e.g. [1, 2]). Indeed, 'neuralising' first order logic has been an active area of recent research ([3] or indeed much of the recent work coming from Sebastian Riedel's group). This is a glaring oversight.\\n\\nThe paper starts by introducing background on first-order logic, and then gives a definition of a 'superficial' predicate, namely one whose extension is not necessary to prove an implication for any collection of background facts. However, by extension, this makes s_1 -> s_2 a tautology, which is the 'true' notion that the authors are looking for. Indeed, if |- (s_1 -> s_2), then for any collection of formulae \\\\Delta then \\\\Delta |- (s_1 -> s_2) (by monotonicity of entailment) and clearly if for any \\\\Delta we have \\\\Delta |- (s_1 -> s_2), we can take \\\\Delta to be the empty set. Finally, the authors show that tautologies are still tautologies under change of predicates (i.e. if we only require logical rules to prove one statement from another, then the extensions of predicates in those statements do not matter).\\n\\nThe authors then use this to motivate two extensions to inference models. One is to 'drop out' word information, and the other is to treat different occurrences of the same word as reflecting the same underlying predicate. The first somewhat transparently forces the model to care less about the exact meaning (i.e. extension in the logical world) of words (indeed, word vectors have been shown to capture extensional information [4, 5]), and so may force the inference model to learn more 'logical' inference rules. Further, the word dropout calculation includes whether the word is in both sentences, which is a strong signal that its extension may not be necessary. However, the second only forces the intuition that different mentions of the same word are likely to be coreferent, which is a weak assumption that models may already pick up. Indeed, it is noticeable that this component seems to be less necessary in the authors' ablation study.\\n\\nIn summary, while I am sympathetic to the aim of grounding neural models in explicit notions of semantics, this paper shows such a lack of awareness of previous literature that I cannot recommend acceptance. \\n\\n[1] The Meaning Factory: Formal Semantics for Recognizing Textual Entailment and Determining Semantic Similarity, Bjerva et al. 2014\\n[2] Natural Logic for Textual Inference, MacCartney and Manning 2009\\n[3] End-to-end Differentiable Proving, Rocktaschel and Riedel 2017\\n[4] Building a shared world: mapping distributional to model-theoretic semantic spaces, Vecchi and Herbelot 2015\\n[5] Deriving Boolean structures from distributional vectors, Kreuzewski et al 2015\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper tried to reduce superficial information in natural language inference (NLI) to prevent overfitting. It utilized the first order logic to explain what is superficial information.\\nThen, it introduced a superficial factors in the existing neural networks. Furthermore, they introduce a graph neural network (GNN) to model relation between premise and hypothesis.\\nIt was evaluated on a bunch of NLI benchmarks including SNLI, SciTail, MNLI etc, showing the effectiveness of the proposed model. \\n\\nThis paper is well motivated and the ideas are interesting. However, there are a few concerns detailed as follows: \\n\\n1. Do these methods only work on small tasks? For example, the big improvement only appears in small tasks such as MRPC and RTE. However, the proposed method experiences performance decreases on large tasks such as SNLI and MNLI. E.g., in CAFE settings, the proposed approach got 75.2/74.7 (matched/mismatched) vs 76.3/76 (baselines). The similar observations are found in MwAN settings on MNLI and SNLI. I\\u2019d like to see some discussion in the paper on this.\\n\\n2. What common logic patterns did the model learn when removing all the superficials? For different relations, e.g., entailment vs contradiction, are these patterns different? This may help the reader understand whether the model really filtered these information. \\n\\n3. It requires more discussions on Table 5. E.g., in NLI (2 classes), the random guess should be 50%. But the model performance was 41.4% on CAFE when transferring from RTE to SciTail, which is even worse than random guess. In contrary, when transferring from SciTail to RTE, the model performance was 56.1%, which seems reasonable. I believe more analysis and discussions are required to understand this model.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper uses first order logic (FOL) to help reduce so-called \\u201csuperficial\\u201d information/semantics that is less relevant to the judgement of natural language inference relations. The submission misses the major literature of and comparison to previous work that uses FOL for natural language inference (aka. RTE), for example, [Bos and Markert \\u201805], [Beltagy et al. \\u201813], [Abzianidze \\u201817], among others, as well as work based on natural logic, e.g., [MacCartney \\u201809], which operates directly on parsed sentences. The submission contains little contribution with regard to the exiting work. Key concepts such as \\u201csuperficial semantics\\u201d is vague and not well defined. I do not recommend it for the conference.\\n\\nBos and Markert \\u201805, Recognising Textual Entailment with Robust Logical Inference. \\nBeltagy et al. \\u201813, Montague Meets Markov: Deep Semantics with Probabilistic Logical Form.\\nAbzianidze \\u201917, A Natural Proof System for Natural Language.\\nMacCartney \\u201909, natural language inference (PhD thesis)\"}"
]
} |
H1gXzxHKvH | Deep Nonlinear Stochastic Optimal Control for Systems with Multiplicative Uncertainties | [
"Marcus Pereira",
"Ziyi Wang",
"Tianrong Chen",
"Evangelos Theodorou"
] | We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton Jacobi Bellman partial differential equations. Such PDEs arise when one considers stochastic dynamics characterized by uncertainties that are additive and control multiplicative. Stochastic models with the aforementioned characteristics have been used in computational neuroscience, biology, finance and aerospace systems and provide a more accurate representation of actuation than models with additive uncertainty. Previous literature has established the inadequacy of the linear HJB theory and instead relies on a non-linear Feynman-Kac lemma resulting in a second order forward-backward stochastic differential equations representation. However, the proposed solutions that use this representation suffer from compounding errors and computational complexity leading to lack of scalability. In this paper, we propose a deep learning based algorithm that leverages the second order Forward-Backward SDE representation and LSTM based recurrent neural networks to not only solve such Stochastic Optimal Control problems but also overcome the problems faced by previous approaches and scale well to high dimensional systems. The resulting control algorithm is tested on non-linear systems in robotics and biomechanics to demonstrate feasibility and outperformance of previous methods. | [
"Deep Learning",
"Stochastic Optimal Control",
"Robotics",
"Biomechanics",
"LSTM"
] | Reject | https://openreview.net/pdf?id=H1gXzxHKvH | https://openreview.net/forum?id=H1gXzxHKvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"z5kJBDSDid",
"rJgTgCqhjr",
"rygvaE5hsS",
"BkxGBZehjS",
"rkg6b0CjsB",
"ByxO66AiiB",
"S1xLIpRjsH",
"BJeaEz_ptS",
"Skg1ZY6utH",
"rJeVbb-bKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798742213,
1573854708854,
1573852350980,
1573810490484,
1573805573502,
1573805503749,
1573805390282,
1571811893436,
1571506423280,
1570996475867
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2166/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2166/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2166/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2166/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2166/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2166/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2166/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2166/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2166/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"A nice paper, but quite some unclarities; it's unclear in particular if the paper improves w.r.t. SOTA. Esp. scaling is an issue here. Also, the understandability is below par and more work can make this into an acceptable submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Additional Clarifications\", \"comment\": \"I believe I need to clarify my points further here.\\n\\n1. A separate experiment is needed for evaluate the effects of a guided second order BSDE for control. As this is the main modification from previous works, it's hard to judge whether seemingly minor change can lead to a significant benefit. It is entirely possible that the authors have a very novel and powerful simple idea, but at this point it's at best unclear. \\n\\n2. The authors claimed their method can be scaled to higher dimensions because they avoid computing a rank 3 tensor (in the state variable only) and hence should be faster. While this claim is generally true when state is very high in dimension, this experiments are only in roughly 10 dimensions, and we would expect little effects from computing a rank 3 tensor. At the same time, given that Han et al. were able to perform control in 100 dimensions, I believe the authors should demonstrate their method in solving a problem of comparable size. \\n\\n3. The main criticism of whether or not the control is better than iLQG depends on whether the two methods have the computational budget. I can always choose very crude step sizes and parameters so that the iLQG's variance is significantly larger than any method. It's important to tune the benchmark method and give it the same computational budget the LSTM has to make a valid comparison.\\n\\nTo raise my score, the authors will have to address these serious questions.\"}",
"{\"title\": \"Clarifying our initial comment on relevance of related work\", \"comment\": \"We understand the reviewer's opinion on this matter. There are different perspectives in terms of what prior work is relevant and what is not. One perspective is related to the general theme of approximate dynamic programming and model-based RL. There exist a plethora of prior work on approximate dynamic programming that is based on function approximations method for model learning such LWPR, Gaussian Processes, Mixture Models and trajectory optimization/optimal control, including the citations proposed by the reviewer.\\n\\nOur perspective however on the relevance with respect to prior work is more specific and relies on the methodological characteristics and similarities of our approach with others. We feel that in order to better facilitate audience\\u2019s understanding of our approach, we choose to familiarize our readers with the recent works that share a similar methodology (FBSDE. 2FBSDE, deep FBSDE, etc.) for stochastic optimal control. This helps us maintain a coherent introduction to this area of research and smoothly transition to the contributions of this paper.\"}",
"{\"title\": \"Disagreement on the relevance of the related work\", \"comment\": \"Just to make one thing clear: I am not affiliated with neither those works nor their authors.\\n\\nBut your work turns a problem (specified by known system equations) into a form tailored towards a certain class of solvers. E2C and RCE do the same, except that the problem is not specified by system equations but through data.\\n\\nThe claim that the work is not related is just wrong, especially given the community that you want to adress. If your work is not related to this method (the major difference being learning the necessary representation from data, which you don't do) I wonder why you send your article to a conference on learning representations anyway.\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"We did not compare against the feedforward network because a detailed comparison was done in Pereira et. al., which demonstrated clear advantages of the LSTM-based architecture. Furthermore, for the time horizon and discretization used in our simulations (150 timesteps vs 30 in Han et. al.), having a different feedforward network at every timestep certainly won't scale in computational time and memory requirements.\\n\\nWe'd also like to clarify that the quadcopter dynamics used in the paper contains 12 states and 4 controls, and the human arm dynamics consists of 10 states and 6 controls.\\n\\nWhen compared against prior work (FBSDE and iLQG), our proposed method achieves a much smaller variance in its trajectories for the quadcotper experiment. For the human arm system, FBSDE can not handle any control multiplicative noise. Compared against iLQG, our method demonstrates better task performance (shown in the phase plot included in the supplementary materials fig. 9), and its performance deteriorates more slowly when the noise present in the system increases (shown in the main paper fig. 5). Therefore, we conclude that the proposed controller demonstrates improved performance over prior methods inspite of added complexity to the network architecture.\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"The control multiplicative noise can be found in bio-mechanical systems [1] and financial problems such as portfolio optimization [2].\\n\\nPereira et. al. has shown that a recurrent network architecture using LSTM outperforms the fully connected networks at every time step proposed by Han et. al. in task completion, space and time complexity. Therefore, in this paper we choose to use an LSTM-based network architecture.\\n\\nAfter performing the comparison of 2FBSDE and iLQG under different levels of additive and multiplicative noise (results shown in fig. 5), we included the phase plot comparison of the two controllers in the low noise condition (multiplicative noise std = additive noise std = 0.1) in the supplementary materials (fig. 9). Under this condition, 2FBSDE can perform the task while iLQG diverges. As the noise level increases, the performance of 2FBSDE deteriorates but still outperforms iLQG.\\n\\nOur cost functions for all tasks were quadratic state (both running and terminal) and control costs. We have added the values of state cost and control cost coefficients for each simulation experiment to the supplementary materials. All the tasks were reaching tasks meaning that the state costs are squared distances from a target state. We refer the reviewer to sections E and F of the supplementary materials for the exact values of the parameters used in our experiments. \\n\\nRegarding suggestions for additional plots and experiments, we have included new plots in the main paper and supplementary materials (see section G).\\n\\n[1] Todorov, Emanuel. \\\"Stochastic optimal control and estimation methods adapted to the noise characteristics of the sensorimotor system.\\\" Neural computation 17.5 (2005): 1084-1108.\\n\\n[2] Davis, Mark, and Sebastien Lleo. \\\"Jump-diffusion risk-sensitive asset management II: jump-diffusion factor model.\\\" SIAM Journal on Control and Optimization 51.2 (2013): 1441-1480.\"}",
"{\"title\": \"Reponse to reviewer #3\", \"comment\": \"We want to clarify that the main contribution of the paper is the introduction of a recurrent neural network architecture tailored to solve a novel representation (second order FBSDEs with control) of the stochastic optimal control problem involving nonlinear systems wherein noise entering the system has a multiplicative effect with the applied controls. We work with known dynamics and full state information, hence the papers [1, 2] mentioned by the reviewer are not relevant to the problem we propose. Our 2FBSDE controller does not extract or rely on a latent representation of the observations.\\n\\nRegarding the exact choice of time discretization, the values were hand-tuned until we observed numerical stability, convergence of training (or optimization in case of iLQG) and reasonable task performance. We would like to reiterate that in the case of iLQG, finer time discretizations were required as compared to FBSDE and 2FBSDE. As far as the linear system time discretization, we used the same value as in Bakshi et al. (2017).\\n\\nSince our framework relies on the transformation of the HJB PDE to 2FBSDEs which in turn relies on mathematical results from stochastic calculus of continuous time systems, it is therefore necessary to start with the continuous time representation and only discretize the problem at the very end.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"**Summary:** The paper contains a method tailored to a certain kind of SOC problems involving multiplicative noise. The central idea is to use a recurrent network to transform the observations into a representation that can be used with solvers specifically tailored towards that class of problems.\\n\\n**Decision:** I recommend to accept the paper for publication.\\n\\n**Arguments for decision:** The paper clearly adresses an important problem and poroposes a method capable of solving it. The method appears to be theoretically founded and the experimental validation seems solid. The relevance of the method is there as the problem class is prevalent in practical applications. The venue is a good fit as well, as the focus is the representation of a control problem in a way that allows more efficient solutions.\\n\\n**Feedback for improvement:**\\n\\n- The type setting could be improved at times. E.g. below equation (1).\\n- I feel that the term \\\"exploration\\\" is overloaded. While it serves as an explicit mean to reduced the sample complexity of methods in RL, it appears to be about avoiding premature convergence in this work. I am too unfamiliar with the relevant SOC literature to judge how well the term fits, but coming from a ML background I stumbled over this expression.\\n- Some of the experimental details, e.g. the exact choice of time discretisation, don't appear motivated well. \\n- The paper needs to respect [1, 2] in the related work and show relations. From the perspective of learning state representations for optimal control, both works are relevant.\\n- Is it necessary to start the discussion from the continuous case? While I appreciate the elegance of starting out with a continuous problem and then discretising at the last step, it felt like a barrier to understanding in my case, as my understanding of continuous optimal control is limited\\u2013and I feel the audience of ICLR might have the same problem.\\n\\n**References:**\\n\\n[1] Watter, Manuel, et al. \\\"Embed to control: A locally linear latent dynamics model for control from raw images.\\\" *Advances in neural information processing systems*. 2015.\\n[2] Banijamali, Ershad, et al. \\\"Robust locally-linear controllable embedding.\\\" *arXiv preprint arXiv:1710.05373*(2017).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"######### Rebuttal Response:\\nThanks for the clarifications and especially for updating the formatting. The current state does not convince me to rate the paper as weak accept but I increased my rating to weak reject. \\n\\n\\\"Pereira et. al. has shown that a recurrent network architecture using LSTM outperforms the fully connected networks at every time step proposed by Han et. al. in task completion, space and time complexity. Therefore, in this paper we choose to use an LSTM-based network architecture.\\\"\\n\\n-> Yes it might be true that a recurrent function approximator does in practice perform better than a feed-forward function approximator. However, in theory a feed-forward network should be sufficient as the value function does not depend on the previous states. Therefore, the question is, why does the LSTM perform better? Does the recurrent nature of the LSTM make the predictions smoother compared to a feed-forward network? Can any other regularizing scheme be introduced s.t. the feed-forward networks performs equally well?\\n\\n######### Review:\", \"summary\": \"The paper builds on the work of Pereira et. al. and uses forward backward stochastic differential equations to learn the Hessian of the Value function Vxx and \\\\partial _t V_x + 1/2 tr(\\\\partial_{xx} V_x CC^T). In contrast to the prior work, this paper introduces multiplicative noise for the control and uses second order optimization. The performance is evaluated on different control tasks, e.g., linear system, cartpole, quadcopter & human arm actuated by tendons.\", \"conclusion\": \"All in all, I like the proposed research of combining theoretical approaches and deep learning to perform trajectory optimization and I would like to see much more of this research like this within the ICLR community. Furthermore, I think that the paper has a contribution and that the paper was improved compared to the initial Neurips submission (i.e., adding ILQG as baseline). However, the writeup and formatting is still very much sub-standard and must be improved to make this paper worth publishing. The current write-up is not accessible for the ICLR community and the understandability must be significantly improved (Details are provided below). Therefore, I currently rate this paper as a clear rejection but I am happy to improve the score to 7-8 if the write up is improved during the rebuttal.\", \"theoretical_structure\": \"I like the introduction, which covers the topic but might be a bit too long. Maybe you want to shorten the introduction and add an additional related work section at the end. The stochastic control introduction is nice and has the correct level of abstraction for the reader. However, the paper introduces many complex concepts which are not essential for understanding the paper (e.g., filtered probability space etc.). One might want to trade off understandability vs. mathematical rigor especially, if the paper does not rely on these concepts. Furthermore, you might want to make eq 1 more explicit as the multiplicative action noise is not visible from eq 1. Section 3 'A FBSDE Solution to the HJB PDE' is the most problematic section of this paper, which is not understandable for the common ICLR reader. Eq. 
6, which just appears without any derivation, is not understandable, and the reader has no intuition for how to derive it. Furthermore, Eq. 6 (page 4) uses notation that is only clearly introduced later in the paper or even in the appendix (e.g., that Y is the propagated value function and Z the propagated gradient of the value function is only mentioned in the appendix, i.e., on page 12; \\Gamma is only introduced on page 5. Yes, Eq. 7 defines these variables, but the style of definition is not standard and one does not expect the variables to be defined this way.). Could the authors please provide an intuitive derivation of these equations and use clearer notation? (Why would one want to abstract V, V_x, V_{xx}, \\mathcal{H}(V_x) in the first place, as these are intuitive for the ICLR community and sufficiently short?) Especially as this section highlights the difference from the prior work of Pereira et al., it should be very clear. Section 4 is clear but should include the loss function, as the loss is non-trivial and essential for the optimization. Currently, the loss description is buried in the appendix. All in all, the theoretical explanation and the bloated notation should be simplified, and every equation should be embedded in an intuitive derivation. Currently these explanations are not understandable without reading the appendix and prior work.\", \"experiments\": \"The experiments apply 2FBSDE to 4 different control tasks (linear system, quadcopter, cartpole & human arm) and compare the performance to the prior work of FBSDE and iLQG. The number of baselines and systems is sufficient. However, the paper should provide more evaluations:\\n\\n(1) Plot the histogram of the obtained cost distributions. \\n(2) Plot a single state and action trajectory (and the action distribution). Using these plots, the level of noise and smoothness, and hence the applicability to physical systems, can be evaluated. \\n(3) Plot the noise-free trajectories and show that these mean trajectories reach the desired solution.\\n(4) Specify the exact cost function for every experiment.\", \"further_comments_to_the_individual_experiments\": \"\", \"cartpole\": \"For the cartpole, iLQG seems to perform much better (it swings up the pendulum faster, deviates less from x = 0, and has a much more coherent velocity compared to 2FBSDE and FBSDE). Could the authors please discuss these aspects in more detail and present experiments with longer time horizons to check whether the proposed method can stabilize the cart at [0, 0, 0, 0]? The current plots don't reach this target state. Your plots also hint that the cartpole does not need to pre-swing the pendulum, which is most likely due to the very low action cost. This choice of action cost significantly simplifies the problem. Could the authors please include a cartpole with a higher action cost and show that 2FBSDE can learn to pre-swing the pendulum?\", \"the_quadcopter\": \"Could the authors please specify the exact quadcopter dynamics? What kind of abstraction did you model? What are the control inputs? In addition, the citation for the dynamics is wrong and puts the supervisor of the master thesis as first author. Furthermore, can the authors please provide longer plots to highlight which method can stabilize the system?\", \"human_arm\": \"For the human arm, neither iLQG nor 2FBSDE reaches the desired target location.
Can you explain why neither trajectory optimizer reaches the desired position?\", \"formatting\": [\"Please rework the formatting such that the inline math does not cause formatting issues such as varying line spacing (e.g., sec. 2.1, sec. 5) and irregular whitespace (e.g., last line of paragraph 2.1 Preliminaries). Please remove the color coding of text for the experiments and make sure that the legends are sufficiently large and include all lines. Currently the legends are missing the target state. You can also extend the figure captions to highlight the conclusion of the plots. Please rework the figures such that they do not cause so much whitespace (e.g., Figures 3, 4 & 5). When reconfiguring the plots, you gain space which can be used for further explanation of the theory. Furthermore, you might want to add dotted lines to the confidence intervals, as the confidence intervals are important but the differences are not clearly visible from the plots. Also, the labelling in figure 3 seems wrong: the axis labeled cart velocity should be pendulum angle, and the pendulum angle axis should be cart velocity. All in all, the formatting can be significantly improved, which is especially bad as this paper is most likely a resubmission from NeurIPS.\", \"Minor Comments / Questions:\", \"'where l: R^{n_x} x R^{n_u} -> R^+ is the running cost and C^{1,2} \\phi: R^{n_x} -> R^+ is the terminal state cost.' Are l and \\phi both of class C^{1,2}, or only one of them? The notation is confusing and should be simplified.\", \"Can you comment on how important this multiplicative noise is in physical systems?\", \"Why are you using an LSTM instead of a simple feed-forward neural network? A feed-forward network should be sufficient to model V(x), as the value function is not recurrent. Have you tried using a simple feed-forward network?\"]}",
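To make the per-step feed-forward vs. shared LSTM question at the end of this review concrete, here is a minimal PyTorch sketch of the two parameterizations under discussion; layer sizes and module names are hypothetical illustrations, not either paper's actual code:

```python
import torch
import torch.nn as nn

class PerStepFF(nn.Module):
    """Han et al. (2017) style: an independent feed-forward head per timestep."""
    def __init__(self, n_x, n_out, n_steps, hidden=64):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(n_x, hidden), nn.Tanh(), nn.Linear(hidden, n_out))
            for _ in range(n_steps)
        ])

    def forward(self, x, t):  # x: (batch, n_x), t: integer timestep
        return self.heads[t](x)

class SharedLSTM(nn.Module):
    """Pereira et al. (2019) style: one recurrent network shared across timesteps."""
    def __init__(self, n_x, n_out, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_x, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_out)

    def forward(self, x_seq):  # x_seq: (batch, T, n_x)
        h, _ = self.lstm(x_seq)
        return self.out(h)     # (batch, T, n_out)
```

Note that the per-step variant's parameter count grows linearly with the horizon (150 steps in the rebuttal above), while the shared LSTM's stays constant; this is the scaling argument the authors make, though it does not answer the reviewer's theoretical question of why recurrence should help at all.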
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary and Decision\\n\\nThe authors in the paper studied the problem of stochastic optimal control using deep learning. The main contributions of this work can be summarized by introducing an additional backward stochastic differential equation (BSDE) over Pereira et al. (2019), where instead of predicting the gradient of the value function (V_x), the authors predicted the values required to compute the Hamiltonian H(V_x) instead. The authors then compared the results against both closed form solutions of optimal control and a numerical approximate controller by Li and Todorov (2007). However, it is unclear whether this particular modification leads to any incremental benefits over Pereira et al. (2019), nor is it compared against the feedforward networks of Han et al. (2017). Furthermore, one of the main benefits of this modifications the authors claimed is scalability to higher dimensions, yet we do not have any experiments in more than 6 dimensions in the quadcopter case. \\n\\nWhile I believe the authors have a promising idea, the current paper do not provide enough justification to demonstrate an improvement. Therefore I recommend a weak reject. \\n\\n\\nBackground\\n\\nIn the classical optimal control set up, it is often very difficult to recover a solution in closed form. Yet at the same time, current numerical methods are far from satisfactory. There are two main approaches to numerically solving optimal control: \\n1. Numerically solve the Hamilton-Jacobi-Bellman (HJB) equation for the value function, which is difficult both due to non-linearity and curse of dimensionality. \\n2. Use the forward-backward stochastic differential equation (FBSDE) representation of the solution for the HJB equation, but simulation of the backward equation is difficult due to requirement of meeting a terminal condition. \\n\\nHan et al. (2017) worked around the simulation difficulty by instead training a feedforward network to minimize the error of terminal condition. In the same paper, Han et al. showed this method can solve an HJB equation in 100 dimensions. Pereira et al. (2019) extended this idea to recurrent networks. Bakshi et al. (2017b) introduced the second BSDE to encourage more controlled exploration, which is used in the current paper under review. \\n\\nThe ultimate goal of this line of work is to solve highly complex non-linear stochastic control problems, where we have no hope of recovering an optimal in closed form. Therefore, any method with efficient approximate computation is highly desirable. In particular, methods involving an approximation with deep networks are quite promising and deserve further exploration. \\n\\n\\nDetailed Comments \\n\\nThe authors in this paper essentially combined the idea of Bakshi et al. (2017b) and Pereira et al. (2019) in an attempt to improve both existing papers. Having a guided second order controlled BSDE will encourage further exploration, and intuitively this change should make training easier. Furthermore, by introducing the prediction of the quantity \\\\Omega_t, the authors avoid computing a rank 3 tensor. Hence, I believe this is a promising idea and deserves to be implemented and carefully studied. \\n\\nThe main concern of this work is that it's not clear whether this method is working. 
Essentially, I would like to see that despite introducing more complexity to the network compared to Pereira et al. (2019) and Han et al. (2017), the resulting network can still be trained to achieve improved results. It is very unsatisfying to only compare against a closed-form solution and a simple approximate control. These experiments can serve to show that the trained network is not behaving erratically, but we cannot conclude any improvements. In fact, even when compared against the approximate controller iLQG by Li and Todorov (2007), it is unclear if the current method is better when given the same computation budget.\\n\\nThe other main concern is scalability to higher dimensions. While computationally cheap, it is unclear whether or not predicting the quantity \\Omega_t does in fact work in high dimensions. This concern is mainly driven by the fact that Han et al. (2017) were able to solve control problems in 100 dimensions, where computing the rank-3 tensor becomes costly. In this case, comparison against the closed-form solution is sufficient to determine whether or not predicting \\Omega_t works in high dimensions.\\n\\n\\nMinor Additional Comments\\n\\nThere are more minor points I would like to make, but these do not contribute to the review decision.\\n1. On page 3, below equation (1), I believe C is a map [0,T] x R^{n_x} x R^{n_u} -> R^{n_x x (n_w + 1)}.\\n2. On page 3, below equation (2), it is strange to see the notation C^{1,2} for a function of x only; in the stochastic control and PDE literature, C^{1,2} typically denotes a function u(t,x) once differentiable in t and twice differentiable in x.\\n3. On page 3, below equation (3), do we also require the matrix R to be symmetric?\\n4. On page 3, for equation (4), it should be mentioned at some point in this paper that we can only recover an HJB equation without a sup term because the loss \\ell(x,u) is quadratic in the control u. In general, the HJB equation and the FBSDEs can be much more difficult to work with.\\n5. On page 4, it would be more satisfying to cite a more complete collection of works on stochastic control and FBSDEs. In particular, Bismut (1976) and Pardoux and Peng (1990) are seminal works that led up to El Karoui et al. (1997). It would also be nice to briefly mention the large literature on PDE methods for control.\\n6. On page 5, the notation \\psi and \\zeta was not introduced. I interpreted from context that these were parameters for predicting \\Gamma and \\Omega, but it would be nice to have a definition.\"}"
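As context for the terminal-condition training idea attributed to Han et al. (2017) in this review, here is a minimal, hedged PyTorch sketch of a deep-BSDE-style loss: the state is rolled forward with Euler-Maruyama, Y is propagated with network-predicted Z, and the mismatch with the terminal condition is penalized. The driver h_gen, dynamics f and sigma, terminal cost phi, and network z_net are hypothetical placeholders, and sign conventions may differ from the papers discussed:

```python
import torch

def bsde_terminal_loss(x0, y0, z_net, f, sigma, h_gen, phi, dt, n_steps):
    """Deep-BSDE style objective: propagate (X, Y) forward and penalize the
    mismatch with the terminal condition Y_T = phi(X_T).

    x0: initial states, (batch, n_x); y0: learnable initial value, (batch, 1).
    z_net(t, x) predicts Z_t (roughly sigma^T V_x); h_gen is the BSDE driver
    in dY = -h dt + Z^T dW.
    """
    x, y = x0, y0
    for k in range(n_steps):
        t = k * dt
        z = z_net(t, x)                                   # (batch, n_w)
        dw = torch.randn_like(z) * dt ** 0.5              # Brownian increments
        y = y - h_gen(t, x, y, z) * dt + (z * dw).sum(-1, keepdim=True)
        x = x + f(t, x) * dt + (sigma(t, x) @ dw.unsqueeze(-1)).squeeze(-1)
    return ((y - phi(x)) ** 2).mean()
```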
]
}